[HN Gopher] Why Is the Human Brain So Efficient? (2018)
       ___________________________________________________________________
        
       Why Is the Human Brain So Efficient? (2018)
        
       Author : rcshubhadeep
       Score  : 175 points
       Date   : 2020-06-05 09:21 UTC (13 hours ago)
        
 (HTM) web link (nautil.us)
 (TXT) w3m dump (nautil.us)
        
       | tehsauce wrote:
        | Why is the human brain so inefficient? It takes years just for
        | it to compute the SHA-256 of this media file.
        
         | [deleted]
        
       | dtnewman wrote:
        | I imagine a group of dogs sitting around and asking "How are we
        | _so_ good at thinking about fun ways to play with squeaky
        | toys?".
        | 
        | The truth is that our ability to reason about ourselves is
        | limited by our ability to reason. Perhaps there are aliens out
        | there who would laugh at our cognitive abilities--theirs being
        | so much better than ours.
        
         | Shorel wrote:
          | >Perhaps there are aliens out there who would laugh at our
          | cognitive abilities--theirs being so much better than ours.
          | 
          | Yeah, but their brains would either be much bigger and/or use
          | a lot more energy, or they would have a fundamentally
          | different architecture (i.e. they are manufactured instead of
          | evolved).
          | 
          | Given the amount of perception/calculation our brains
          | perform, and the hard constraint of being a biological
          | process, we have pretty much fantastically efficient brains.
          | 
          | My computer, extremely slow when compared to the likes of
          | DeepMind, has a 750-watt power supply, while human brains
          | consume on average about 12 watts.
        
         | jjoonathan wrote:
         | Less complicated systems successfully reason about more
         | complicated systems all the time. Ditto for self-reasoning.
         | See: bootloaders, update systems, and package managers.
         | 
         | In order to prove that some kind of meta-cognition is
         | inherently beyond our grasp, you don't just have to prove that
         | the system we are attempting to reason about is more complex
         | than ourselves, you also have to prove that the problem isn't
         | meaningfully reducible. Otherwise we can and will eventually
         | figure out the mental tools we need to tackle the problem, and
         | tackle it.
         | 
         | The same applies to brute physical strength. Humans have no
         | problem building machines vastly stronger, tougher, larger,
         | more precise etc than ourselves even though narrow-minded
         | reasoning might lead you to believe that this was impossible
         | ("a tool can only cut something less hard/strong than itself,"
         | "a ruler can only measure less precisely than itself" etc).
        
           | ColanR wrote:
            | I think you're describing an analogue of Turing-
            | completeness. It's not (to me) a question of whether we
            | _can_ reason about something: it's a question of how long
            | it takes, and how much knowledge is involved in the
            | process.
           | 
           | What you're describing sounds like asking a PDP-11 to run
           | GPT-3. Technically possible, in the broadest sense of the
           | word. But a computer that can run GPT-3 successfully will
           | look at that PDP-11 in much the same way that we look at a
           | dog playing with a chew toy.
        
             | jjoonathan wrote:
             | On the contrary, I think your example proves my point quite
             | well. I understand very little about PDP-11s and only
             | slightly more about GPT-3's inner workings, yet I have no
             | trouble reasoning about whether or not a PDP-11 is suitable
             | for running GPT-3 or something even more difficult to
             | formally reason about, say Microsoft Windows. I have a
             | mental model of computer performance and compute
             | requirements that simplifies the question from a difficulty
             | of "Oh, it's Turing complete, halting problem, let's throw
             | our arms in the air like this is an infomercial!" through
             | "You need to understand literally everything about PDP-11s
             | and Windows" all the way to "50 years of exponential growth
             | is a hella large factor to try squeezing down anything by."
             | It's a trivial question hiding in the skin of an
             | intractable question, and it perfectly exemplifies why it's
             | silly to believe that human cognition will forever remain
             | intractable.
             | 
             | In order for a problem to forever remain in "let's throw
             | our arms in the air like it's an infomercial" territory, it
             | must not merely be difficult in its most pedantically
             | defined complete form, it also must stymie the search for
             | useful relaxations and workarounds. Nobody fears running a
             | program on account of being unable to prove that it will
             | halt: they just kill the program if it locks up, or
             | (equivalently) set a timeout. Personally, I'd just avoid
             | throwing my arms in the air like an infomercial altogether.
             | 
             | EDIT: substituted GPT3 -> Windows because arguments about
             | GPT-3 and/or a set of incarnations being Turing Complete
             | would be irrelevant to the main point.
        
         | aeternum wrote:
         | Most dogs seem to acknowledge that humans are better when it
         | comes to playing with toys, otherwise why would they bring them
         | over for humans to throw?
        
           | Digit-Al wrote:
            | Ah, now. See. Your mistake is thinking you can reason
           | better than a dog. The reason a dog brings the toy to the
           | human is because they know that the human is better at
           | throwing and the dog is better at fetching. Teamwork, y'see.
           | 
           | Now go forth and learn, and one day you too may be as smart
           | as a dog ;-)
        
             | hanniabu wrote:
              | Maybe dogs can also do everything humans can, but they
              | decide not to because they see the stressful lives we
              | live and want no part of that. I welcome our dog
              | overlords.
        
             | felipemnoa wrote:
             | >>The reason a dog brings the toy to the human is because
             | they know that the human is better at throwing and the dog
             | is better at fetching. Teamwork, y'see.
             | 
              | Honestly, I always thought that the dog was just being
              | diligent and making sure that its humans got their daily
              | exercise by throwing a toy.
        
         | hinkley wrote:
         | They're Made Out of Meat, as a short film:
         | 
         | https://www.youtube.com/watch?v=7tScAyNaRdQ
        
       | SeanFerree wrote:
       | Cool article!!
        
       | Laakeri wrote:
       | Leslie Valiant has done some interesting work on quantifying the
       | efficiency of the brain from the viewpoint of computer science,
       | see e.g. https://www.youtube.com/watch?v=X9hRRh76QEA and the book
       | Circuits of the Mind.
        
       | stared wrote:
        | I wouldn't say that the human brain is that efficient (per
        | volume). Compare and contrast with the brains of rats or
        | Corvidae: https://www.youtube.com/watch?v=ZerUbHmuY04.
        
         | jcims wrote:
         | It's not even a good example. Humans are about the least
         | physically agile vertebrates on the planet.
         | 
          | Think of a fruit fly. It can walk, fly, forage for food,
          | mate, etc. The entire critter has a mass of 0.2 mg, and their
          | brains have ~135k neurons. Making the horrible assumption of
          | linear power scaling, that's about a microwatt.
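          | 
          | Back-of-envelope in Python (the human-brain reference
          | figures -- ~86 billion neurons, ~1.4 kg, ~12 W -- are
          | commonly cited estimates, not from the article; scaling by
          | body mass lands near one microwatt, scaling by neuron count
          | an order of magnitude higher):
          | 
          |     # Naive linear power scaling from human brain to fruit fly.
          |     human_watts = 12.0     # approximate human brain power draw
          |     human_kg = 1.4         # approximate human brain mass
          |     human_neurons = 86e9   # approximate human neuron count
          | 
          |     fly_kg = 0.2e-6        # ~0.2 mg, the whole fly
          |     fly_neurons = 135e3
          | 
          |     by_mass = human_watts * fly_kg / human_kg               # ~1.7e-06 W
          |     by_neurons = human_watts * fly_neurons / human_neurons  # ~1.9e-05 W
          |     print(f"{by_mass * 1e6:.1f} uW by mass, "
          |           f"{by_neurons * 1e6:.1f} uW by neurons")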
        
           | rantwasp wrote:
           | but can it do math? can it paint? drink wine and muse on its
           | own brain efficiency?
        
             | stared wrote:
             | Fruit flies can drink wine.
             | 
              | Doing maths - well, it is a common trope that we
              | extrapolate the skills of a fraction of humans to the
              | entire population. For an _average_ human, 1/3 + 1/2 can
              | be problematic.
              | 
              | Abstract counting up to 5 or so - well, many birds can do
              | that, including pigeons.
        
               | rantwasp wrote:
                | the wine part was a joke, especially because fruit
                | flies definitely appear to be attracted to fruit, wine,
                | etc.
                | 
                | i believe you are underestimating how capable humans
                | really are. all of us can learn to do math, and i'm
                | talking serious math, not basic math.
        
               | jcims wrote:
               | My point from above is that 'humans playing tennis' isn't
               | a great benchmark for the efficiency of our brains.
        
       | plutonorm wrote:
        | This article contains inaccuracies and says almost nothing
        | novel for your average Hacker News reader.
        
         | MaxBarraclough wrote:
         | > Please don't post shallow dismissals, especially of other
         | people's work. A good critical comment teaches us something.
         | 
         | https://news.ycombinator.com/newsguidelines.html
         | 
         | What are the inaccuracies?
        
           | plutonorm wrote:
            | The same accusation could be levelled at the original
            | article.
        
             | MaxBarraclough wrote:
             | Please make a specific and substantive point. Worthwhile
             | discussions do not follow from vague and shallow
             | dismissals.
        
         | jonnypotty wrote:
          | I'm with you on this. The one example of brain processing
          | speed in the real world backed by any numbers is just
          | inaccurate (the speed of tennis balls and how well players
          | are able to react to them).
          | 
          | There is no analysis of the energy used by the brain to
          | achieve anything, or of how much energy a computer uses for a
          | similar task. So where is the discussion of efficiency?
        
       | est31 wrote:
       | One kind of efficiency which hasn't been talked about is the
       | energy loss of things like state switching and keeping the
       | current state enabled. I think that brains build on much more
       | efficient primitives than the silicon transistors computer chips
       | use and thus can perform far more computations for far less
       | energy than a desktop CPU.
       | 
        | Another difference between CPUs and brains is that brains are
        | much less general purpose. CPUs do run-time interpreting of
        | instructions, while brains process data in a more
        | straightforward way, like GPUs do. Many problems can be
        | implemented on GPUs and will run much faster there. I'd argue
        | that brains excel at such tasks while being worse at tasks
        | that require lots of state to be kept around as well as
        | conditional jumps, like computing a hash function or compiling
        | a program. CPUs excel at those tasks.
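        | 
        | A toy sketch of the contrast in Python. NumPy's vectorized ops
        | stand in for GPU-style data-parallel work; the hash chain
        | stands in for branchy, state-heavy serial work (illustrative
        | only, not a benchmark):
        | 
        |     import hashlib
        |     import numpy as np
        | 
        |     # GPU-like workload: the same arithmetic applied uniformly to a
        |     # million values, no data-dependent branching, trivially parallel.
        |     x = np.random.rand(1_000_000)
        |     y = np.sqrt(x) * 2.0 + 1.0
        | 
        |     # CPU-like workload: every step depends on the previous state,
        |     # so the chain cannot be parallelized across iterations.
        |     state = b"seed"
        |     for _ in range(1000):
        |         state = hashlib.sha256(state).digest()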
        
         | bluedino wrote:
         | It's like comparing a human to a horse. A horse can run very
         | fast or pull a wagon. But a horse can't work on plumbing, or
         | knit.
        
         | nicoburns wrote:
         | You might be right about brains being better at certain kinds
         | of tasks, but I don't think it's right to think of them as
         | having only one processing mode.
         | 
         | Someone else mentioned "Thinking, Fast and Slow", and I find it
         | fascinating how closely the two thinking modes in that book
         | seem to map to CPU (mostly serial) and GPU (parallel)
         | processing. It also claims that people have natural preferences
         | for each mode of thinking, which is super interesting as it
         | suggests that the tasks that brains are best at performing will
         | vary from person to person (I guess this is obvious, but
         | perhaps gets lost when we start comparing to computers).
         | 
          | I'd bet on brains getting a lot of their efficiency from
          | tight integration of CPU-like, GPU-like, ASIC-like, and fully
          | analog components. We'd probably have to apply deep-learning-
          | like approaches to the hardware design itself to get close.
        
           | [deleted]
        
         | tegeek wrote:
          | Comparing the human brain with a CPU is a misconception. In
          | the past, when we didn't have digital computers, we used to
          | compare the brain with other machines, and now with a CPU. A
          | brain, from a primitive neuron on up, is not comparable to
          | any machine at all, including the CPU.
        
           | ResidentSleeper wrote:
           | Whether or not it's comparable depends on the level of
           | distinction you're trying to make. Obviously, CPUs don't
           | think or experience the world (but on the other hand that
           | kind of "feature" seems increasingly likely to be
           | implementable in software, even if our current CPU
           | architectures are rather unsuitable for that goal). However,
           | if we're gonna talk about energy efficiency and computation
           | performance, now that it has become evident that the brain is
           | merely a kind of a computer, we can _definitely_ look for
           | parallels.
        
             | kalcode wrote:
             | > now that it has become evident that the brain is merely a
             | kind of a computer
             | 
              | I am ignorant in this area, but I keep reading that
              | brains are nothing like computers the more we learn. Your
              | statement seems to suggest otherwise, and I'd love to
              | read about it. Can you point me to something where I can
              | start exploring how it has become evident that the brain
              | is merely a kind of computer? Thanks!
        
               | whatshisface wrote:
               | The brain is thought to be merely a computer in the
               | original sense of a long strip of paper along with a
               | scribe and a rulebook. The logic is, a Turing machine can
               | simulate quantum electrodynamics to an arbitrary degree
               | of accuracy. Then, two beliefs about physics and the
               | structure of the brain are included:
               | 
                | 1. There is nothing going on in the brain that would
                | require simulation to infinite accuracy. Not even a
                | chaotic system would have this property, because
                | chaotic systems take a finite time to "blow up" an
                | initial uncertainty, and the smaller the initial
                | uncertainty, the longer they take to blow up (see the
                | sketch below). For this proposition to be violated
                | there would have to be an undiscovered finite-time
                | nondeterministic blowup, which is unlikely, but I've
                | heard rumblings that we haven't proven that it can't
                | happen in Navier-Stokes. So maybe it can happen in the
                | brain.
               | 
               | 2. There is nothing going on in the brain that depends on
               | nuclear physics or anything more "powerful" than quantum
               | electrodynamics.
               | 
               | I have not seen any evidence that 1 or 2 aren't true for
               | the brain, so that puts something behind saying it's
               | "merely a computer."
        
               | checkyoursudo wrote:
               | If you are looking for a book for an introduction, I
               | would suggest Mindware by Andy Clark is pretty
               | reasonable. Pub 2014; ISBN: 9780199828159
        
           | est31 wrote:
            | That's what the article does, though. And there are
            | experiments trying to simulate parts of brains, but it's
            | extremely hard, and we are very far from simulating even a
            | mouse brain.
        
           | The_rationalist wrote:
            | _Comparing the human brain with a CPU is a misconception._
            | No, it is not. Architecturally they are very different, and
            | CPUs are arguably more programmable/general and less
            | efficient.
            | 
            | What matters is whether CPUs are theoretically able to
            | achieve all the things that a brain can do (and more). And
            | indeed CPUs, as Turing-complete, programmable machines, are
            | a strict superset of what brains can do. The gap between
            | which tasks, and at what accuracy, a brain can achieve
            | versus a CPU is shrinking each year, as you can see on the
            | paperswithcode.com leaderboards. The difficulty is in
            | software; hardware, through clusterization, arguably has
            | orders of magnitude more compute than a brain does.
           | 
           | There are four big missing pieces to match human brain
           | performance:
           | 
            | 1) Matching its pattern-recognition abilities. I believe
            | that the statistical-learning techniques of current SOTA
            | neural networks actually outperform humans at learning from
            | continuous data, but humans outperform current software by
            | far at zero/few-shot learning on sparse/discrete data
            | (where gradient descent is not applicable). I believe
            | humans have this performance edge because of 2), 3) and 4):
           | 
            | 2) Humans can encode and decode meaning with great accuracy
            | in high-level, descriptively complete declarative
            | languages: natural languages. These are in many ways far
            | superior to current GQL/Datalog/SQL database languages at
            | encoding and retrieving meaning (that is, an isomorphic
            | description of a denoted thing). The field of semantic
            | parsing (plus question answering from the parsed knowledge)
            | is the key to general language understanding and crucially
            | lacks funding. Once machines are able to understand
            | language and retrieve all the knowledge of, say, Wikipedia,
            | they will be able to transcend human performance on many
            | intelligence/erudition tasks.
           | 
            | 3) Humans seem to be able to do meaningful runtime code
            | generation. That is, you can develop new solutions to new
            | problems on demand, as in
            | https://www.kaggle.com/c/abstraction-and-reasoning-challenge
            | The field of specification and implementation generation is
            | likewise underfunded.
           | 
            | 4) The observation that 3) is probably a necessary key to
            | unlocking 2), and that both 2) and 3) are needed to achieve
            | a communication/feedback loop between high-level semantic
            | reasoning and statistical operations.
           | 
            | As we can see, humanity overfocuses funding on 1), despite
            | it being the most solved of all the foundations necessary
            | to achieve AGI and hence, as a side effect, to empirically
            | prove that CPUs are a superset of brains.
        
             | cmehdy wrote:
             | > CPUs as turing complete, programmable machine are a
             | strict superset of what brains can do
             | 
             | In what way can this be proven?
             | 
             | It's very tempting in an era of tech-centered growth to
             | think of computers as the solution to everything, but we
             | are barely even beginning to understand the brain. We know
             | computers fairly well and can talk about them, but how can
             | we make such a claim when we don't know the other thing
             | we're talking about?
             | 
             | In fact, the brain created the computer, didn't it?
             | Therefore, from that standpoint it is arguable that the
             | brain is a superset of the computer. It's not something I
              | really believe in (because my opinion is that you can't
              | really equate things of entirely different units, one of
              | which is unknown); I'm just playing devil's advocate.
        
               | marcosdumay wrote:
               | > In what way can this be proven?
               | 
               | Proven? Nothing in science is ever proven.
               | 
                | But in half a millennium we have failed to find
                | anything that can't be simulated by math, and Turing
                | completeness means a computer can simulate anything
                | that can be simulated by math. We can also simulate all
                | the smallest components of a brain.
               | 
               | At this point the claim that math can not simulate it is
               | highly extraordinary.
        
               | happythomist wrote:
               | We have not been able to simulate any aspect of
               | subjective, conscious experience using a mathematical
               | model, and personally I think we have no good reason to
               | believe we ever will. The qualitative, by definition,
               | cannot be quantified.
        
               | simiones wrote:
               | > Turing completeness means a computer can simulate
               | anything that can be simulated by math
               | 
               | Technically, it is not proven that Turing machines can
               | compute all computable functions, so there is some purely
               | theoretical possibility that the brain could be able to
               | compute functions that a Turing machine can't.
               | 
               | Personally I find that extremely unlikely, and agree that
               | it would be extremely surprising. But it wouldn't
               | invalidate anything we have proven so far.
        
               | marvy wrote:
               | It would imply that our brains are using currently-
               | unknown physics, since all current theories are
               | computable.
        
               | jacobr1 wrote:
               | The argument isn't "something like, or a little better,
               | than current CPUs can perform everything a brain can,"
               | but something more like "a turing machine can perform
               | everything a brain can or more." This is more an
               | ontological exercise, not an empirical one. If you reduce
               | everything to a "black box" model with inputs and
               | outputs, then sure, the mathematical abstractions of
               | theoretical brains and theoretical CPUs have a
                | congruence. Most objections to this seem to revolve
                | around qualia being something not modelable in
                | machines, but I'm skeptical of that claim.
               | 
               | Can an "arbitrarily advanced computer do everything a
               | brain can do?" Empirically, right now, current machines
               | can't but we are talking about "future machines, via
               | line-of-sight extrapolation". Not fundamental leaps in
               | tech, but incremental ones. It seems plausible, but it
               | seems we expand the depths of the complexity of the
               | requirements nearly as fast as we advance current
                | capabilities. I don't know, but I'd put my money on the
                | technology catching up.
        
               | cmehdy wrote:
                | Being skeptical of the claim that certain qualia are
                | not modelable in a machine is just as valid as being
                | skeptical of the exact opposite. This is exactly why I
               | there was anything beyond what the original poster said.
               | Without it, a post based on the exact opposite assumption
               | could have been written and considered just as valid.
        
               | jacobr1 wrote:
               | Fair criticism, I didn't tackle that head-on. The
               | following doesn't actually make a cogent argument either,
               | but I'll elaborate that my intuition is that qualia
               | (conceived as something nearly tangible) are more like
               | "the soul" or "spirits" and that, as such, thinking they
               | exist in the brain or a turing-machine is nonsense. To
               | the extent they are more like some combination of memory
               | and emotional-stimuli, then they just represent a
               | particularly interesting set of internal states, but are
               | still something that can be mathematically modeled.
        
             | mannykannot wrote:
             | I am not convinced of the usefulness of this comparison.
             | 
             | The first of your big missing pieces starts from the best
             | that we have been able to achieve with computers so far,
             | and while its completion might be a big step in computing,
             | it would not necessarily be a big step in understanding the
             | human brain - after all, quite primitive animals have
             | impressive abilities in this regard. Using the best
             | computing has done as the yardstick for quantifying the
             | human brain's ability is the wrong way round.
             | 
             | The remaining missing pieces are vague, with no clear
             | indication that they fit into the brain-as-CPU model. For
             | example, while it is true that "[human languages] are in
             | many ways far superior to current GQL/datalog/SQL DB
             | languages at encoding and retrieving meaning (that is an
             | isomorphic description of a denoted thing)", this vastly
             | understates the capabilities of language. Once again, you
             | are using current technology as the yardstick, with no
             | basis for assuming that it is of the right scale.
             | 
             | Overall, you seem to be assuming that the rest of the
             | puzzle is almost within reach. That is certainly a logical
             | possibility, but not one with a great deal of objective
             | evidence in support. FWIW, my opinion on the matter is that
             | we probably don't even know, in any well-defined way, all
             | the questions to be answered.
             | 
             | Even if we grant the premise that a suitably-programmed
             | computer (not just a CPU) could have capabilities that are
             | a superset of those of a human brain, that would not
             | necessarily justify saying one is very like the other -
             | that would be like saying a dynamo is a solar cell because
             | they both produce electric current.
        
             | Someone wrote:
             | _"programmable machine are a strict superset of what brains
             | can do"_
             | 
             | As others already replied, that's a statement that isn't
             | universally accepted to be true.
             | 
             | As an example, there's consciousness. People disagree about
             | whether it exists, whether it's (fully) 'in' the brain, and
             | on whether computers could in theory be conscious.
             | 
             | There are people who answer those questions with yes, yes,
             | and no, and, since we don't even have a good idea about
             | what consciousness is, one cannot reliably argue that they
              | are wrong (also not that they are right, of course).
        
             | criddell wrote:
             | Is there anything analogous to software in biology?
        
               | sooheon wrote:
               | Biology is the ultimate legacy software running on one of
               | the oldest platforms ever developed, the organic
               | compounds. It is literally a giant genetic algorithm to
               | write instructions (DNA) for manufacturing molecular
               | machines (proteins) that interact with each other in an
               | extremely complex graph of relations (protein pathways,
               | i.e. control flow).
        
               | dependenttypes wrote:
               | I am sure that I saw this exact message on HN before. Did
               | you copy it from someone else or did you repost your own
               | post?
        
               | SomeoneFromCA wrote:
                | This is a very simplistic view, based on the assumption
                | that the world is discrete. The whole idea of software
                | relies on the concept of the digital computer, a
                | discrete machine. The world might indeed be analog, and
                | real numbers might actually exist.
        
               | sooheon wrote:
               | I think the assumption that software may only be digital
               | is the limited one.
        
               | SomeoneFromCA wrote:
               | Otherwise it becomes a meaningless, all-encompassing
               | term.
        
               | sharpneli wrote:
                | If the world did run on real numbers that we could
                | harness for computation, I would be more than happy,
                | because using those we would be able to perform
                | hypercomputation. See
                | https://en.m.wikipedia.org/wiki/Real_computation
                | 
                | However, this is forbidden by the Bekenstein bound, so
                | unless modern physics is horribly broken, it's ruled
                | out, at least in any sense visible to us, even in
                | principle.
        
               | SomeoneFromCA wrote:
                | Not a quantum physicist, but IMO the Bekenstein bound
                | is not applicable here, because quantum laws are non-
                | deterministic: you can describe the structure of a
                | system, but you cannot describe how it will evolve.
                | Quantum randomness might be at the very essence of how
                | the brain and mind work.
        
               | catalogia wrote:
                | Quantum randomness being necessary hardly seems like it
                | would have profound practical implications, since
                | augmenting a digital computer with a Geiger counter
                | would be trivial.
        
               | criddell wrote:
               | That feels more to me like hardware and software in the
               | sense of a Jacquard loom. I suppose it fits though.
               | 
               | I was thinking more about what's going on in the brain.
               | We have all the regions mapped to specific functions with
               | higher and lower level parts. The low level parts seem to
               | be like hard-wired stimulus-response mechanisms. Are the
               | higher level systems the same at a meta level or is there
               | a type of program running on the hardware of the brain?
        
             | otabdeveloper4 wrote:
             | The human brain isn't Turing-complete.
             | 
             | Turing completeness implies infinite recursion, which the
             | brain obviously can't do.
        
               | rabryan wrote:
               | Why is that obvious? My brain's been infinitely recursing
               | for years as far as I know
        
               | lagadu wrote:
               | Technically Turing completeness requires infinite memory
               | for that (or an infinite tape if we're talking about the
               | original turing machine concept), which no Turing-
               | complete machine has. In other words, the brain is as
               | Turing-complete as any machine that we also consider to
               | be so. We'll always be bounded by limited memory and
               | limited time.
        
               | SomeoneFromCA wrote:
                | We do not know whether the human brain is indeed
                | Turing-complete, or even whether it is a Turing machine
                | at all. The human mind certainly is, but whether the
                | brain is, we do not know.
        
               | jameshart wrote:
               | This is a silly objection, trivially because obviously no
               | finite physical system - brain or computer or whatever -
               | can be constructed with the storage equivalent of an
               | infinitely long tape. But if you allow for the fact that
               | humans can do things like _write things down_ and _share
               | information with other humans_ and _build computers to
               | store information_ , our information processing capacity
               | is not limited to the set of states we can hold inside
               | the atoms inside our head.
               | 
               | But also, the claim lacks evidence: We've never seen a
               | human being yet whose program didn't eventually halt.
               | 
               | That doesn't mean the hardware isn't capable of running a
               | program that never halts, just that we haven't found such
               | a program yet.
               | 
               | Indeed if you consider human mindware as a whole, given
               | that when humans reproduce they create new copies of the
               | mind running in new bits of hardware... maybe Human minds
               | are infinitely recursive after all?
        
               | mannykannot wrote:
               | While I agree that comparing a human brain or mind to a
               | Turing machine is not helpful, the objection you make
               | here is less significant than it first appears.
               | 
               | There is a subtle difference between unbounded recursion,
               | which a Turing machine is taken to be capable of, and the
               | actual ability to achieve infinite recursion. In no
               | application of a Turing machine, either as an actual
               | physical device or as a hypothetical one in a logical
               | argument, is it ever required to perform infinite
               | recursion, which would just be one way of not halting.
               | 
               | For all practical _and_ theoretical purposes, what
               | matters is that the machine being considered does not
               | exhaust its ability to recurse while performing the
               | computations being considered. Consequently, the standard
               | practice, of saying that computers and certain other
               | devices are Turing-equivalent, with the usually-implicit
               | caveat of being so up to the limit of their recursive
               | ability, is both reasonable and useful.
        
             | the_af wrote:
             | Before CPUs existed, we would compare brains to steam
             | engines. There was a very interesting article posted here
             | on HN a while ago, explaining why humans always pattern
             | match their understanding of the "mind" (or "soul") to
             | whatever technology is fashionable in their time: steam
             | engines, computers, etc. It also explained the pitfalls of
             | doing so.
             | 
             | I think there is at this time no indication human brains
             | are in any way similar to CPUs. It might be interesting to
             | consider the question, of course.
        
               | Stupulous wrote:
               | But steam engines and hydraulics and gear mechanisms are
               | all Turing complete. There is nothing wrong with those
               | models. You could build a brain out of any of them,
               | unless the brain computes something that is not
               | computable.
               | 
               | If the brain does something that is not computable,
               | that's a direct challenge to some of our most established
               | science. It is possible, but I think very unlikely.
        
               | TomMarius wrote:
               | I thought it was about similarity of simulated neurons,
               | not the CPU itself.
        
               | sudosysgen wrote:
               | To be fair, CPUs are Turing machines. That makes them
               | much more comparable to anything that mainly does
               | information processing than to anything else.
        
               | the_af wrote:
               | I think the danger is that it's always "obvious" that the
               | current fashionable tech works in analogous ways to the
               | mind/brain. We can spend all day finding ways in which
               | they are similar; for example how the brain does
               | information processing and the CPU does too.
               | 
               | The point is, I think, people from the steam engine era
               | had similar reasons why the mind/soul was exactly like a
               | steam engine. I won't try to reproduce them here, but I'm
               | sure there were convincing arguments _at the time_. Who
               | has the awareness to claim, before the current
               | fashionable technology becomes unfashionable, that maybe
               | no, the brain is not a close match for an information
               | processing machine? ;)
        
             | SomeoneFromCA wrote:
             | > What does matter is whether CPUs are theoretically able
             | to achieve all the things that a brain can do (and even
             | more) And indeed CPUs as turing complete, programmable
             | machine are a strict superset of what brains can do.
             | 
             | It is not proven in any way. Turing's postulate is just a
             | postulate, it is not even a theorem, just a conjecture. And
             | AFAIK it cannot be proven, actually.
        
             | coreyp_1 wrote:
             | "And indeed CPUs as turing complete, programmable machine
             | are a strict superset of what brains can do."
             | 
             | This is a fundamental assertion that I do not believe you
             | can make.
             | 
              | The brain cannot simulate a Turing machine. It does not
              | have infinite memory, which is a requirement for a
              | Turing machine. It can, however, simulate a linear
              | bounded automaton.
              | 
              | It is also not self-evident that a Turing machine can
              | simulate a brain. The primary difficulty that I do not
              | yet see a way around is the fact that a Turing machine,
              | which has as its control unit a finite state machine,
              | is bound by the finiteness of those states (finiteness
              | of representation, not of number). The brain has no
              | such constraint. It is analog, and therefore infinite
              | in state representation.
             | 
              | In my opinion, this is more akin to the P versus NP
              | problem: we know what needs to be shown in order to say
              | that P equals NP, but no one has proved or disproved it
              | yet. That's how I feel about the statement about Turing
              | machines and the brain. I do not believe we can be
              | dogmatic on that aspect yet, either way. We may have
              | opinions, just as we may have opinions about P vs NP,
              | but we must also be careful about distinguishing what is
              | provable from what is opinion, and that is all I'm
              | trying to do.
             | 
             | Of course, I am willing and very interested to gain more
             | insight in this area, so discussion is welcome!
        
               | deegles wrote:
               | The big question is whether a CPU can emulate a brain
               | with the same or better efficiency.
        
               | shawnz wrote:
               | > The brain cannot simulate a turing machine. It does not
               | have infinite memory, which is a requirement for a turing
               | machine.
               | 
               | In practice we call modern computers turing-complete even
               | though they don't have infinite memory. The brain can
               | simulate such a machine.
               | 
               | > The brain has no such constraint. It is analog, and
               | therefore infinite in State representation.
               | 
               | If this mattered, then it would mean analog computers are
               | more powerful than digital computers and therefore the
               | Church-Turing thesis is wrong
        
               | deepnotderp wrote:
               | Isn't the recent Google quantum "supremacy" experiment
               | evidence against the extended Church-Turing thesis?
        
               | anchpop wrote:
               | No, quantum computers as we understand them can be
               | simulated by a turing machine
        
               | coreyp_1 wrote:
               | Regarding the Church-Turing thesis, it is exactly that,
               | just a thesis. Again, akin to P vs NP. It seems to hold
               | for most cases, but is not proven.
               | 
                | The reason that it's difficult to apply in regard to
                | the brain is that we don't exactly know how the brain
                | is computing... or if it "computes" at all! To my
                | knowledge, we don't have a model of computation for
                | consciousness, emotion, free will, etc.
                | 
                | Perhaps these are better classified as emergent
                | behavior rather than computation, but if that is the
                | case, I still don't know of a model explaining what
                | computations or rules give rise to the emergent
                | behavior.
               | 
               | Perhaps the problem is in our definition of computation
               | and what it means to compute.
               | 
                | We do know that the cardinality of the set of possible
                | computational problems is larger than the cardinality
                | of the set of all possible Turing machines. This is
                | provable by a simple diagonalization proof (sketched
                | below).
                | 
                | The question, then, is whether or not the computations
                | of the brain fall within the set of Turing-recognizable
                | languages (computational problems). To my knowledge,
                | this has not been shown.
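                | 
                | For reference, the diagonalization sketch (the standard
                | textbook argument, in LaTeX):
                | 
                |     Turing machines have finite descriptions, so they can be
                |     enumerated $M_1, M_2, \dots$; likewise all input strings
                |     $w_1, w_2, \dots$. Define
                |     $$ D = \{\, w_i : w_i \notin L(M_i) \,\}. $$
                |     For every $i$, $D$ and $L(M_i)$ disagree on $w_i$, so no
                |     machine recognizes $D$. More coarsely: there are only
                |     $\aleph_0$ machines but $2^{\aleph_0}$ languages
                |     $L \subseteq \{0,1\}^*$.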
        
               | supergarfield wrote:
               | As far as I understand, the prevailing opinion is that
               | the brain is a physical object and that its operation
               | does not involve currently-unknown laws of physics
               | (because we have a good understanding of what happens at
               | the scale of an entire atom or above).
               | 
               | A Turing machine can run a simulation based on such
               | physical laws to any desired level of precision (which is
               | enough, because as mentioned in TFA, processes in the
               | brain aren't individually very precise). This is true
               | because of the nature of these laws, which are mostly
               | just asking you to integrate differential equations. If
               | you accept this, then it should follow that a Turing
               | machine can in fact simulate a brain: just run a physics
               | sim on a brain's initial state.
               | 
               | (I do realize that this is far outside the realm of
               | what's doable today, but it seems to provide a solid
               | justification for why it's conceptually possible).
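                | 
                | As a cartoon of "any desired level of precision" (toy
                | equation and explicit Euler integration; real physics
                | sims are far more sophisticated):
                | 
                |     import math
                | 
                |     def euler(x0, dt, t_end):
                |         """Integrate dx/dt = -x with explicit Euler steps."""
                |         x = x0
                |         for _ in range(round(t_end / dt)):
                |             x += dt * (-x)
                |         return x
                | 
                |     exact = math.exp(-1.0)  # true solution at t = 1
                |     for dt in (0.1, 0.01, 0.001):
                |         # error shrinks roughly in proportion to the step size
                |         print(dt, abs(euler(1.0, dt, 1.0) - exact))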
        
               | coreyp_1 wrote:
               | "any desired level of precision" is actually the issue.
               | The moment you choose a level of precision, you cease
               | being accurate (at that level). If you make the argument
               | that a TM has infinite memory, and can therefore
               | represent an infinite precision, then I would counter
               | that our current defintion of a TM requires a finite tape
               | alphabet (and finite number of states), which is part of
               | the TM's known computational limitations. And, of course,
               | the moment that you use any finite set of symbols to
               | represent an infinitely precise value, you fall into the
               | problem that the set of real numbers has a larger
               | cardinality than the set of possible turing machines
               | (again, simple proof via diagonalization).
               | 
               | It is possible that the brain's imprecision (I would
               | argue that "inconsistency" might be a better word) is a
                | requirement of its computational ability. Again, we
               | haven't defined how the brain computes, nor do we have a
               | model for explaining its computation, encoding or
               | representation of knowledge, or emergent behavior. We
               | have observed phenomena related to some of these things,
               | but we are far from understanding it. It may be that the
               | computational processes are dependent on the surrounding
               | environment. We know that the biological processes are
               | influenceable by the physical world, but we do not know
               | much about how these external forces affect, limit, or
               | are required for, the process of brain computation.
               | 
               | The quantum world may play a part in consciousness (or
                | not, we don't know). Non-determinism may play a part. It
               | is possible that, in order to simulate a brain, one would
               | have to simulate the entire universe around it in order
               | to predict the behavior... meaning that it may well
               | require a universe to perform the simulation.
               | 
               | Which brings us to a related theory of whether or not we
               | are living in a simulation, but I digress... :)
        
               | [deleted]
        
               | mamon wrote:
                | > It is possible that the brain's imprecision (I would
                | argue that "inconsistency" might be a better word) is a
                | requirement of its computational ability
                | 
                | Is it possible that the brain is in fact a quantum
                | computer? I can imagine that under all those neural
                | networks there is a small part where, trapped in some
                | complex protein structure, some qubits exist and are
                | crucial to the most advanced brain functions, such as
                | consciousness.
        
               | coreyp_1 wrote:
               | "Is it possible that brain is in fact a quantum
               | computer?"
               | 
               | It's an interesting thing to ponder.
               | 
               | Quantum computing is still just another computational
               | model, and it's main Advantage is that it involves non
               | determinism. But non determinism, in and of itself, can
               | be modeled by deterministic computer.
               | 
               | I think the biggest problem is that we don't understand
               | what computation is taking place in the brain, or even if
               | it is "computation" according to our current definition
               | of the word. I think that this issue is the biggest
               | problem in reconciling whether or not it is possible to
               | accurately model the human brain.
        
               | simiones wrote:
               | > that its operation does not involve currently-unknown
               | laws of physics (because we have a good understanding of
               | what happens at the scale of an entire atom or above)
               | 
               | Well, we know certain approximations of those laws.
               | Purely theoretically, it is possible that the exact laws
               | at some level of detail that we have not yet been able to
               | observe involve functions that are not computable by a
               | Turing machine, and then it is theoretically possible
               | that the brain itself is computing functions which are
               | not computable by a Turing machine (this would of course
               | assume that the Church-Turing thesis is actually wrong).
               | 
               | As long as the Church-Turing thesis is not proven, we
               | can't say with absolute certainty that the physical world
               | can be simulated to any level of detail by a Turing
               | machine.
               | 
               | Furthermore, even if the Church-Turing thesis was proven,
               | is it possible that the physical world involves
               | transformations that are not even computable at all (even
               | if they can be approximated by computable functions)?
               | 
               | Just to be clear, I do not believe these things. But it
               | is fun to think about the limits of our knowledge.
        
               | jbay808 wrote:
               | > The brain has no such constraint. It is analog, and
               | therefore infinite in State representation.
               | 
               | This is a common misconception.
               | 
               | I'm sure you are aware that analog signals can be
               | approximated by digital values -- a 10 bit ADC will read
               | a channel to one part in 1024, etc.
               | 
               | You might say that even a 64 bit representation is a poor
               | approximation of a real life signal, which is a real
               | number with infinite precision... But it isn't.
               | 
               | The brain operates at about 300 Kelvin, and so there's a
               | noise floor to all analog signals of about that times
                | Boltzmann's constant, or about 10^-20 J. If a neuron's
                | impedance is 1 ohm, then at a bandwidth of just 10 kHz
                | the thermal noise is about 13 nV. For a membrane
                | potential of 100 mV, that's a maximum possible noise-
                | to-signal ratio of about one part in 10 million, which
                | is roughly 23 bits (arithmetic below).
               | 
               | Now the brain could depend on the signal below the noise
               | floor, but if so those would be extremely fragile
               | operations, and you could get the same thing on a
               | computer by padding your numbers with random data.
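                | 
                | The arithmetic, via the standard Johnson-Nyquist
                | formula v = sqrt(4 k T R df), with the (hypothetical)
                | figures above:
                | 
                |     import math
                | 
                |     k_B = 1.380649e-23           # Boltzmann's constant, J/K
                |     T, R, bw = 300.0, 1.0, 10e3  # kelvin, ohms, hertz
                | 
                |     v_noise = math.sqrt(4 * k_B * T * R * bw)  # ~1.3e-8 V (~13 nV)
                |     signal = 100e-3                    # 100 mV membrane potential
                |     bits = math.log2(signal / v_noise) # ~23 bits of dynamic range
                |     print(f"noise {v_noise * 1e9:.0f} nV, ~{bits:.0f} bits")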
        
               | p1esk wrote:
               | Given how robust a brain is against noise, I'd be
               | surprised if any brain signals are more precise than an
               | equivalent of 3-4 bits.
        
               | jbay808 wrote:
               | I agree, and I think in practice the brain's noise floor
               | is also much higher than the theoretical thermal-noise
               | minimum. But I guess the main point is that once we
               | acknowledge that even 32 bits is more than enough, the
               | difference between an analog and digital machine loses a
               | lot of its philosophical weight.
        
               | nicoburns wrote:
               | I mostly agree with your post but:
               | 
                | > The brain has no such constraint. It is analog, and
                | > therefore infinite in state
               | 
               | Not necessarily infinite. A lot of people believe that
               | nothing in the world is truly infinite (just very
               | large/small). Infinite quantities in mathematics are just
               | approximations that simplify calculations.
        
             | JBiserkov wrote:
              | I agree. For some reason 2) and 3) reminded me of the
              | book "The Mating Mind":
              | https://en.wikipedia.org/wiki/Geoffrey_Miller_(psychologist)...
        
             | nil-sec wrote:
              | Turing completeness isn't necessarily an interesting
              | thing to have in common. Many (very simple) models of
              | computation are Turing-complete but have vastly different
              | properties. Take, for example, cellular automata, Turing
              | machines, Wang tiles, (cyclic) tag systems, FRACTRAN,
              | register machines, and string rewriting systems. All of
              | these are Turing-complete, yet they are miles apart in
              | how they carry out computation. In order to understand
              | and do what the brain is doing, we have to figure out the
              | brain's model of computation. It will also be Turing-
              | complete, but it will look very different from a Turing
              | machine.
        
           | sa1 wrote:
           | Computers are mathematical concepts, Turing machines being
           | one such concept. Whether computers are implemented using
           | silicon, or oil, or using neurons, it doesn't really matter
           | as we have a mathematical framework for describing abstract
           | machines, and we can determine what is a machine, and what is
           | not.
           | 
            | We did not have this mathematical framework before the age
            | of Turing, Church, Russell, et al.
           | 
           | This doesn't mean that brains are very similar to CPUs, they
           | are not, just like they were not similar to mechanical
           | machines before.
           | 
           | Yet we do now have a way of studying the similarities they
           | have.
        
           | kyuudou wrote:
           | "...the question of whether Machines Can Think, a question of
           | which we now know that it is about as relevant as the
           | question of whether Submarines Can Swim."
           | 
           | Edsger Dijkstra, EWD898, 1984
        
           | derefr wrote:
           | The difference is that CPUs, unlike those other machines, can
           | be used to model/simulate things that _are_ similar to
           | brains. There is impedance in the translation, of course, but
           | that impedance can be measured as a sort of "distance"
           | between the architectures; just like one might measure the
           | "distance" between two Instruction Set Architectures.
        
       | RivieraKid wrote:
       | I don't know.
        
       | gwern wrote:
       | I was wondering why this seemed so outdated and ignorant for
       | something published in 2018 (only 10b transistors? 'computers are
       | serial', really?), but I see that it's from a 2015 textbook,
       | using citations for computing hardware published in 2008, and
       | presumably referencing hardware from 2007 or earlier...
        
       | kristopolous wrote:
       | They aren't the same thing. They are different classes of
       | objects, different tasks. This comparison is kind of silly.
       | 
       | I'd hate my computer to have the memory accuracy or the
       | computational accuracy of my brain. I'd hate to have the
       | creativity and inspiration of a computer.
       | 
       | Delete being such a nontrivial operation is probably a good thing
       | for humans. Copy being imperfect probably has something to do
       | with the phenomenon we call imagination. We use computers because
       | they are complementary, not substitutive.
       | 
       | They're just so fundamentally different.
        
         | technovader wrote:
         | Exactly. We shouldn't be so arrogant in thinking the modern
         | computer is the same thing as a human brain.
         | 
         | This comparison is pointless. The human brain is beyond
         | comprehension. Computers are just logic calculators.
        
           | ryukafalz wrote:
           | >The human brain is beyond comprehension.
           | 
           | Many things in the world were beyond our comprehension, until
           | they weren't. I see no reason why the human brain's inner
           | workings should evade our understanding indefinitely.
        
         | EmilioMartinez wrote:
         | I don't see how any of that makes the comparison "silly". It's
         | not like we have so many instances of computer paradigms to go
         | around comparing.
        
         | derefr wrote:
         | > memory accuracy
         | 
         | There are individuals with very good memories for all sorts of
         | things, who seem to manage to reconsolidate their memories
         | near-losslessly (at least within the confines of the mental
         | schema they organize said memories into.) Surgeons with
         | anatomy, lawyers and judges with case-law, etc.
         | 
         | At this point I'm convinced that the lossy method humans
         | intuitively reconsolidate memories with, isn't so much a
         | feature of our mental architecture, as it is a part of the
         | "operating system" we build up _on top of_ our mental
         | architecture--i.e. it's a _skill_ , something we can learn (or
         | accidentally invent) a better approach to.
         | 
         | > computational accuracy
         | 
          | We compute _ratios_ with extremely high accuracy/precision.
         | Just look at a professional billiards player.
         | 
         | We don't have a good mind for integer math; but you can
         | translate most integer math problems into ratio problems, and
         | then they become intuitively solvable to humans. (This is
         | basically what geometry is.)
        
           | partyboat1586 wrote:
           | A surgeon might remember anatomy with great accuracy but he
           | is unlikely to remember the details of some case law nearly
           | as well. Our memories are associative, that is how they
           | differ from computers. It's easy for a surgeon to remember
           | anatomy because he has been immersed in it for a long time
            | and it all interconnects, i.e. there are a lot of associations
           | to call up the memory. Computers on the other hand could
           | remember 20 facts about anatomy and 20 facts about case law
           | no problem without needing any framework to attach them to.
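            | 
            | A trivial sketch of that contrast - a hash map holds
            | isolated facts from unrelated domains equally well, no
            | associative scaffolding needed (the facts are made up):
            | 
            |   facts = {}
            |   # Twenty unrelated facts per domain, no shared context:
            |   for i in range(20):
            |       facts[("anatomy", i)] = f"anatomy fact #{i}"
            |       facts[("case law", i)] = f"case-law fact #{i}"
            | 
            |   # Retrieval cost doesn't depend on how "connected" a
            |   # fact is to anything else the machine knows.
            |   print(facts[("case law", 7)])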
        
           | Stratoscope wrote:
           | A similar example I heard was that a chess grandmaster may be
           | able to take a look at a chessboard with a game in play and
           | memorize the entire board immediately. But _only_ if the
           | board  "makes sense" - all the pieces are in positions that
           | could actually be reached in a real game.
           | 
           | If you take those same pieces and rearrange them willy-nilly,
           | then this ability to instantly memorize its layout goes away.
        
             | derefr wrote:
             | I recall that Jeff Hawkins, when talking about his
             | Hierarchical Temporal Memory ML model (which is supposed to
             | be brain-like), said something like "Nature has spatial and
             | temporal locality. Brains evolved to best store information
             | that _also_ has spatial and temporal locality--in other
             | words, to recapitulate and model the natural world. To the
             | degree that some pattern is akin to one that arises in
             | nature, the brain can store and compute upon it easily. To
             | the degree that a pattern is  'arbitrary'--something that
              | cannot arise in nature--the brain finds it hard to hold
              | onto."
             | 
             | The moment-in-time arrangement of chess pieces on a board
             | does not exactly have spatial or temporal locality; but if
             | one has learned a set of mental transformation rules that
             | let that board be translated into a _narrative_ for how it
             | got to be that way--then that _narrative_ is itself
             | something quite natural for the brain 's architecture to
             | represent.
        
           | satvikpendem wrote:
           | You can even construct memory palaces which are very easy to
           | learn. I still remember them from 10 years ago.
        
           | jbay808 wrote:
           | I remember when I first took a data structures course,
           | learning things like trees and linked lists, I had a total
           | paradigm shift with respect to how I understood my own mind.
           | 
           | I had never really thought about the different ways that data
           | could be organized, and how they perform differently. I
           | figured that since this was so basic to computer science, my
           | own mind couldn't be doing something completely different. It
           | might not be the same in detail as any computer data
           | structure, but it couldn't be completely unrelated either.
           | 
           | I realized that data structures might make information _feel_
           | different. For example, I can only tell you what the 16th
           | letter of the alphabet is by counting from  "A". I can't sing
            | the alphabet song backwards. These are at least
            | qualitatively the characteristics of a singly linked list.
            | The same goes for my
           | phone number and my credit card number. I wouldn't be able to
           | dictate them backwards, except by mentally traversing them
           | forwards and then holding the whole number in my conscious
           | memory as I reverse the digits, or if that's too tiring,
           | traversing it forwards multiple times and stopping at
           | different points.
           | 
           | I have many detailed memories of past events, conversations,
           | and trivial facts, but it's hard for me to remember them on
           | command. I need some kind of prompt to point me to the right
           | index where I can retrieve it.
           | 
           | I agree a lot with the interpretation that we have a messy OS
           | that bungles memory management and does lossy compression and
           | a poor job of disk defrag, running on some very impressive
           | hardware.
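            | 
            | A quick sketch of that "alphabet song" behavior as an
            | actual singly linked list - the nth element needs a walk
            | from the head, and reversing needs a forward pass plus a
            | buffer (my "holding the whole number in mind" step):
            | 
            |   import string
            | 
            |   class Node:
            |       def __init__(self, value, next=None):
            |           self.value, self.next = value, next
            | 
            |   def nth(head, n):
            |       # "Counting from A": no random access, only a walk.
            |       for _ in range(n):
            |           head = head.next
            |       return head.value
            | 
            |   def backwards(head):
            |       # Traverse forwards, buffer everything, then flip.
            |       buf = []
            |       while head:
            |           buf.append(head.value)
            |           head = head.next
            |       return buf[::-1]
            | 
            |   head = None
            |   for ch in reversed(string.ascii_uppercase):
            |       head = Node(ch, head)
            |   print(nth(head, 15))             # 16th letter: 'P'
            |   print("".join(backwards(head)))  # 'ZYX...', full pass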
        
             | ALittleLight wrote:
             | Recently I was thinking about how my brain answers the
             | question "What's your favorite movie?" and how I can easily
             | answer that question, but it's harder to answer a question
             | like "What's your favorite movie where a gun is fired?"
             | 
             | It seems to me that whenever I watch a movie, if I really
             | liked it, I check my perceived quality of the movie against
             | the quality of my current favorite movie, and if the new
             | movie beats the old favorite, I update the "favorite movie"
             | pointer to point to the new movie. When someone asks
             | "What's your favorite movie?" I just return the name of
             | whatever the favorite_movie points to.
             | 
             | The question of "Favorite movie where a gun is shot" is
             | much harder, I think, because my memories aren't really
             | indexed that way. I can't query by "gun is shot" so I can't
             | get the subset of movies I've seen with gun shots and pick
             | my favorite.
             | 
             | To me, it seems like my brain, at least for movies, has
             | something of a key value store, which I can scan, slowly
             | and imperfectly, but not query with complex questions. Or,
             | maybe, if the queries are too complex they timeout and I
             | don't get back any results.
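              | 
              | A toy sketch of those two access patterns - a pointer
              | kept current at write time vs. a slow scan for a
              | predicate nothing was indexed on (movie data made up):
              | 
              |   movies = {}        # title -> (rating, gun_fired)
              |   favorite = None    # pointer updated while "watching"
              | 
              |   def watch(title, rating, gun_fired):
              |       global favorite
              |       movies[title] = (rating, gun_fired)
              |       if favorite is None or rating > movies[favorite][0]:
              |           favorite = title   # cheap to answer later
              | 
              |   def favorite_where(pred):
              |       # No index for this: full scan of everything seen.
              |       hits = [t for t, m in movies.items() if pred(m)]
              |       return max(hits, key=lambda t: movies[t][0],
              |                  default=None)
              | 
              |   watch("Heat", 9, gun_fired=True)
              |   watch("My Neighbor Totoro", 10, gun_fired=False)
              |   print(favorite)                        # O(1)
              |   print(favorite_where(lambda m: m[1]))  # O(n)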
        
             | Digit-Al wrote:
             | Some very interesting points.
             | 
             | I would like to say that the hardware is a bit of a mess as
             | well. There are weird redundant bits of legacy hardware
             | that aren't required any more, but nobody's bothered to
             | remove them from the system (appendix, wisdom teeth). There
             | are oddly paired systems (genitals combine waste removal
             | with reproduction; the nose combines air filtering with
             | scent detection; the mouth combines food intake and air
             | intake/outlet). And oddly co-dependent systems (lose your
             | sense of smell and your sense of taste takes a significant
             | hit).
        
               | jbay808 wrote:
               | What do you mean? That sounds just like a modern CPU to
               | me! :)
        
         | canjobear wrote:
         | When we say the brain has poor computational accuracy, we're
         | usually talking about the conscious brain we're aware of. But
         | our low-level motor actions and perceptions, coordinated by the
         | brain, require a lot of precise computation. These low-level
         | brain computations are the thing to compare to AI, not our
         | conscious thinking. Our conscious mind is more like low-
         | precision software running on top of an enormously powerful
         | computer.
        
           | jakeogh wrote:
            | It's unlikely we compute in any conventional sense; the
            | hardware is physical reality, and it is going to exploit
            | every available effect that is energetically efficient.
           | 
           | Letting FPGA's "go" into the analog realm is an interesting
           | window: https://news.ycombinator.com/item?id=21253267
           | 
           | Glial brain cells:
           | https://news.ycombinator.com/item?id=22161192
        
           | joesb wrote:
           | > But our low-level motor actions and perceptions,
           | coordinated by the brain, require a lot of precise
           | computation.
           | 
            | But we don't actually do that precise computation,
            | though. Try repeating any action exactly and you will see
            | some inaccuracy.
        
             | maskboy wrote:
              | No, you will see some variability.
        
           | TeMPOraL wrote:
           | > _But our low-level motor actions and perceptions,
           | coordinated by the brain, require a lot of precise
           | computation._
           | 
           | That accuracy is more likely achieved through fast, analog
           | feedback loops than precise calculation.
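            | 
            | A minimal sketch of the idea - a proportional feedback
            | loop with a made-up gain lands on a target without ever
            | computing a trajectory:
            | 
            |   target, hand = 10.0, 0.0
            |   gain = 0.3   # tuned by trial, not derived
            |   for _ in range(30):
            |       error = target - hand
            |       hand += gain * error  # correct a fraction per tick
            |   print(round(hand, 3))     # ~10.0, no precise math
            | 
            | The same correct-a-fraction-of-the-error loop keeps
            | working even if the target moves, which is closer to what
            | motor control actually has to deal with.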
        
             | Reelin wrote:
             | Like a giant stack of op-amps.
             | (https://www.computerhistory.org/revolution/analog-
             | computers/...)
        
       | LordHeini wrote:
       | Is it though?
       | 
       | I think having a good metric is really hard.
       | 
        | For example, I can have a neural net running on my smartphone
        | doing recognition tasks.
        | 
        | That's a task the brain is typically good at due to its neural
        | net structure, while the computer basically has to simulate
        | the net.
        | 
        | But still, my smartphone can mark all the faces in a crowd
        | multiple times over in less time than I need to recognize even
        | a single person.
        | 
        | And that with a camera way beyond the capabilities of the
        | human eye.
        | 
        | Modern smartphone processors draw around 1 or 2 watts max. So
        | is my phone more efficient at doing this?
        | 
        | One could argue that my brain does other stuff at the same
        | time, like controlling my heartbeat and whatnot, but my phone
        | has to keep up the wifi, clock and so on too.
        | 
        | The truly impressive part is the brain's ability to do
        | completely generic problem solving for basically everything
        | while running on 10 watts, with the added ability to learn a
        | few activities to a really high level.
        | 
        | It is not efficient at doing a singular thing; it is efficient
        | at doing everything at once.
        
         | rimliu wrote:
          | The human eye is very lousy. Only a small patch is capable
          | of some decent resolution. That we actually perceive what
          | we see as something sharp is a compliment to the brain in
          | itself.
        
         | JoeAltmaier wrote:
         | Yes but for integrating information, your brain is marvelous.
          | Somebody in the crowd laughs or moves a certain way or you
         | catch a sniff - and BAM you found your person.
         | 
         | Any automated single-skill system might be more efficient, but
         | of course it becomes useless outside its parameters. Put a hat
         | on those people in the crowd and your phone may be totally
         | defeated.
        
       | mrwnmonm wrote:
       | The brain is weird. You can figure out how to split an atom, then
       | forget your keys inside your car.
        
       | jonnypotty wrote:
       | World record tennis serve is 144 miles an hour and a human can't
       | really move across a court and return a ball moving at this
       | speed. If they're lucky they can reach it and react in time to
       | hit it. I'm a bit confused by an article that claims tennis
       | players can react to and return serves up to 160 miles an hour. I
       | think evidence suggests that returning balls anywhere near this
       | fast is dependent on analysing factors before the ball starts
       | moving, the other players body position, racket position etc.
       | Players have an intuition about where the ball is going to go
       | without having to look at and analyse the flight of the ball.
       | 
        | Just did some very basic checking. Tennis court: 23m. 160mph =
        | 72m/s. Ball takes approx. 0.3ms to travel the length of the
        | court. Human reaction time to a visual stimulus: 0.25ms. So the
        | idea is they move and hit the ball in the remaining 0.05ms?
        | Hmmmmm.
        
         | skummetmaelk wrote:
         | You're right that it's not possible to react if all the
         | information you have is the trajectory of the ball after it
          | leaves the racquet. Good players will subconsciously be
          | predicting the path of the ball by looking at how the
          | opponent is striking the ball.
        
         | rooam-dev wrote:
          | It doesn't specify who serves it at that speed either; it
          | could be some kind of "serving machine".
        
           | jonnypotty wrote:
           | I guess so. But I'd say humans would have no chance at this
           | speed without another player to analyse visually.
        
         | ekelsen wrote:
         | Serves in tennis don't go directly down the line, they go cross
         | court. Returning players will often be standing behind the
         | baseline. Additionally balls start ~2.5-3m above the ground,
         | bounce and then come up again. The total distance traveled is
         | probably closer to ~27m.
         | 
         | The air resistance slowing the ball down is significant -
         | combined with the energy the ball loses bouncing, the ball has
         | lost more than half of its initial velocity by the time it gets
         | to the returning player.
         | 
          | I found these speed-gun stats on a tennis forum for the ball
         | speed at different points:
         | 
         | Speed after being hit: 126mph
         | 
         | Speed before hitting court: 89mph
         | 
         | Speed after hitting court: 67mph
         | 
         | Speed at returner's baseline: 58mph
         | 
         | Even after doing the calculations correctly, there still isn't
         | a lot of time for reactions, but it is more plausible than your
         | initial analysis suggests. (Your units also seem off - should
         | be seconds not milli-seconds).
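          | 
          | Redoing the back-of-envelope with those numbers (the
          | 18m/9m split of the ~27m path is a guess, and I'm just
          | averaging the endpoint speeds of each leg):
          | 
          |   MPH_TO_MS = 0.44704
          |   legs = [
          |       (18.0, (126 + 89) / 2),  # hit -> bounce
          |       (9.0,  (67 + 58) / 2),   # bounce -> returner
          |   ]
          |   t = sum(d / (mph * MPH_TO_MS) for d, mph in legs)
          |   print(round(t, 2))  # ~0.7 s: tight, but not impossible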
        
           | jonnypotty wrote:
            | Thanks for this. Much better than what I did, although I
            | don't think 58 is a third of 126, more like a half.
        
         | Implicated wrote:
         | 'Players have an intuition about where the ball is going to go
         | without having to look at and analyse the flight of the ball.'
         | 
         | This is pretty easily observable with baseball players as well.
         | After playing thousands of games while standing in the same
         | (relative) place on the field I/they can anticipate where the
         | ball is going to go based on a variety of variables in real-
         | time... instantly.
        
           | emsy wrote:
           | In the book "Thinking fast and Slow" the Baseball example is
           | given for a learned intuition. The ball is too fast for
           | batters to react so they learn to anticipate the trajectory
           | of the ball from the way the pitcher throws. When they let
           | professionals players play against a female softball player
           | with lower throwing speed their intuition was off and they
           | missed the ball more often than against a male professional
           | player.
        
           | hacker_9 wrote:
           | 50% of the time it works 100% of the time
        
           | ajuc wrote:
            | There was a festival of jugglers in my city and they taught
            | me to juggle in like 15 minutes. I was amazed it's so easy
            | (the basic 3-ball juggling, and just for a minute or two;
            | the more difficult juggling is HARD and I had to train
            | later to be able to keep juggling forever).
           | 
            | There is a very easy trick - you look forward into the
            | distance, keeping the balls in peripheral vision, and there
            | are 2 automatic reactions you have to develop:
           | 
           | 1. when the ball going up is at the top of the curve - throw
           | another ball up
           | 
            | 2. when a falling ball goes out of your peripheral vision -
            | do the "oh shit something's falling let's catch it" routine
            | with the hand that has fewer balls in it.
           | 
            | Hands learn very quickly how to move "by themselves" to
            | catch the balls that leave the peripheral vision, based on
            | the trajectory you've seen.
           | 
           | It's actually harder to juggle when you look at the balls
           | directly, and it's impossible when you think about it and try
           | to do the moves consciously because you're too slow.
           | 
           | It was mindblowing to me that it's easier to catch a ball
           | when you don't look at it.
        
           | huffmsa wrote:
           | Almost all of the "amazing" things the brain does are
           | basically continuously refined predictive branch execution.
           | 
           | Which is why practice is important. You're essentially
           | strengthening certain neural pathways with continuous
           | exposure to certain inputs.
           | 
           | But this strength also makes us susceptible to misdirection
            | and sleight of hand.
        
         | cepth wrote:
         | The article's articulation of what goes into returning a serve
         | is a bit simplistic, but the underlying idea is not crazy.
         | 
         | * When you return a serve in tennis, you are doing so from only
         | one side of the court. The opponent's serve can only land in a
         | service box that provides 13 feet of lateral space.
         | 
         | * Practically, there are relatively few spots in the service
         | box that can be reached by a serve. Because of human physiology
         | (the length of our arms, joints in the arms etc.), it would be
         | extremely painful to try to hit a fast serve to certain parts
         | of the service box. Either that, or the server would have to
         | stand in atypical positions on the service line (i.e. not at
         | the center tick) that would be a dead giveaway of where the
         | server was trying to hit to.
         | 
         | * So, in simplistic terms, most tennis players are choosing
         | between more-or-less staying in place (to return a body serve),
         | or leaping to their left or right. The serve must bounce before
         | you hit it, and it will be bouncing "towards you" vertically.
         | The returner thus is very rarely going to move vertically. This
         | usually only happens when you are moving in to pummel a slow
         | and short serve.
         | 
         | * At the highest levels of tennis, the vast majority (60%+) of
         | serves are going out wide, or down the middle
         | (https://www.atptour.com/en/news/berrettini-infosys-serve-
          | loc...). Mind you, these are also the same players who would
         | have the physical conditioning and athleticism to actually be
         | able to hit these blazing fast serves.
         | 
         | * Additional information for the returner is conveyed by the
         | serve toss. Almost all players are giving away tells here. For
         | example, if I'm a right hander serving from the deuce court,
         | and I toss my ball to the left (the "11 o'clock position"),
          | it's highly unlikely that I'm hitting the ball down the middle.
         | Doing so would require one of those aforementioned contortions
         | in my arms and legs, and I would then be unlikely to generate
         | the power needed to strike the ball in a way that leads to a
         | super fast serve.
         | 
         | * So in reality, by the time that the server is making contact
         | between their racket and the ball, the returner will have a
         | general idea of the direction that the ball is going in.
         | 
         | * The article does conflate getting your racquet on the ball,
         | and making a successful return. Just as with any other tennis
          | shot, there is no guarantee that your return does not go into
         | the net, or go flying out. I think it's a far more plausible
          | claim that professional tennis players can get their racquet
          | on the ball vs claiming that they can cleanly/successfully
         | return these super fast serves.
         | 
         | Some other thoughts:
         | 
         | * Placement is just as important as speed in determining how
         | returnable a serve is.
         | 
         | * For example, there are plenty of examples of top tennis
         | players returning extremely fast serves. Federer against Isner
         | (140 mph): https://youtu.be/5gcvLbtaNxM, Murray against Raonic
         | (147 mph): https://youtu.be/8GYX4ZIPJsg
         | 
         | * The commonality between these successful returns is that the
         | serves themselves were fast, but poorly placed. By serving
         | right down the middle, the servers allowed Federer and Murray
         | to take one small step, and then make good contact with the
         | serves for an "easy return".
         | 
         | * One small quibble with the "world record tennis serve" you
         | cite. It's not 144 mph, but rather 157.2 mph (hit by John
          | Isner). If anything though, this helps your argument.
         | 
         | * The unofficial record is 160+ MPH (hit by Sam Groth), but
         | this was at a second tier tournament with a questionable radar
         | gun (https://youtu.be/uKeL-W7xft0). Notice how even with this
         | serve, the returner correctly guesses where the serve is
         | headed, and even looks to have gotten a racquet on it.
         | 
         | * It's a bit of a chicken and an egg problem as well. There is
         | a very tiny sliver of people in the world who are physically
         | fit enough and who possess the natural physical traits (like
         | height and broad shoulders) necessary to hit serves in the 140+
         | MPH range. These people are likely playing on the ATP against
         | the players in the world best equipped (mentally and
         | physically) to return their serves.
         | 
         | * So all this is to say, returning serves in that 140-160 MPH
         | range is a low probability proposition. Heck, a perfectly
         | placed and well disguised serve even in the 110 MPH range can
         | be unreturnable (as seen in two decades of Federer highlights).
         | But, humans are indeed "capable" of returning serves in that
         | speed range.
        
       | sddfd wrote:
        | I feel uncomfortable with the ubiquitous, silent assumption
        | that what is marketed as AI is a computer implementation of a
        | brain.
        | 
        | I see how the term neural network reinforces this belief, but
        | we (especially the researchers among us) should allow for the
        | possibility that we are missing something.
        
         | dkersten wrote:
          | I agree. I think it's very widely known that our ANNs are
          | only very rough approximations of how the brain actually
          | works. I think the people who say it's a computer
          | implementation of the brain are either laypeople who don't
          | know much about machine learning or the brain, people
          | marketing the hype for personal gain, or people without
          | neuroscience knowledge who have bought into the hype.
          | 
          | I also recently heard an argument for why our ANN models
          | won't spontaneously become sentient: human brains don't
          | learn from just observation, but also interaction. A young
          | child doesn't learn about how blocks are stacked by looking
          | at images of stacked boxes; they learn through
          | experimentation, by stacking boxes and seeing how their
          | actions affect the world around them. For an AI, that means
          | we either need to also work on robotics so the AI can
          | interact with its environment, not just sense it, or we
          | need to simulate an interactive virtual environment. Some
          | people are working on this and making great strides, but
          | your average toy ANN won't exhibit human intelligence in
          | isolation, in my opinion.
         | 
         | Combine those two things and we're still quite a ways away from
         | human-like intelligence or implementing a human (or
         | animal)-like brain.
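          | 
          | A toy sketch of what "learning by interaction" means here -
          | the world, the action and the odds are all invented; the
          | point is just that the statistics only exist because the
          | agent acts:
          | 
          |   import random
          | 
          |   def world_step(action):
          |       # The agent can't know these odds without trying.
          |       if action == "stack":
          |           ok = random.random() < 0.7
          |           return "stable" if ok else "toppled"
          |       return "nothing"
          | 
          |   model = {}
          |   for _ in range(1000):
          |       outcome = world_step("stack")
          |       model[outcome] = model.get(outcome, 0) + 1
          | 
          |   print(model)  # e.g. {'stable': ~700, 'toppled': ~300}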
        
           | martin-adams wrote:
            | Interestingly, there are some studies implying that intense
            | thinking about doing an activity (such as a gym workout[1]
            | or hitting a baseball) can improve your physical skills
            | more than if you didn't think about it. So this supports
            | the notion that you can rewire your brain by thinking, as
            | well as by tactile input.
           | 
           | [1] http://nautil.us/blog/just-imagining-a-workout-can-make-
           | you-...
        
             | dkersten wrote:
              | That's not really what I'm referring to (or at least,
              | only a little). Once you have a mental model of
              | something, you can for sure think on it or build on it
              | without interaction, but to initially set up our mental
              | models (as children or whatever), I believe it takes
              | interaction. Once we have a base, we can think
              | abstractly about it and learn, but building that base...
              | 
              | Or, put another way, it's my belief that you can
              | "_improve_ your physical skills" by thinking, but to
              | build the skill in the first place, interaction is
              | necessary.
              | 
              | But even if that's not true and interaction isn't
              | strictly necessary, I think (wrongly perhaps) that few
              | people would disagree that learning by doing is usually
              | far superior to only learning by
              | thinking/reading/listening/watching. So even if not
              | necessary, it's at least more efficient (doing both
              | together is probably most efficient).
        
         | martin-adams wrote:
          | Absolutely. I think what AI has highlighted is that the
          | problem set now looks more similar to the human experience.
          | For example, how you train based on input and learn from
          | failure, and how limited information can confuse even a
          | human brain (think image recognition). That said, just
          | because the problem looks the same doesn't imply the method
          | of processing is the same.
        
         | [deleted]
        
         | papito wrote:
         | Neural networks also have no ability to create new information
         | based on their own mistakes. What is a mistake? When does
         | something look "off" but still very interesting?
         | 
         | For example, you can feed a neural net all the recipes of
         | burgers to create a perfect burger. Great. But how does the
         | same net _invent_ the burger?
         | 
         | The burger, like many foods or accidental art, was invented as
         | a result of scarcity, circumstance, experimentation, or just
          | fortunate error. That sort of imperfection is very hard to
          | achieve with AI, because it is designed either to be perfect
          | or to fail.
        
           | gambiting wrote:
           | >>For example, you can feed a neural net all the recipes of
           | burgers to create a perfect burger. Great. But how does the
           | same net invent the burger?
           | 
           | Wait....but it just....did? It took the information about all
           | possible burger recipes and invented a new one out of these.
           | Like, a human could only invent a new burger if they knew
           | anything about burgers in the first place, at the very least
           | that it's a bun with some filling in between, otherwise you'd
           | have no context to invent anything.
        
             | jayjader wrote:
             | Not OP, but I think they're not talking about inventing a
             | _new_ burger, but inventing _the_ burger, as in the first
             | one ever.
             | 
             | As in, the neural net in this example is able to improvise
             | a new burger recipe solely because it was given existing
             | recipes to burgers as input; it did not come up with the
             | notion of a burger and then produce a recipe that outputs
             | something fulfilling that notion when followed.
             | 
             | Personally, I would argue that this distinction is not as
             | clear-cut as the tone of the original comment seems to
             | suggest. Humans didn't invent the burger from nothing
             | either. We've been grilling meat and making bread for
             | millennia, and sandwiches have been a thing for over a
             | century.
             | 
             | A 'burger' is just another iteration of our biological
             | neural nets' attempts to make food from ingredients already
             | present in our physical reality. Given that we flow in a
             | single direction through time, any food we make is in turn
             | added to our list of ingredients for making food "the next
             | time". One could argue it is only a matter of time once
             | meat can be ground into patties and grains turned into
             | bread that burgers start being made - given the relative
             | benefits humans gain from consuming both.
             | 
             | This comes back to what others have expressed elsewhere in
             | this thread, that the probable [most] important
             | distinctions aren't between software vs hardware, or
             | organic life vs silicon processors, but the environment &
             | capacity to interact with said environment. Some sense of
             | "innate tendency to experiment" (i.e. curiosity) is
             | probably either equal in importance or a direct runner-up.
        
           | GuB-42 wrote:
            | Adversarial self-play can do that. For example, AlphaZero
            | invented strategies for the game of Go from nothing but a
            | random number generator and the rules of the game. As for
            | perfection, neither Go nor chess AIs play perfectly, and
            | they can still beat the best human players.
            | 
            | Of course, an AI intended to play Go isn't going to invent
            | the burger. But I see no reason why, given a list of
            | ingredients, their properties and a model of what humans
            | enjoy eating, a neural network couldn't invent the burger.
           | 
           | Creating a new recipe is just an optimization problem at its
           | core.
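            | 
            | A toy sketch of that framing - random hill climbing over
            | ingredient sets against a stand-in preference score (the
            | pantry and the scores are entirely made up):
            | 
            |   import random
            | 
            |   PANTRY = ["bun", "patty", "cheese", "lettuce",
            |             "anchovy", "jam"]
            | 
            |   def enjoyment(recipe):
            |       # Stand-in for "a model of what humans enjoy":
            |       s = 0
            |       if "bun" in recipe and "patty" in recipe:
            |           s += 3
            |       if "cheese" in recipe:
            |           s += 1
            |       if "anchovy" in recipe and "jam" in recipe:
            |           s -= 2
            |       return s + random.random() * 0.1  # noisy tastings
            | 
            |   best = set()
            |   for _ in range(500):
            |       trial = set(best)
            |       item = random.choice(PANTRY)  # flip one in or out
            |       trial.symmetric_difference_update({item})
            |       if enjoyment(trial) > enjoyment(best):
            |           best = trial
            |   print(sorted(best))  # tends toward bun + patty (+ cheese)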
        
         | grenoire wrote:
         | I am definitely not an expert on this topic but my impression
         | is that the research is not really focusing on structured
         | abstractions of sensory input, or making these abstractions
         | stateful. Shapes, colours, music, and whatnot are clearly
         | stored and retrieved in our brains, which is something NN
         | research is not looking at (enough).
        
       | headalgorithm wrote:
       | See discussion from 2018:
       | https://news.ycombinator.com/item?id=16895124
        
       ___________________________________________________________________
       (page generated 2020-06-05 23:00 UTC)