[HN Gopher] Study urges caution when comparing neural networks t...
       ___________________________________________________________________
        
       Study urges caution when comparing neural networks to the brain
        
       Author : rntn
       Score  : 107 points
       Date   : 2022-11-03 19:35 UTC (3 hours ago)
        
 (HTM) web link (news.mit.edu)
 (TXT) w3m dump (news.mit.edu)
        
       | palata wrote:
       | No shit.
        
       | bee_rider wrote:
       | If they'd just called them "premium matrix multiplications" I bet
       | the field never would have caught on.
        
         | [deleted]
        
       | bawolff wrote:
       | The aspect of ai that makes me think something related is going
       | on, is how artifacts look in image generation systems like stable
       | diffusion.
       | 
        | Often these systems will have really bizarre artifacts, people
        | with 3 arms, etc. At the same time, when you glance at the
        | output without looking carefully, you will sometimes miss these
        | artifacts even though they should be absolutely glaring.
        
         | cameronh90 wrote:
          | Check out the Velocipedia project for something else along
          | this line of thinking:
         | http://www.gianlucagimini.it/prototypes/velocipedia.html
         | 
         | Turns out nobody quite knows how to draw a bicycle. They get
         | the gist but the details don't make sense.
        
         | Waterluvian wrote:
          | Yes! I'm confident this isn't an original thought, but I feel
          | like it's a dream generator. Things that aren't quite right
          | but are, in some way, perfectly contextually and topologically
          | valid. Like it's tricking the object classifier in my brain
          | with a totally unrealistic thing that my brain is ready to
          | simply accept.
         | 
         | There's some image I see on occasion that's 100% garbage. If
         | you focus on it you cannot make out a single thing. But if you
         | glance at it or see it scaled down, it looks like a table full
         | of stuff.
        
         | jasonwatkinspdx wrote:
          | While there's definitely a similarity, it's also important
          | not to overgeneralize. For example, the human vision system
          | and stable diffusion may end up using similar feature
         | decomposition, but that doesn't mean the rest of the brain
         | works anything like that.
         | 
         | I strongly suspect that if we do ever fully map the
         | "architecture" of the brain, the result will be a massive graph
         | that's not readily understandable by humans directly. This is
         | already the case in biology. We'll end up with a computational
         | artifact that'll help us understand cause and effect in the
         | brain, but it'll be nothing like a tidy diagram of tensor
         | operations like in state of the art ML papers.
        
         | RosanaAnaDana wrote:
          | I think anyone who has tripped would also commiserate. Seeing
          | too many eyes or fingers at a glance. Things feeling cartoony
          | or 'shiny'.
          | 
          | I'm not even really sure what most people mean by AI when
          | they talk about it, and I'm not sure that AGI is down the
          | trail cut by diffusion models. But stable diffusion et al are
          | clearly superhuman, and if AGI is ever accomplished, these
          | models will almost assuredly represent some of the learnings
          | required to get there.
        
           | sebmellen wrote:
           | Seeing my hand covered in eyes while tripping completely
           | changed my view of the mechanisms behind sight. Something
           | that had previously seemed so "real" and deterministic
           | suddenly was no longer; the interpretation layer was
           | momentarily unveiled.
        
           | merely-unlikely wrote:
           | My pet (uneducated) theory is that AI needs to have a parent
           | layer "consciousness" before it can become an AGI. Think of
           | that voice inside your head and your ability to control
            | bodily functions without needing to attend to them all the
            | time. My model is that our brains have many specialized
            | "sub-AIs" operating all the time (remembering to breathe,
            | for example), but the AI behind the voice can come in and
            | give commands that override the lower-level AIs. What you
            | think of as "me" is
           | really just that top level AI but the whole system is needed
           | to achieve general intelligence. Sort of like a company with
           | many levels of employees serving different functions and a
           | CEO to direct the whole thing, provide goals, modify
           | components, and otherwise use discretion.
        
         | ben_w wrote:
         | For me, it's the way generative videos can rapidly, but to my
         | eyes seamlessly, transition from one shape to another. I may
         | not be able to record my dreams, but my memories of my dreams
         | do match this effect, with one place or person suddenly
         | becoming another.
        
           | Galaxeblaffer wrote:
           | It's very similar to strong trips on psychedelics.
        
         | vharuck wrote:
         | But wouldn't the people creating these models and deciding
         | whether to publish them prefer ones with these "understandable"
          | mistakes? There might have been other models that had equal
          | potential as far as the evaluation measure goes, but humans
          | had
         | been involved all along the way and said, "Yeah, that picture
         | looks like a person made it. We should keep developing this
         | model."
        
         | ffwd wrote:
          | Not sure if I'm missing a subtle nuance in your point, but to
          | me those "artifacts" are completely expected. Artifacts like
          | 3 arms are the patterns / outputs in the model, but since it
          | doesn't have a fundamental understanding of the
          | patterns/objects like arms, it just blends many images of
          | arms together and creates things like 3 arms. That's also why
          | there are so many eyes, arms, legs, and other things in other
          | generative programs. It just spits out the training set in
          | random configurations (ish).
         | 
          | I also suspect the reason the images look OK at a glance is
          | that the images as a whole represent patterns in the model
          | too, so they actually come from "real life" / artist-created
          | images and thus have some sense of cohesion. The real trick
          | is making the AI have all the right patterns, so it never
          | makes a mistake at any scale of the image, while also being
          | able to combine the patterns with real understanding of what
          | they are conceptually. Until then it will be a "salad bowl
          | collage" thing at random intervals.
         | 
          | The closest resemblance to the brain, as far as I can tell,
          | is simply its hierarchical nature, which seems similar to
          | V1/V2 and the rest of the human visual system, but I've only
          | been told that; I'm no neuroscientist.
        
           | l33tman wrote:
           | "It just spits out the training set in random configurations
           | (ish)." is a pretty gross misrepresentation and
           | oversimplification of how such a model works, akin to saying
           | a human artist only spits out whatever they saw earlier in
           | their life in random configurations, or saying that SD only
           | spits out pixel values it has seen before, or combinations of
           | pixel values that form edges, etc.
           | 
           | FWIW I don't think there is anything particularly wrong in
           | the model architectures or training data that in some
           | fundamental way makes it impossible to always get 2 arms.
           | After all, lots of other tricky things are almost always
           | correct. I suspect it's a question of training time and model
           | size mostly (not trivial of course as it's still expensive to
           | re-train to check modified architectures etc). It's also a
           | matter of diffusion sampling iterations and choice of sampler
           | at inference time, for the case of SD.
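            | 
            | For instance, with Hugging Face's diffusers library you can
            | trade off sampler choice and step count directly. A rough
            | sketch (the model id and settings here are illustrative,
            | not a recommendation):
            | 
            |     from diffusers import (StableDiffusionPipeline,
            |                            EulerAncestralDiscreteScheduler)
            | 
            |     pipe = StableDiffusionPipeline.from_pretrained(
            |         "runwayml/stable-diffusion-v1-5")
            |     # swap in a different sampler at inference time
            |     pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
            |         pipe.scheduler.config)
            |     # more sampling iterations generally means fewer
            |     # glaring artifacts, at the cost of compute
            |     image = pipe("a portrait photo, two arms",
            |                  num_inference_steps=50,
            |                  guidance_scale=7.5).images[0]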
        
       | vavooom wrote:
       | If you are interested in learning about the intersection of
       | Artificial Neural Networks and Biological Neural Network
       | research, I recommend " _The Self-Assembling Brain - How Neural
       | Networks Grow Smarter_ " by Peter Robin Hiesinger. He attempts to
       | bridge research from both fields of study to identify where there
       | are commonalities and differences in the design of these
       | networks.
        
         | esalman wrote:
          | I second this. You can also check out the Brain Inspired
          | podcast episode that features him:
         | https://braininspired.co/podcast/124/
         | 
         | What I understand is that he claims the underlying algorithms
         | that govern our behavior and how it evolves from birth are
          | ingrained in our genetic code. Current neural network models
          | try to model our behavior, but they are way behind when it
          | comes to discovering those ingrained algorithms.
        
       | constantcrying wrote:
        | To me one important aspect is the existence of adversarial
        | attacks on neural networks. They essentially prove that the
        | neural network never "understood" its data. It hasn't found
        | some general categories which correspond somewhat to human
        | categories.
       | 
       | Human brains can be tricked too, but never this way and never
       | beyond our capacities for rational thought.
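        | 
        | For the unfamiliar, the classic demonstration is FGSM: nudge
        | every pixel slightly in the direction that increases the loss,
        | and the predicted label flips even though the image looks
        | unchanged to a human. A minimal sketch, assuming a PyTorch
        | image classifier (illustrative, not battle-tested):
        | 
        |     import torch
        |     import torch.nn.functional as F
        | 
        |     def fgsm(model, x, label, eps=0.03):
        |         # perturb each pixel by eps in the direction that
        |         # increases the classification loss
        |         x = x.clone().detach().requires_grad_(True)
        |         loss = F.cross_entropy(model(x), label)
        |         loss.backward()
        |         return (x + eps * x.grad.sign()).clamp(0, 1).detach()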
        
         | comboy wrote:
          | Optical illusions are one thing, but, I don't know,
          | "Predictably Irrational", "Thinking, Fast and Slow", or just
          | whatever is happening all around.
         | 
         | We do not understand our data.
         | 
          | In general, yes, I believe most people will only accept a
          | thinking machine when it can reproduce all our pitfalls.
          | Because if we
         | see something and the computer doesn't, then it clearly still
         | needs to be improved, even if it's an optical illusion.
         | 
          | But our bugs aren't sacred and special. They just passed
          | Darwin's QA some thousands of years ago.
        
           | ben_w wrote:
           | > But our bugs aren't sacred and special.
           | 
           | I'd agree about sacred, but I have a hunch they may indeed be
           | special... or at least useful. Current AI requires far more
           | examples than we do to learn from, and I suspect all our
           | biases are how evolution managed to do that.
        
             | marmada wrote:
             | Humans are trained on petabytes of data. From birth, we
             | ingest sights, sounds, smells etc. Imagine a movie of every
             | second of your life. And an audio track of every second of
             | your life. Etc. Etc.
             | 
             | Humans get a lot of data.
        
               | elcomet wrote:
                | And you didn't even count the data from millions of
                | years of evolution. The brain doesn't come as a blank
                | slate when you're born.
        
               | ben_w wrote:
               | That's literally what I was saying when I wrote "our
               | biases are how evolution managed to do that".
        
               | ben_w wrote:
               | Humans get a lot of _data_.
               | 
               | AI gets more _examples_.
               | 
                | Tesla autopilot _has_ a movie of every second it's
                | active, for every car in the fleet that uses it. It has
               | how many lifetimes of driving data now? And yet, it's...
               | merely ok, nothing special, even when compared to all
               | humans including those oblivious of the fact they
               | shouldn't be behind a wheel.
        
         | ben_w wrote:
         | 22 May 2018:
         | 
         | https://arxiv.org/abs/1802.08195
        
           | cuteboy19 wrote:
           | I wonder if adversarial attacks can be mitigated by simply
           | passing a few transforms of the same image to the neural
           | network.
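            | 
            | Something like test-time augmentation could do that:
            | average predictions over several transformed copies. A
            | rough sketch, assuming a PyTorch classifier (illustrative
            | only):
            | 
            |     import torch
            |     import torchvision.transforms.functional as TF
            | 
            |     def tta_predict(model, x, angles=(-10., -5., 0., 5., 10.)):
            |         # average logits over rotated copies; a perturbation
            |         # tuned to one view often transfers poorly to others
            |         logits = torch.stack(
            |             [model(TF.rotate(x, a)) for a in angles])
            |         return logits.mean(dim=0).argmax(dim=-1)
            | 
            | The known caveat is that an attacker who expects the
            | transforms can optimize through them (expectation over
            | transformation), so this raises the bar rather than closing
            | the hole.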
        
       | icare_1er wrote:
       | R.Penrose rules.
        
       | buscoquadnary wrote:
        | You're telling me we're not just 5 years from AGI.
        | 
        |  _Shocked Pikachu face_
        | 
        | Seriously, the good news is that AGI and fusion are only 5 to
        | 10 years out. The bad news is it's been that way for the past
        | 40 years.
        
         | [deleted]
        
         | ben_w wrote:
         | AGI is somewhere between "never" and "GPT-3 is it", depending
         | on how general the G has to be and how intelligent the I has to
         | be.
         | 
          | (For all its flaws, GPT-3 already does better at random
          | professional knowledge than many TV show script writers.)
        
         | gryBrd1987 wrote:
          | I sometimes wonder how much further along these initiatives
          | might be if the economy were focused on them instead of My
          | Pillow sales, cheap crap from Walmart, and handing personally
          | wealthy elites stacks of cash to create pointless jobs, and
          | the money were put into net new technology instead of Twitter
          | 2.0 and VR 4.0.
        
           | nightski wrote:
           | Who needs to eat or sleep, it just hurts productivity.
        
             | themitigating wrote:
             | The parent didn't suggest cutting off food, housing, or
             | other essentials.
        
               | mwint wrote:
               | Technically, everyone in Soviet Russia had food and
               | housing.
        
               | f6v wrote:
               | Have you lived there?
        
               | gryBrd1987 wrote:
               | Technically the US has more people in prison than China,
               | and QOL metrics have all been dropping for decades.
               | 
               | Crack epidemic. Prescription drug epidemic. RvW being
               | gutted.
               | 
                | Let's sit and fear becoming a nation-state that
                | collapsed 30 years ago while ignoring that we're
                | already on the path.
        
               | [deleted]
        
           | te_chris wrote:
           | Move to China or North Korea I guess? Still seems to have
           | problems with graft though.
        
             | gryBrd1987 wrote:
             | Ah the exceptional minds of the US. "If it smells like a
             | violation of politically correct tradition as I know it,
             | it's communism!"
        
               | te_chris wrote:
                | Not American, but you asked for a planned economy:
                | those are it.
        
               | gryBrd1987 wrote:
               | So all these US businesses do zero planning? The Fed is
               | raising rates for lulz? We've unintentionally allowed
               | consolidation of ownership? No one has any clue? You're
               | nitpicking semantics.
        
               | f6v wrote:
                | China is definitely not a "planned economy" in the
                | sense that you meant it. Also, every economy is planned
                | in some
               | sense. Every government plans how much it's going to make
               | and spend.
        
         | yboris wrote:
          | The meme of fusion being "5 years from now ... for the past
          | 40 years" is so frustrating. This is because investment in it
          | dropped to abysmal levels -- not even the "maintenance" level
          | from when it was just getting started.
         | 
         | If the government spent money on it, we would likely have more
         | progress.
         | 
         | And today isn't like it was before -- we have ReBCO ;)
        
         | otikik wrote:
         | We just need to figure out how to align the carbon nanotubes
         | properly
        
       | lalos wrote:
        | Here's my guess: neurons tap into quantum mechanics, but we are
        | too primitive to understand that for now. The brain was
        | initially modeled as humors/fluids back when we developed
        | aqueducts; then the telegraph came on the scene and it was
        | modeled as electrical impulses; now computers/ML are popular,
        | so we see it as a neural network. The next step is quantum.
        
         | tuanx5 wrote:
         | Funny you should mention that!
         | https://www.news.ucsb.edu/2018/018840/are-we-quantum-compute...
        
         | mach1ne wrote:
          | Eh, we might go there, but I don't think the core algorithm
          | that's running in our heads has much to do with the quantum
          | level.
        
         | varjag wrote:
          | Look up the 'Penrose argument'. _Personally_ though, I
          | believe it's the physicists' equivalent of seeing everything
          | as a nail when holding a hammer.
        
           | RosanaAnaDana wrote:
            | OK, but at least historically NNs as discussed were
            | interesting because of their resemblance to naturally
            | networked systems.
        
         | retrac wrote:
         | It's certainly not proven, but there are many hints in that
         | direction, and the hints keep piling up. Recent research [1] on
         | how the classic anaesthetics work (a great mystery!) suggests
         | they operate by inhibiting the entanglement of pairs of
         | electrons in small molecules which split into free radicals,
          | the electrons then physically separated but still entangled.
         | 
          | It seems at least possible that there is speed-of-light
          | quantum communication within the brain, and that
          | consciousness may hinge fundamentally on this. If this is
          | true, we're pretty much back to square one in terms of
          | understanding.
         | 
         | [1] https://science.ucalgary.ca/news/state-consciousness-may-
         | inv...
        
           | tabtab wrote:
            | We don't currently fully know how anesthetics work, largely
            | because we don't really know how the human brain works on a
            | large scale. We'd have to solve that before seriously
            | proposing quantum effects. In other words, it's too early
            | to rule out classic physics and chemistry as the brain's
            | primary mechanism. (Although solving how the brain works
            | could first require solving quantum mysteries, Occam's
            | razor says classic physics rules, in my opinion.)
        
         | Retric wrote:
         | If the Brian is using some physics we don't understand that's
         | something new not Quantum Mechanics. QM a specific theory of
         | how the world operates, if something else is involved it
         | doesn't fall under that theory it's [insert new theory's name
         | here].
         | 
         | I really don't get why everyone wants the Brian to operate on
         | some new QM effect other than peoples perception that a 100
         | year old theory is somehow cutting edge, spooky, or something.
         | Perhaps it's that the overwhelming majority of people who talk
         | about QM don't actually understand it even a little bit. Odd
         | bits of QM are already why lasers, LED's, and transistors work.
         | You use incites from the theory everyday in most electronic
         | devices, but it's just as relevant for explaining old
         | incandescent bulbs we just had other theories that seemed to
         | explain them.
        
           | dekhn wrote:
           | I think you're probably missing a number of the important
            | details. In the Penrose/Hameroff model, they're explicitly
            | saying that humans are observed to generate problem
            | solutions
           | that could not have been generated by a purely classical
           | computing process, therefore, the brain must exploit some
           | specific quantum phenomenon.
           | 
            | When you talk about QM as a theory of how the world
            | operates, there is a wide range of QM. Everything from
            | predicting the structure and energy states of a molecule,
            | to how P/N junctions work, to quantum computers. Now, for
            | the first one (molecules), the vast majority of QM is just
            | giving ways to compute the electron density and
            | internuclear distances using some fairly straightforward
            | and noncontroversial approaches.
           | 
            | For the other ones (P/N junctions, quantum computers,
            | etc.), those
           | involve exploiting very specific and surprising aspects of
           | quantum theory: one of quantum tunnelling, quantum coherence,
           | or quantum entanglement (ordered from least counterintuitive
           | to most). We have some evidence already that there are some
           | biological processes that exploit tunnelling and coherence,
           | but none that demonstrate entanglement.
           | 
            | Personally, I think most people believe the alternative to
            | Penrose: the brain does not compute non-computable
            | functions, and does not exploit or need to exploit any
            | quantum phenomena (except perhaps tunnelling) to achieve
            | its goals.
           | 
            | Now, if we were to have hard evidence supporting the idea
            | that brains use entanglement to solve problems: well, that
            | would be pretty amazing and would upend large parts of
            | modern biology and technology research.
        
             | Retric wrote:
              | The brain using entanglement would completely destroy
              | modern physics as we know it; the effect on biology would
              | be tiny by comparison.
              | 
              | Your other points are based on such a fundamental
              | misunderstanding that it's hard to respond. Saying
              | something isn't the output of classical computing
              | processes, while undemonstrated, is then used to justify
              | saying it must therefore use quantum phenomena. But
              | logically, not everything is either classical or quantum,
              | so even that inference is unjustified. It's like saying,
              | well, it's not a soda, so it must be a rock.
              | 
              | PS: If people were observed to solve problems that can't
              | be solved by classical computer processing, that would be
              | a really big deal. As in show-up-on-the-nightly-news,
              | win-people-Nobel-prizes big. Needless to say, it hasn't
              | happened.
        
             | russdill wrote:
             | The set of problems that are computable by a classical
             | computer are the same set of problems computable by a
             | quantum computer. I think you might be misstating the
             | Penrose argument/position.
        
             | [deleted]
        
           | RosanaAnaDana wrote:
           | My understanding of the hypothesis being represented here is
           | QM as a kind of random number generator operating at the
           | neuron/microtubule level. I didn't think there was anything
           | other than a modest injection of randomness being invoked,
           | but I could be misstating the premise.
        
             | crowbahr wrote:
             | It's an absurd premise to begin with: The scale at which
              | quantum effects propagate and are observed is radically
              | different from the scale at which the neurons in your
              | brain operate.
             | 
             | The functional channels for neurons are well understood,
             | even if we're still diagramming out all the types of
              | neurons. Voltage-gated calcium channels are pretty damn
             | simple in the grand scheme of things, and they don't leave
             | space for quantum interactions beyond that of standard
             | molecular interactions.
             | 
             | The only part of the brain we don't understand is how all
             | the intricacies work together, because that's a lot more
             | opaque.
        
         | marginalia_nu wrote:
          | Neurons almost certainly use quantum processes, but so do
          | most transistors. The brain is too warm for large-scale
          | quantum effects, though. You're not going to find phase
          | coherence at that scale in such an environment, and phase
          | coherence is pretty much the prerequisite for the kind of
          | quantum effects being proposed (that much is fairly well
          | understood).
        
           | tabtab wrote:
           | I believe what was meant was quantum-only or primarily-
           | quantum effects rather than the _aggregate_ effects we
           | normally see (classic physics  & chemistry), which are
           | probably the result of quantum physics, but we have "classic"
           | abstractions that model them well enough. Thus, the issue is
           | whether the brain relies mostly on classic effects (common
           | aggregate abstractions) for computations or on quantum-
           | specific effects.
        
         | Teever wrote:
         | But why is the next step quantum? And why is this the final
         | step?
        
           | tarboreus wrote:
           | Because we don't understand quantum physics, and we don't
           | understand the brain. I don't think we know if it's the final
           | step. There could be wizard jelly or something at the bottom.
        
             | marginalia_nu wrote:
             | Quantum physics is fairly well understood. Perhaps not
             | among laymen, but that's mostly due to pedagogical
             | challenges, which is why a lot of the discourse seems to be
             | stuck approaching it as though we were living nearly 100
             | years into the past.
        
       | kgarten wrote:
        | This article makes me sad ... a neural network can also be a
        | network of biological neurons; the author means artificial
        | neural network https://en.m.wikipedia.org/wiki/Neural_network
        | and the Wikipedia article even goes into the differences, so
        | why did we need a study for that?
        | 
        | "Study urges caution comparing jellyfish to jelly ... tasters
        | found they are not the same" (even though I hear that fried
        | jellyfish tastes nice...)
        | 
        | Study urges caution comparing the model to the real thing, as
        | the model has some generalizations the real thing does not ...
        
         | Barrin92 wrote:
          | The motivation is also in the article: the original research
          | that suggested similarities in activity only achieved those
          | similarities under conditions that are implausible in
          | biological systems, so the original research was likely
          | misleading.
        
         | gryBrd1987 wrote:
         | The brain is not involved in a whole lot of behaviors though.
         | Cells organize themselves to an extent. Cuts heal without us
         | focusing conscious thought on them.
         | 
         | The brain is a hard drive but the body is the whole computer.
         | 
          | Science is about proving physical causation, not just
          | writing down what we want to be true.
        
           | RosanaAnaDana wrote:
            | Emergent phenomena both, perhaps?
        
           | tudorw wrote:
           | https://thedebrief.org/is-consciousness-really-a-memory-
           | syst...
        
       | jcims wrote:
       | Andrej Karpathy was recently on Lex Fridman's podcast and covered
        | this to some extent. He has the same perspective on this topic
        | and expands on it quite a bit. Great listen overall, IMHO:
       | https://www.youtube.com/watch?v=cdiD-9MMpb0
       | 
       | I like his idea of finding 0-days in physics. :)
        
       | [deleted]
        
       | nvrspyx wrote:
       | From the actual study's abstract:
       | 
       | > Unique to Neuroscience, deep learning models can be used not
       | only as a tool but interpreted as models of the brain. The
       | central claims of recent deep learning-based models of brain
       | circuits are that they make novel predictions about neural
       | phenomena or shed light on the fundamental functions being
       | optimized... Using large-scale hyperparameter sweeps and theory-
       | driven experimentation, we demonstrate that the results of such
       | models may be more strongly driven by particular, non-
       | fundamental, and post-hoc implementation choices than fundamental
       | truths about neural circuits or the loss function(s) they might
       | optimize. Finally, we discuss why these models cannot be expected
       | to produce accurate models of the brain without the addition of
       | substantial amounts of inductive bias, an informal No Free Lunch
       | result for Neuroscience. In conclusion, caution and
       | consideration, together with biological knowledge, are warranted
       | in building and interpreting deep learning models in
       | Neuroscience.
       | 
       | And IMO a succinct description of the problematic assumption
       | being cautioned against in the study's introduction section:
       | 
       | > Broadly, the essential claims of DL-based models of the brain
       | are that 1) Because the models are trained on a specific
       | optimization problem, if the resulting representations match what
       | has been observed in the brain, then they reveal the optimization
       | problem of the brain, or 2) That these models, when trained on
       | sensibly motivated optimization problems, should make novel
       | predictions about the brain's representations and emergent
       | behavior.
       | 
       | ---
       | 
       | I think to most, the problem with claim number 2 directly above
       | is obvious, but it's important to also look at claim 1.
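        | 
        | To make "large-scale hyperparameter sweeps" concrete: the
        | worry is that whether brain-like structure emerges can swing
        | on incidental implementation choices. A toy harness, with
        | stand-in functions in place of the paper's actual training and
        | scoring (illustrative only):
        | 
        |     import random
        |     from itertools import product
        | 
        |     def train_and_score(activation, lr, init):
        |         # stand-in for training a network on the task and
        |         # measuring how "brain-like" its units look
        |         return random.random()
        | 
        |     grid = {"activation": ["relu", "tanh"],
        |             "lr": [1e-2, 1e-3, 1e-4],
        |             "init": ["xavier", "he"]}
        | 
        |     for values in product(*grid.values()):
        |         cfg = dict(zip(grid, values))
        |         print(cfg, train_and_score(**cfg))
        | 
        | If the "brain-likeness" score varies wildly across such a grid,
        | it's hard to credit any one configuration as revealing the
        | brain's optimization problem.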
        
         | [deleted]
        
       | random_upvoter wrote:
        | Any true understanding or new insight originates in the plexus
        | solaris, which is near your heart, then somewhat slowly works
        | its way up the spine. The brain is a somewhat predictable
        | fleshy motor capable of turning the insight into language,
        | storing it in memory, or acting on it. Most of the time we "get
        | by" with the stored procedures in the brain, but don't imagine
        | it's the place where original understanding is generated. Funny
        | how the ancient Egyptians understood this but we don't. Of
        | course this is also why all attempts to create AI by simulating
        | what happens in the brain are doomed to hilarious failure.
        
       | nathias wrote:
        | philosophers urge caution when comparing an analogy to a thing
        
         | dekhn wrote:
         | also the map is not the territory
        
         | cirgue wrote:
         | I think it's also important to highlight that the analogy
         | between neural networks and brains is to help people visualize
         | what a neural network is, not what a brain is. It's really just
         | to convey the idea of multiple nodes passing information to one
         | another. After that point, the comparison is useless because
         | the two systems diverge so wildly outside of that one (pretty
         | loose) conceptual connection.
        
       | adharmad wrote:
        | This is a very old article by Francis Crick which essentially
        | says the same thing: https://www.nature.com/articles/337129a0
        
         | babblingfish wrote:
         | As someone who studied Neuroscience in college, I remember this
         | paper and some other examples showing just how different
         | computational neural networks are from real neurons. It's
         | difficult for me to believe that professional researchers could
         | really believe a NN is an accurate model of the real deal.
         | 
         | The paper also does not have any reference to a study or paper
         | that explicitly states that a neural network is a good model
         | for grid cells. (Please correct me if I am wrong.) So I am left
         | wondering why this direction was chosen.
         | 
         | Maybe it's a little cynical, but this topic seems to have been
         | chosen (at least in part) to produce a splashy headline. Or in
         | other words, to give the Stanford and MIT PR engine something
         | to print.
         | 
         | This is the sort of obvious thing we all knew to be true. Why
         | people with access to lab animals and a fully stocked
         | microbiology lab needed to prove it (again) I do not
         | understand.
        
       | emptybits wrote:
       | Last week's Lex Fridman podcast featured Andrej Karpathy (former
       | director of AI at Tesla, founding member of OpenAI) and they
       | discussed this aspect briefly also.
       | 
        | The usefulness of neural networks has not ceased, even though
        | researchers' early ideas and hopes about their biological
        | analogies have somewhat fallen away.
        
       | cameronfraser wrote:
        | Most introductory deep learning courses are very clear about
        | how far the analogy goes. If people are interpreting it as
        | something more, I don't think that's the fault of
        | practitioners/educators so much as people's imagination and
        | selective hearing.
        
       ___________________________________________________________________
       (page generated 2022-11-03 23:00 UTC)