[HN Gopher] Parallels in the ways that humans and ML models acqu...
___________________________________________________________________

 Parallels in the ways that humans and ML models acquire language
 skills

 Author : theafh
 Score  : 61 points
 Date   : 2023-05-22 13:48 UTC (9 hours ago)

 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)

| ckemere wrote:
| I wish I could include Fig. 1 of the paper here
| (https://www.nature.com/articles/s41598-023-33384-9/figures/1).
| The result would be better stated as "the ANN performs nonlinear
| time-domain filtering similar to the human brain stem". There
| seems to be nothing at all about the learning process, just that
| ABR recordings of English and Spanish speakers hearing a
| confusing syllable are different, and that an ANN trained on
| English and Spanish shows a similar difference...
|
|   | ravi-delia wrote:
|   | That is honestly a much more interesting result than the
|   | title would suggest. We know the brain can't do backprop
|   | (neurons are one-way), but the fact that there is
|   | convergence in algorithm is very fun.
|   |
|   |   | ckemere wrote:
|   |   | I suppose. Equivalent results about natural images and
|   |   | edge detection were reported in the image-processing
|   |   | (classical, not deep) ML literature 20 years ago...
|
| denial wrote:
| I'm probably flaunting my ignorance here, but how isn't this an
| extremely tenuous connection? The graphs are unconvincing beyond
| a "... Maybe? I guess?", and comparing brain activity to NN
| activity seems dubious.
|
| I'd be curious what other sounds look like for both.
|
|   | moffkalast wrote:
|   | Username.. checks out?
|
| tikkun wrote:
| Related:
|
| "Here's a phenomenon I was surprised to find: you'll go to
| talks, and hear various words, whose definitions you're not so
| sure about. At some point you'll be able to make a sentence
| using those words; you won't know what the words mean, but
| you'll know the sentence is correct. You'll also be able to ask
| a question using those words.
| You still won't know what the words mean, but you'll know the
| question is interesting, and you'll want to know the answer.
| Then later on, you'll learn what the words mean more precisely,
| and your sense of how they fit together will make that learning
| much easier. The reason for this phenomenon is that mathematics
| is so rich and infinite that it is impossible to learn it
| systematically, and if you wait to master one topic before
| moving on to the next, you'll never get anywhere. Instead,
| you'll have tendrils of knowledge extending far from your
| comfort zone. Then you can later backfill from these tendrils,
| and extend your comfort zone; this is much easier to do than
| learning "forwards". (Caution: this backfilling is necessary.
| There can be a temptation to learn lots of fancy words and to
| use them in fancy sentences without being able to say precisely
| what you mean. You should feel free to do that, but you should
| always feel a pang of guilt when you do.)"
|
| Reminds me of the attention mechanism in transformers!
|
| http://math.stanford.edu/~vakil/potentialstudents.html
|
| And for any parents with toddler-age children, seeing the way
| that toddlers relate to language, and that people relate to
| toddlers about language, leads to lots of fun observations that
| remind me of LLM-related concepts.
|
|   | _puk wrote:
|   | Thanks for this!
|   |
|   | I have made a point over the years of hanging out with
|   | people who are far more intelligent and talented than
|   | myself, many of whom are in completely different fields from
|   | my own... and I realise that I've always done this!
|   |
|   | Whether it's art, music, or the future of power generation,
|   | I've been able to hold many conversations that have an aha
|   | moment halfway through, where some nugget clicks and
|   | backfills the conversation to that point.
|   | And yes, I feel a pang of guilt when entertaining these
|   | conversations, but I've made solid friends off a number of
|   | these interactions, so I figure I can't be a completely
|   | unbearable bore!
|   |
|   | Or maybe I'm a bot.
|
|   | jameshart wrote:
|   | The important part is in the parens at the end, of course:
|   |
|   | > There can be a temptation to learn lots of fancy words and
|   | to use them in fancy sentences without being able to say
|   | precisely what you mean. You should feel free to do that,
|   | but you should always feel a pang of guilt when you do.
|   |
|   | GPT - as far as we know - feels no guilt pangs whatsoever.
|
| ftxbro wrote:
| > "While it's still unclear exactly how the brain processes and
| learns language, the linguist Noam Chomsky proposed in the 1950s
| that humans are born with an innate and unique capacity to
| understand language. That ability, Chomsky argued, is literally
| hard-wired into the human brain. The new work, which uses
| general-purpose neurons not designed for language, suggests
| otherwise. "The paper definitely provides evidence against the
| notion that speech requires special built-in machinery and other
| distinctive features," Kapatsinski said."
|
| chomsky isn't going to like this
|
|   | nborwankar wrote:
|   | This line of thinking may be confusing "sufficient" with
|   | "necessary". I don't believe Chomsky's thesis and
|   | Kapatsinski's statement are mutually exclusive; they could
|   | both be true. Chomsky doesn't appear to have made a general
|   | statement about language acquisition in each and every
|   | mechanism, and the existence of language acquisition via
|   | other mechanisms does not say anything definitive about
|   | humans. The use of the word "neuron" is not enough to define
|   | how an actual neuron might work aside from its first-order
|   | activation behavior. And Chomsky's thesis implies a genetic
|   | ability for language acquisition that is outside the scope
|   | of wiring up hardware and software neurons.
|   | Note that transfer learning is very loosely analogous to
|   | inheritance of language capability, and the expected
|   | widespread use of such models in the future may actually
|   | validate, not disprove, Chomsky.
|
|   | pessimizer wrote:
|   | > which uses general-purpose neurons not designed for
|   | language,
|   |
|   | I'm not sure about this. We've probably designed
|   | general-purpose "neurons" to talk to us, even if we didn't
|   | think of it that way. They aren't emulators of physical
|   | neurons, they're abstractions of speculative neurons. The
|   | way we figure out if they work is by making them talk to us.
|
|   | canjobear wrote:
|   | He's not going to care about it.
|
|   |   | ftxbro wrote:
|   |   | I mean, he is writing articles with titles like "The
|   |   | False Promise of ChatGPT", so he might care a little
|   |   | bit:
|   |   | https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-
|   |   | chat...
|
| unnouinceput wrote:
| Quote: "The results not only help demystify how ANNs learn, but
| also suggest that human brains may not come already equipped
| with hardware and software specially designed for language."
|
| I thought this was common knowledge. I mean, if we came with
| already-specialized hardware for language at birth, we'd speak
| right away, just as a newborn puppy barks. Or if we had
| specialized software, then children of geniuses would be
| geniuses themselves. And both cases are obviously not happening
| in real life.
|
| [deleted]
___________________________________________________________________
(page generated 2023-05-22 23:00 UTC)