[HN Gopher] Neuromorphic learning, working memory, and metaplast...
       ___________________________________________________________________
        
       Neuromorphic learning, working memory, and metaplasticity in
       nanowire networks
        
       Author : taubek
       Score  : 62 points
       Date   : 2023-04-24 17:51 UTC (5 hours ago)
        
 (HTM) web link (www.science.org)
 (TXT) w3m dump (www.science.org)
        
       | SeriousGamesKit wrote:
       | Really excited to see this after first learning about NWNs two
       | years back. Great to see progress on these new 'hardware'
       | techniques for AI. Well done to Alon & the team!
        
       | dpflan wrote:
       | FTA: "A quintessential cognitive task used to measure human
       | working memory is the n-back task. In this study, task variations
       | inspired by the n-back task are implemented in a NWN device, and
       | external feedback is applied to emulate brain-like supervised and
       | reinforcement learning. NWNs are found to retain information in
       | working memory to at least n = 7 steps back, remarkably similar
       | to the originally proposed "seven plus or minus two" rule for
       | human subjects"
       | 
        | Hm, so does the physical design of the device, having been
        | modeled after the human one, imply that the design of synapse
        | networks is going to be limited as much as the human "device"?
        | Are there other species with better n-back performance?
        
         | NeuroCoder wrote:
          | We use this in my lab and I think this is a lot more
          | complex than better or worse on the task as a whole.
          | Certain kinds of stimuli will interact with subject memory
          | in different ways. So even if there's research saying
          | another species is better or worse, it probably depends on
          | what is being recalled.
        
           | dpflan wrote:
            | I think I was more thinking about the possible direct
            | mapping of the physical device to the computational
            | device, implying that it may not be possible to make a
            | more intelligent device from such a base.
           | 
           | What is your lab doing? Are you mapping physical brains?
        
             | NeuroCoder wrote:
             | We don't really do brain mapping in the sense that would
             | apply to nanotechnology. The actual mechanism of working
             | memory is pretty hard to establish in humans at this level.
        
         | sva_ wrote:
          | Pretty sure I read before that chimpanzees have higher
          | 'n-back' capacity.
        
       | dr_kiszonka wrote:
        | There are neuromorphic deep learning algorithms. From what I
        | read, one promise of these spiking neural networks is higher
        | efficiency than that of typical neural nets, which would
        | enable learning from far fewer data samples.
       | 
       | If anybody here works with SNNs, can you share if you think this
       | claim is true? Also, are there any good entry points for people
       | interested in learning more about SNNs?
        
         | jegp wrote:
          | I'm a PhD student working with neuromorphic computing. I
          | like to think about SNNs as RNNs with discretized outputs.
          | The neurons themselves may have some complicated nonlinear
          | dynamics (currents integrating into the membrane voltage
          | somehow, etc.), but they are essentially just stateful
          | transfer functions. The notion of spikes is a crippling
          | simplification, but it's power efficient and you can argue
          | for numerical stability in the limit. So I tend to consider
          | spikes an annoying engineering constraint in some
          | neuromorphic systems. Brains function perfectly well
          | without them, although at smaller scales (C. elegans).
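          | 
          | To make the "stateful transfer function" view concrete,
          | here's a minimal discretized leaky integrate-and-fire
          | neuron in plain Python/NumPy (illustrative constants, not
          | from any particular chip):
          | 
          |     import numpy as np
          | 
          |     def lif_step(v, x, tau=20.0, v_th=1.0, dt=1.0):
          |         """One step of a leaky integrate-and-fire neuron.
          |         v: membrane voltage (state), x: input current."""
          |         v = v + (dt / tau) * (-v + x)  # leaky integration
          |         spike = v >= v_th              # discretized output
          |         v = np.where(spike, 0.0, v)    # reset after a spike
          |         return v, spike.astype(float)
          | 
          |     # Unrolled over time like an RNN: the state is the memory.
          |     v = np.zeros(3)
          |     for t in range(100):
          |         v, z = lif_step(v, x=np.full(3, 1.5))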
         | 
          | The true genius of neuromorphics, in my view, is that you
          | can build analog components that perform neuronal
          | integration for free. Imagine a small circuit that "acts"
          | like the stateful transfer function, with physical
          | counterparts to the state variables (membrane voltage,
          | synaptic current, etc.). In such a circuit you don't need
          | transistors to inefficiently approximate your function.
          | Physics is doing the computation for you! This gives you a
          | ludicrous advantage over current neural net accelerators.
          | Specifically 3-5 _orders of magnitude_ in energy _and_
          | time, as demonstrated in the BrainScaleS system
          | https://www.humanbrainproject.eu/en/science-development/focu...
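          | 
          | Concretely, the leaky integrator is just an RC circuit: the
          | membrane obeys tau * dV/dt = -(V - V_rest) + R * I(t), and
          | a capacitor leaking through a resistor solves that equation
          | continuously, with no clock and no multiply-accumulate
          | units. (That's the standard LIF textbook form, not anything
          | specific to BrainScaleS.)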
         | 
         | Unfortunately, that doesn't solve the problem of learning. Just
         | because you can build efficient neuromorphic systems doesn't
         | mean that we know how to train them. Briefly put, the problem
         | is that a physical system has physical constraints. You can't
          | just read the global state of an NWN and use gradient descent as
         | we would in deep learning. Rather, we have to somehow use local
         | signals to approximate local behaviour that's helpful on a
         | global scale. That's why they use Hebbian learning in the paper
         | (what fires together, wires together), but it's tricky to get
         | right and I haven't personally seen examples that scale to
         | systems/problems of "interesting" sizes. This is basically the
         | frontier of the field: we need local, but generalizable,
         | learning rules that are stable across time and compose freely
         | into higher-order systems.
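          | 
          | As a toy illustration of "local": a Hebbian update touches
          | only quantities that the two ends of a synapse can see,
          | never a global loss (purely illustrative, not the paper's
          | rule):
          | 
          |     import numpy as np
          | 
          |     def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
          |         """Each weight sees only its own pre- and post-
          |         synaptic activity; there is no global error signal."""
          |         dw = lr * np.outer(post, pre)  # fire together, wire together
          |         return w + dw - decay * w      # decay keeps weights bounded
          | 
          |     w = np.zeros((4, 3))
          |     pre = np.array([1.0, 0.0, 1.0])        # presynaptic spikes
          |     post = np.array([0.0, 1.0, 0.0, 1.0])  # postsynaptic spikes
          |     w = hebbian_update(w, pre, post)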
         | 
          | Regarding educational material, I'm afraid I haven't seen
          | great entry points for learning about SNNs in full generality.
          | I co-author
         | a simulator (https://github.com/norse/norse/) based on PyTorch
         | with a few notebook tutorials
         | (https://github.com/norse/notebooks) that may be helpful.
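          | 
          | If I remember the API right, the notebooks start from
          | roughly this pattern (check the repo for the current
          | version):
          | 
          |     import torch
          |     import norse.torch as norse
          | 
          |     # A leaky integrate-and-fire cell: calling it returns
          |     # spikes plus the updated neuron state, RNN-style.
          |     cell = norse.LIFCell()
          |     state = None
          |     for t in range(100):
          |         spikes, state = cell(torch.rand(1, 10), state)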
         | 
         | I'm actually working on some open resources/course material for
         | neuromorphic computing. So if you have any wishes/ideas, please
         | do reach out. Like, what would a newcomer be looking for
         | specifically?
        
       ___________________________________________________________________
       (page generated 2023-04-24 23:00 UTC)