[HN Gopher] The brain 'rotates' memories to save them from new s...
       ___________________________________________________________________
        
       The brain 'rotates' memories to save them from new sensations
        
       Author : jnord
       Score  : 188 points
       Date   : 2021-04-16 06:04 UTC (1 day ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | wizzwizz4 wrote:
       | > _The work could help reconcile two sides of an ongoing debate
       | about whether short-term memories are maintained through
       | constant, persistent representations or through dynamic neural
       | codes that change over time. Instead of coming down on one side
       | or the other, "our results show that basically they were both
       | right," Buschman said, with stable neurons achieving the former
       | and switching neurons the latter. The combination of processes is
       | useful because "it actually helps with preventing interference
       | and doing this orthogonal rotation."_
       | 
       | This sounds like the early conservation of momentum /
       | conservation of energy debates. (Not that they used those words
       | back then.)
        
       | lupire wrote:
       | Abstract is mostly readable to a technically person:
       | 
       | https://www.nature.com/articles/s41593-021-00821-9
        
         | ThePowerOfDirge wrote:
         | I am technically person.
        
       | trott wrote:
       | Something to keep in mind though is that in a high-dimensional
       | space, approximate orthogonality of independent vectors is almost
       | guaranteed.
        
         | filoeleven wrote:
         | Can you say a bit more on what that means in this context?
        
           | FigmentEngine wrote:
           | probably a reference to the curse of dimensionality
        
           | fighterpilot wrote:
           | Not sure about the neuroscience context, but if you have two
           | large ("high-dimensional") vectors of variables that have a
           | population correlation of zero ("independent"), then the dot
           | product of a sample is likely to be close to zero
           | ("orthogonal") due to the law of large numbers.
        
         | adampk wrote:
         | Do you mean to say that the neurons in the brain are operating
         | in a space with more than 3 dimensions?
        
           | frisco wrote:
           | Yes, definitely. Here the "space" doesn't refer to physical
           | space, but to an abstract vector space that a neuron's
           | tuning represents. There is a famous paper[1] showing that
           | neurons can be responsive to abstract concepts -- for
           | example, one might fire for "Bill Clinton" regardless of
           | whether the stimulus is a photo of him, his name written as
           | letters, or even (with weaker activation) photos/text of
           | other members of his family or other concepts adjacent to
           | him. The neuron's activity gives a vector in this high-
           | dimensional concept space, and that's the "space" GP is
           | referring to.
           | 
           | [1] https://www.nature.com/articles/nature03687
        
             | mapt wrote:
             | Wouldn't it be especially inelegant/inefficient to try to
             | wire synapses for, say, a seven-dimensional cross-
             | referencing system, when you have to actually physically
             | locate the synapses for this system in three-dimensional
             | space?
             | 
             | (and when the neocortex that does most of the processing
             | with this data is actually closer to a very thin, almost
             | two-dimensional manifold wrapped around the sulci)
             | 
             | There has to be an information-theory connection between
             | the physical form and the dimensionality of the memory
             | lookup, even if they aren't referring to precisely the same
             | thing, right?
        
             | PullJosh wrote:
             | Can I get an ELI5 on how physical neurons, stuck in a
             | measly 3 dimensions, can possibly form higher-dimensional
             | connections on a large scale?
             | 
             | I understand higher dimensional connections in theory (such
             | as in an abstract representation of neurons within a
             | computer), but I can't imagine how more highly-connected
             | neurons could all physically fit together in meat space.
        
               | dboreham wrote:
               | Same as a silicon chip stuck in 2 dimensions can.
        
               | ajuc wrote:
               | > Can I get an ELI5 on how physical neurons, stuck in a
               | measly 3 dimensions, can possibly form higher-dimensional
               | connections on a large scale?
               | 
               | You can multiplex in frequency and time. I'm not sure if
               | neurons do it, but it's certainly possible with computer
               | networks.
        
               | wyager wrote:
               | Your stick of RAM is also stuck in 3 dimensions but it
               | reifies a, say, 32-billion-dimensional vector over Z/2Z.
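               | 
               | A toy sketch of that framing (my own example, using
               | numpy; 32 bits rather than 32 billion):
               | 
               |   import numpy as np
               | 
               |   ram = np.frombuffer(b"\x0f\xf0\xaa\x55", np.uint8)
               |   bits = np.unpackbits(ram)  # 32-dim vector over Z/2Z
               |   print(bits.shape)          # (32,)
               |   print((bits + bits) % 2)   # v + v = 0, i.e. XOR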
        
               | CuriouslyC wrote:
                | If you take a matrix of covariance or similarity
                | between neurons based on their firing patterns, and
                | try to approximate it as a weighted sum of a small
                | set of basis vectors (as PCA does), the number of
                | vectors you need to model the system accurately
                | gives you the dimensionality of the space.
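               | 
               | Roughly, in code (a sketch with made-up firing rates;
               | the "participation ratio" is one common way to turn
               | the eigenvalue spectrum into an effective
               | dimensionality):
               | 
               |   import numpy as np
               | 
               |   rng = np.random.default_rng(1)
               |   # fake rates: 100 neurons driven by 3 latents
               |   latents = rng.standard_normal((3, 5000))
               |   rates = rng.standard_normal((100, 3)) @ latents
               |   cov = np.cov(rates)            # 100 x 100
               |   ev = np.linalg.eigvalsh(cov)
               |   # effective dimensionality (participation ratio)
               |   dim = ev.sum() ** 2 / (ev ** 2).sum()
               |   print(round(float(dim), 1))    # close to 3, not 100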
        
               | fao_ wrote:
                | This does not seem particularly like the "Explain Like
                | I'm 5" explanation that the poster asked for.
        
               | dopu wrote:
               | If I'm recording from N neurons, I'm recording from an
               | N-dimensional system. Each neuron's firing rate is an
               | axis in this space. If each neuron is maximally
               | uncorrelated from all other neurons, the system will be
               | maximally high dimensional. Its dimensionality will be N.
               | Geometrically, you can think of the state vector of the
               | system (where again, each element is the firing rate of
               | one neuron) as eventually visiting every part of this
               | N-dimensional space. Interestingly, however, neural
               | activity actually tends to be fairly low dimensional (3,
               | 4, 5 dimensional) across most experiments we've recorded
               | from. This is because neurons tend to be highly
               | correlated with each other. So the state vector of neural
               | activity doesn't actually visit every point in this high
               | dimensional space. It tends to stay in a low dimensional
               | space, or on a "manifold" within the N-dimensional space.
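               | 
               | In code terms, a sketch of that picture (N = 50 fake
               | neurons whose rates are driven by just 3 shared
               | signals plus a little private noise):
               | 
               |   import numpy as np
               | 
               |   rng = np.random.default_rng(2)
               |   T, N, K = 2000, 50, 3   # time, neurons, latents
               |   Z = rng.standard_normal((T, K))  # shared signals
               |   W = rng.standard_normal((K, N))  # mixing weights
               |   X = Z @ W + 0.1 * rng.standard_normal((T, N))
               |   X -= X.mean(0)
               |   var = np.linalg.svd(X, compute_uv=False) ** 2
               |   print(np.cumsum(var / var.sum())[:5].round(2))
               |   # the first 3 PCs already explain ~all the variance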
        
               | cochne wrote:
                | Consider three neurons all connected together. Now
                | consider that each of them may have some 'voltage'
                | anywhere between 0 and 1. Using three neurons you
                | could describe any point inside a three-dimensional
                | unit cube. Add more neurons and you get whatever
                | large dimension you want.
        
               | fsociety wrote:
                | Think of it less as n-dimensional in meat space and
                | more as n-dimensional in how it functions.
        
               | [deleted]
        
               | exporectomy wrote:
               | Do you mean due to the thickness of each connection, they
               | would occupy too much space if the number of dimensions
               | was too high? Not necessarily 4 or more, just very high
               | because there are on the order of n^2 connections for n
               | neurons?
               | 
               | In the visual cortex, neurons are arranged in layers of
               | 2D sheets, so that perhaps gives an extra dimension to
               | fit connections between layers.
        
               | andyxor wrote:
               | see related talk by the first author: "Dynamic
               | representations reduce interference in short-term
               | memory": https://www.youtube.com/watch?v=uy7BUzcAenw
        
             | MereInterest wrote:
             | There was a fun article in early March showing that the
             | same is true for image recognition deep neural networks.
             | They were able to identify nodes that corresponded with
             | "Spider-Man", whether shown as a sketch, a cosplayer, or
             | text involving the word "spider".
             | 
             | https://openai.com/blog/multimodal-neurons/
        
               | andyxor wrote:
               | deep neural nets are an extension of sparse autoencoders
               | which perform nonlinear principal component analysis
               | [0,1]
               | 
               | There is evidence for sparse coding and PCA-like
               | mechanisms in the brain, e.g. in visual and olfactory
               | cortex [2,3,4,5]
               | 
                | There is no evidence though for backprop or similar
                | global error correction as in DNNs; instead,
                | biologically plausible mechanisms might operate via
                | local updates as in [6,7], or something akin to
                | locality-sensitive hashing [8]. (A minimal sketch of
                | such a local update follows the references below.)
               | 
                | [0] Sparse Autoencoder
                | https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
                | 
                | [1] Eigenfaces https://en.wikipedia.org/wiki/Eigenface
                | 
                | [2] Sparse Coding
                | http://www.scholarpedia.org/article/Sparse_coding
                | 
                | [3] Sparse coding with an overcomplete basis set: A
                | strategy employed by V1?
                | https://www.sciencedirect.com/science/article/pii/S004269899...
                | 
                | [4] Researchers discover the mathematical system used by
                | the brain to organize visual objects
                | https://medicalxpress.com/news/2020-06-mathematical-brain-vi...
                | 
                | [5] Vision And Brain
                | https://www.amazon.com/Vision-Brain-Perceive-World-Press/dp/...
               | 
               | [6] Oja's rule https://en.wikipedia.org/wiki/Oja%27s_rule
               | 
               | [7] Linear Hebbian learning and PCA
               | http://www.rctn.org/bruno/psc128/PCA-hebb.pdf
               | 
               | [8] A neural algorithm for a fundamental computing
               | problem
               | https://science.sciencemag.org/content/358/6364/793
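               | 
               | As a toy illustration of such a local update (my own
               | sketch, not taken from the papers above): with Oja's
               | rule [6] each weight change uses only the pre- and
               | post-synaptic activity, yet the weight vector converges
               | to the first principal component of the input.
               | 
               |   import numpy as np
               | 
               |   rng = np.random.default_rng(3)
               |   # inputs with most variance along the (1, 1) axis
               |   X = rng.standard_normal((10000, 2)) * [3.0, 0.5]
               |   X = X @ np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
               | 
               |   w, lr = rng.standard_normal(2), 0.005
               |   for x in X:
               |       y = w @ x                  # post-synaptic output
               |       w += lr * y * (x - y * w)  # Oja's local update
               |   print(w / np.linalg.norm(w))   # ~ +/-(0.71, 0.71)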
        
           | andyxor wrote:
           | Yes, grid cells in the entorhinal cortex (next to the
           | hippocampus) [0] form a coordinate system that is used for
           | 4D spatiotemporal navigation [1], as well as navigation in
           | abstract high-dimensional "concept space" [2]
           | 
           | [0] http://www.scholarpedia.org/article/Grid_cells
           | 
           | [1] Time (and space) in the hippocampus
           | https://pubmed.ncbi.nlm.nih.gov/28840180/
           | 
           | [2] Organizing conceptual knowledge in humans with a gridlike
           | code: https://science.sciencemag.org/content/352/6292/1464
        
           | [deleted]
        
           | darwingr wrote:
           | Yes but only in aggregate, like how adding a column to a
           | database table is also adding a "dimension" to said data.
           | 
           | I'm not convinced the author's analogy of cross-writing to
           | fit more information on a page is actually going to be
           | helpful to most people's understanding. It led me, at
           | least, to try to imagine visually what's going on, to
           | picture the input being physically rotated. What's
           | happening is more akin to the more abstract but inclusive
           | concept of rotation from linear algebra, where more
           | dimensions (of information, not space or time) make sense.
        
           | gleenn wrote:
           | If you think of groups of neurons in arbitrary dimensions,
           | where one group fires together for some things and a
           | different, partially overlapping group fires for other
           | things, then it's like two dimensions where each line is a
           | sense or thought, and the lines cross where neurons fire
           | for both memories. So two thoughts along two dimensions can
           | cross and light up that shared subset of neurons. If the
           | two thoughts, or lines, are orthogonal, then not many
           | neurons fire for both. If you have many, many neurons and
           | many, many memories, then the number of possible subsets of
           | firing neurons, i.e. the dimensionality, is huge. Like our
           | two lines but now in three dimensions: there are a lot of
           | ways for them not to overlap. So the chance that many
           | things in that space are orthogonal is high. In a highly
           | dimensional space, a whole lot of things don't overlap.
        
         | dopu wrote:
         | Sure, but the neural activity is actually low-dimensional (see
         | Extended Fig 5e). By day 4, the first two principal components
         | of the neural activity explain 75% of the variance in the
         | response. ~3-4 dimensions is not particularly high dimensional.
        
       | ivan_ah wrote:
       | The Nature version is paywalled
       | https://www.nature.com/articles/s41593-021-00821-9
       | 
       | but I found the preprint of the paper on biorxiv.org:
       | https://www.biorxiv.org/content/10.1101/641159v1.full
        
       | ordu wrote:
         | Curious. I cannot understand it clearly. Let's take, for
         | example, the "my wife and my mother-in-law" illusion[1]. It is
         | known for the property that one cannot see both women at once.
         | If we assume that it has something to do with this kind of
         | coding in neurons, would it mean that those women are
         | orthogonal, or would it mean that they refuse to go orthogonal?
       | 
       | [1] https://brainycounty.com/young-or-old-woman
        
         | bserge wrote:
         | Sorry, I'm pretty tired, but I fail to see the relation to this
         | article. How does that example apply?
         | 
         | I thought that was more of a case of a human's facial
         | recognition being a special function, and we're not able to
         | process two or more people's faces at the same time. Like, see
         | the details in them, recognize that it's _their face_.
         | 
         | You're either looking at one person, or the other, but if you
         | try to look at both of them at the same time, they become
         | "blurry", unrecognizable, even though you remember all the
         | other information about them both.
         | 
         | But that's not related to memory integrity and new
         | emotions/sensations?
        
           | ordu wrote:
           | It is human visual perception at work. Somehow your mind
           | chooses how to interpret sensations from the retina, and
           | shows you one of the women. Then your mind chooses to
           | switch interpretations and you see the other one. Both
           | interpretations are somewhere in memory. So it may be
           | connected with this research.
           | 
           | Like with the chords in the research: the mouse hears one
           | chord, and by association from memory it expects another
           | chord. But instead it hears some third chord. The expected
           | and unexpected chords have perpendicular representations,
           | if I understood correctly.
           | 
           | Here you see a picture, and expect one interpretation or
           | the other. You have a memory of both, but you get just one.
           | 
           | Possibly it doesn't apply, I do not know. I'm trying to
           | understand it. The obvious step is to make a prediction
           | from the theory: should the interpretations oscillate if
           | this has something to do with perpendicularity of the
           | representation in neurons?
           | 
           | When I hear another chord instead of the predicted one, do
           | prediction and sensation oscillate? I'm not quick enough to
           | judge based on subjective experience.
        
         | vmception wrote:
         | Wish they would outline the two variants.
         | 
         | I only saw the young woman before becoming uninterested in
         | making the other one appear, because why bother.
        
         | LordGrey wrote:
         | I spent 10 minutes staring at that picture and saw only the
         | wife. The mother-in-law never appeared.
         | 
         | This happens to me often.
        
           | andrewmackrodt wrote:
           | I had trouble at first too until I noticed the ear looking a
           | little suspicious. If you create a diagonal obstruction from
           | the top of the hat to the nose, you are left with only the
           | mother-in-law; the ear has now become an eye.
           | 
           | Once I'd seen it, the mother-in-law became prominent. I can
           | still see the wife if I consciously choose to, but the
           | mother-in-law is now the default. Strange, huh?
        
         | chaps wrote:
         | Hmmm.. I tried to visualize them both at the same time.. it
         | took some effort, but quickly "oscillating" between the two
         | ended up settling (without a jittery oscillating feeling) on
         | seeing both at the same time. Maybe my brain was playing meta
         | tricks on me though?
        
           | c22 wrote:
           | I can "see" both at the same time, but only if I am not
           | focusing on either. I think this conflict of focus is the
           | real effect people are talking about.
        
         | Baeocystin wrote:
         | Really? I have no trouble seeing both at the same time. Nothing
         | special about it, the angles of their respective faces are
         | different enough that it doesn't feel like there's any
         | interference at all.
        
           | bserge wrote:
           | But do you really see both _at the same time_, or do you
           | just switch between them really fast?
        
             | treeman79 wrote:
             | Does it matter? My vision switches eyes every 30 seconds,
             | unless I'm wearing prism glasses. I rarely notice unless
             | I'm trying to write.
        
             | Baeocystin wrote:
             | At the exact same time. No oscillating.
        
       | andyxor wrote:
       | looks similar to "Near-optimal rotation of colour space by
       | zebrafish cones in vivo"
       | 
       | https://www.biorxiv.org/content/10.1101/2020.10.26.356089v1
       | 
       | "Our findings reveal that the specific spectral tunings of the
       | four cone types near optimally rotate the encoding of natural
       | daylight in a principal component analysis (PCA)-like manner to
       | yield one primary achromatic axis, two colour-opponent axes as
       | well as a secondary UV-achromatic axis for prey capture."
        
       | fighterpilot wrote:
       | I read the abstract and don't really get it. How is this
       | different from saying that a group of neurons A is responsible
       | for memory storage and a group of neurons B is responsible for
       | sensory processing, and A != B? I think I'm misunderstanding this
       | "rotation" concept.
        
         | rkp8000 wrote:
         | It's a good question. It looks like they actually specifically
         | check for this and show that it's not two separate groups of
         | neurons. Instead, a subset of the neural population changes
         | its representation of the input as it moves from sensory to
         | memory, so it's more like a single group of neurons that
         | represents current sensory and past memory information along
         | two orthogonal directions.
        
           | fighterpilot wrote:
           | So current sensory info is a vector of numbers, and past
           | memory info is a vector of numbers, and these two vectors are
           | orthogonal.
           | 
           | What are these numbers, precisely?
        
             | resonantjacket5 wrote:
             | In a simple example that I can think of, it could just
             | be a vector of <present, past>; i.e. the current info
             | could be encoded like [<2, 0>, <4, 0>], then rotated
             | onto the "y axis" as [<0, 2>, <0, 4>], allowing you to
             | write more "present" data to the original x dimension
             | without overwriting the past data.
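             | 
             | Something like this toy sketch (my own illustration, not
             | the paper's actual numbers):
             | 
             |   import numpy as np
             | 
             |   R = np.array([[0.0, -1.0],   # 90-degree rotation
             |                 [1.0,  0.0]])
             |   present = np.array([[2.0, 0.0], [4.0, 0.0]])
             |   past = present @ R.T         # [[0, 2], [0, 4]]
             |   print(past)
             |   # the x axis is free again for new "present" input,
             |   # while the "past" copy sits on the orthogonal y axis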
             | 
             | If you're asking about the exact numbers, here's one row
             | (index 0) from the xlsx document, with the columns
             | reflowed:
             | 
             |   ABC_D_mean  6.012574653    ABC_D_se  0.5990308106
             |   ABCD_mean   6.181361381    ABCD_se   0.5737310366
             |   XYC_D_mean  6.59759636     XYC_D_se  0.6419092978
             |   XYCD_mean   6.795648346    XYCD_se   0.5716884524
             |   day 1    neuron 2    subject M496    time -50
             | 
             | According to the article these are neural activity values
             | (means with SEMs), though this is way beyond my ability
             | to interpret.
        
             | rkp8000 wrote:
             | My simplified picture of what's going on is something like
             | this (if I'm understanding the paper correctly). Stimulus A
             | starts out represented by the vector (1,1,1,1) and B by
             | (-1,-1,-1,-1). Those are the sensory representations. Later
             | A is represented by (1,1,-1,-1) and B by (-1,-1,1,1). Those
             | are the memory representations. The last two
             | components/neurons have "switched" their selectivity and
             | rotated the encoding. The directions (1,1,1,1) and
             | (1,1,-1,-1) are orthogonal, so you can store sensory info
             | (A vs B in the present) along one and memory info (A vs B
             | in the past) along the other.
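             | 
             | In numbers, a toy sketch of that picture (mine, not the
             | paper's actual data):
             | 
             |   import numpy as np
             | 
             |   sens_A = np.array([1, 1, 1, 1])   # sensory code for A
             |   mem_A = np.array([1, 1, -1, -1])  # memory code for A
             |   print(sens_A @ mem_A)             # 0: orthogonal axes
             | 
             |   # population hearing B (= -sens_A) while remembering A
             |   state = -sens_A + mem_A
             |   print(state @ sens_A, state @ mem_A) # -4 now, +4 past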
        
           | o_p wrote:
           | So memory and sensory get multiplexed?
        
       | [deleted]
        
       | behnamoh wrote:
       | Articles on Quanta magazine have clickbait titles.
        
         | chalst wrote:
         | And yet this title seems to capture the content quite
         | adequately.
        
       | ohazi wrote:
       | I don't remember where I came across this (it was probably some
       | pop neuroscience blog or maybe Radiolab), but there was some
       | theory about how memories seem subject to degradation when you
       | recall them a lot, and less so when you don't.
       | 
       | I guess that would sort of be like the opposite of DRAM - cells
       | maintain state when undisturbed, but the "refresh" operation is
       | lossy.
        
         | plg wrote:
         | it's the theory of reconsolidation
         | 
         | here are some references
         | 
         | https://pubmed.ncbi.nlm.nih.gov/?term=memory+reconsolidation...
        
         | [deleted]
        
         | ajuc wrote:
         | > I guess that would sort of be like the opposite of DRAM -
         | cells maintain state when undisturbed, but the "refresh"
         | operation is lossy.
         | 
         | Or like any analog data medium ever :)
        
         | mncharity wrote:
         | I'm under the anecdotal and subjective impression that I can do
         | a "brain dump" describing a recently-experienced physical
         | event. But it's a one-shot exercise. Close to read-once recall.
         | The archived magnetic 9-track tape that when read becomes a
         | take-up reel of backing and a pile of rust. The memories feel
         | like they're degrading as recalled, like beach sand eroding
         | under foot, and becoming "synthetic", made up. The dump is
         | extremely sparse and patchy. Like a limits-of-perception vision
         | experiment: "I have moderate confidence that I saw a flash
         | towards upper left". Not "I went through the door and down the
         | hall" but "low-confidence of a push with right shoulder,
         | medium-confidence passing a paper curled out from the wall at
         | waist height, and ... that's all I've got". But what shape
         | curl? Where in the hall? You've got whatever detail was
         | available around the moment you recalled it, because moments
         | later any extra information recalled starts tasting different:
         | speculative, fill-in-the-blanks, untrustworthy.
        
         | tshaddox wrote:
         | I would expect memories to _change_ more the more they are
         | recalled, just like I would expect a story to change the more
         | times it's told.
        
           | Phenomenit wrote:
           | Yeah, I'm thinking that's because our interpretation of
           | reality and its abstractions are faulty, and that filter is
           | applied every time we update the memory. Maybe then, when we
           | are learning a new subject through, say, reading, our filter
           | is minimal, and every time we read the same info we combat
           | our faulty interpretation of reality.
        
           | ohazi wrote:
           | Yes, maybe change is a better term than degrade. The story
           | was told in terms of the details in a memory changing a lot
           | vs. remaining accurate.
        
         | sebmellen wrote:
         | How fascinating, I've experienced this myself to a large
         | degree. I have a few songs that very vividly remind me of
         | certain periods or points of my life. When I play them, I
         | always feel like I'm scratching up the vinyl surface of the
         | memory, and I lose a little bit each time. Rather disappointing
         | :(
        
         | gus_massa wrote:
         | Perhaps the Crick and Mitchison theory about why we dream:
         | https://en.wikipedia.org/wiki/Reverse_learning
         | 
         | (AFAIK it's totally wrong, but I really like it anyway. I hope
         | there is another species in the universe that uses it.)
        
       | [deleted]
        
       | User23 wrote:
       | In mice.
        
         | Jaecen wrote:
         | The experiment was on mice, but the process has been observed
         | elsewhere.
         | 
         | From the article:
         | 
         | > _This use of orthogonal coding to separate and protect
         | information in the brain has been seen before. For instance,
         | when monkeys are preparing to move, neural activity in their
         | motor cortex represents the potential movement but does so
         | orthogonally to avoid interfering with signals driving actual
         | commands to the muscles._
        
       | de6u99er wrote:
       | This makes much more sense than having secret memory cells in
       | neurons.
        
       | darwingr wrote:
       | This really would have been harder for me to understand had I not
       | taken linear and abstract algebra courses a few years ago. That
       | area of maths reuses common words like "rotation" but with more
       | generalized definitions, which made it jarring and confusing to
       | hear and take in at the time. When someone said the word "rotate"
       | my mind, as if by reflex, was already trying to visualize a 3d or
       | 2d rotation even when that made no sense for the problem at hand.
       | Having been an English speaker my whole life, I thought I
       | understood what a rotation was or could be, but I didn't.
       | 
       | Same goes for what's being alleged here: Is there even a way to
       | visualize this that makes mathematical sense? What will be the
       | corollaries to this discovery simply as a result of what the
       | mathematics of rotations will dictate?
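       | 
       | For what it's worth, the linear-algebra sense is easy to check
       | numerically: an n-dimensional "rotation" is just an orthogonal
       | matrix, and it preserves lengths and angles whatever n is. A
       | small sketch (assuming numpy):
       | 
       |   import numpy as np
       | 
       |   rng = np.random.default_rng(4)
       |   n = 7
       |   # random orthogonal matrix = a rotation (possibly + a flip)
       |   Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
       |   a, b = rng.standard_normal((2, n))
       |   print(np.allclose(np.linalg.norm(Q @ a), np.linalg.norm(a)))
       |   print(np.allclose((Q @ a) @ (Q @ b), a @ b))  # angles kept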
        
         | dboreham wrote:
         | Same goes for the ordinary English word "Eigenvector".
        
       | lukeplato wrote:
       | There was another recent article on applications of geometry to
       | analyse the neural mechanisms that encode context. It also
       | mentioned a rotation/coiling geometry:
       | 
       | https://www.simonsfoundation.org/2021/04/07/geometrical-thin...
        
       ___________________________________________________________________
       (page generated 2021-04-17 23:00 UTC)