[HN Gopher] A connectomic study of a petascale fragment of human...
       ___________________________________________________________________
        
       A connectomic study of a petascale fragment of human cerebral
       cortex
        
       Author : lawrenceyan
       Score  : 54 points
       Date   : 2022-01-16 19:12 UTC (3 hours ago)
        
 (HTM) web link (vcg.seas.harvard.edu)
 (TXT) w3m dump (vcg.seas.harvard.edu)
        
       | epgui wrote:
       | This is incredible work!
        
       | matthewfcarlson wrote:
        | There was a story I read once where general AI was unobtainable
        | but computing power kept growing, so brain simulations were
        | done by processing brain slices. So every AI was just a person
        | who had died with an intact brain that could be preserved.
        | Self-driving cars were a thing, but each one was someone's
        | grandma.
        
         | wallacoloo wrote:
         | if you can recall the author/title i'd love to look at it! i'm
         | curious how the story deals with the disjointed sensory input
         | in the "revived" agent. e.g. does Grandma experience
         | sight/sound/touch/feel? if not, how does she cope with sudden
         | full-body paralysis/numbness/loss-of-familiar-senses? did she
         | agree to this, or know it was going to happen? and so on.
         | 
          | i remember one subplot of a Cory Doctorow book focused on a
          | lab that was trying to develop a self-aware machine; the
          | barrier there was that the machine would commit suicide as
          | soon as it understood the broader context of its being. sort
          | of makes me wonder whether, in order to achieve the sorts of
          | AI you're talking about, we'd need not just to map the brain
          | but also to understand (or bruteforce) enough of it to avoid
          | agent crises. the barrier to that _could_ be larger than just
          | developing a wholly new neural network (idk).
        
         | klysm wrote:
         | How is that different from general AI though?
        
           | robbedpeter wrote:
            | General AI would be a unique mind. Simulation doesn't
            | require engineers to understand the process, just that the
            | process works. I can copy a mechanism without
            | scientifically understanding what that mechanism is doing.
            | I can follow a recipe or an instructable, or copy grandma's
            | brain, and have little to no understanding of what's really
            | happening.
            | 
            | Then again, if you can copy a genius researcher and put a
            | million copies of that mind to work on solving AGI
            | methodically, you don't need precision and understanding to
            | start with. You just hope the million-mind genius
            | collective doesn't lie or mislead.
        
            | quocanh wrote:
            | It would be general intelligence, but it's probably not
            | artificial general intelligence, insofar as the
            | intelligence aspect wasn't designed and created. It also
            | wouldn't be able to get smarter at singularity rates, since
            | its intelligence comes from using humans as raw material.
        
       | blamazon wrote:
       | Amusing how they (minimally) protected the identity of the
       | individual standing on a roller chair in the main illustration.
        
       | lawrenceyan wrote:
       | The dataset the paper uses: 1.4 petabyte browsable reconstruction
       | of the human cortex -
       | https://h01-release.storage.googleapis.com/landing.html
        
       | KhoomeiK wrote:
       | It's simply a matter of scaling this slice-scan-render technique
       | up to the entirety of a human brain to simulate human thinking,
       | correct? What're the biggest technological hurdles left?
        
          | danielmorozoff wrote:
          | Pretty much everything. A connectome != a functional
          | understanding of the brain. We have had the C. elegans
          | (worm) connectome for years, and more recently fly
          | connectomes. We are still struggling to understand basic
          | logic encodings in those animal models. IMHO we lack a
          | foundational understanding of the logic-encoding mechanisms
          | in the brain. Many neuroscientists and computer scientists
          | are working on this problem, but to my knowledge we are
          | still not there.
        
       | JulianMorrison wrote:
       | Perhaps someone has been reading Anders Sandberg[1]?
       | 
       | 1. https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf
        
       | josemanuel wrote:
        | Is this the first time this has been done? Is there any novel
        | technology that enabled it? Why are these studies rare?
        
       ___________________________________________________________________
       (page generated 2022-01-16 23:00 UTC)