[HN Gopher] DeepDream: How Alexander Mordvintsev excavated the c...
       ___________________________________________________________________
        
       DeepDream: How Alexander Mordvintsev excavated the computer's
       hidden layers
        
       Author : DamnInteresting
       Score  : 46 points
       Date   : 2020-08-03 19:22 UTC (3 hours ago)
        
 (HTM) web link (thereader.mitpress.mit.edu)
 (TXT) w3m dump (thereader.mitpress.mit.edu)
        
       | colordrops wrote:
        | Ugh, I really dislike articles that seem about to tell you the
        | key idea(s) at the beginning, then veer off into a personal
        | interest story before doing the reveal. It's enough to get me
        | to quit the article.
        
         | zitterbewegung wrote:
          | In college I took a reading class on quantum computation.
         | 
          | The main thing I learned about reading anything: read the
          | abstract and the ending first, then read the middle if you
          | find it interesting.
        
         | dang wrote:
         | Ok, but please don't post unsubstantive comments to Hacker
         | News.
        
           | j88439h84 wrote:
           | It is a substantive comment about the presentation of the
           | subject matter.
        
         | Koshkin wrote:
         | To be fair, judging by the title this is indeed a personal
          | interest story. (Edit: It's a separate issue that far too
          | many popular articles disguise themselves as personal
          | stories, apparently assuming that more readers will be
          | attracted to a tabloid kind of piece than to the subject
          | itself; the problem may be that most subjects are old and no
          | longer generally interesting, whereas new personal stories
          | appear every day!)
        
       | colah3 wrote:
       | I've been incredibly lucky to work with Alex on several projects,
       | including DeepDream. He's amazing. If you think you have a new
       | idea about how to understand neural networks, there's a decent
       | chance Alex did a prototype of it five years ago.
       | 
       | Regarding DeepDream, it often feels to me -- I don't wish to
       | speak on behalf of Alex or Mike -- that we didn't really
       | understand what our results meant when we published DeepDream. It
        | was kind of like discovering that warped glass can distort and
        | magnify images: a really interesting discovery, but a lot more
        | work was needed to turn it into a scientific instrument, just
        | as glass had to be shaped into lenses before it could form a
        | microscope. As the community developed single-neuron and
        | direction feature visualizations that worked well, lots of
        | research possibilities began to open up. And in
       | retrospect, one of the most important tricks was jitter, which
       | Alex introduced. This style of feature visualization is probably
       | the single tool I rely on most in my research to this day.
       | 
       | (If you're curious what this has led to as we've continued to
       | pursue it, check out Circuits
       | (https://distill.pub/2020/circuits/zoom-in/), Building Blocks
       | (https://distill.pub/2018/building-blocks/) and Activation
       | Atlases (https://distill.pub/2019/activation-atlas/).)
       | 
        | I'd also encourage people to check out Alex's new line of
       | research, Neural Cellular Automata
       | (https://distill.pub/2020/growing-ca/). I think it's a really
       | interesting line of exploration. And as usual, Alex has an
        | incredibly deep trove of small, fascinating results relating to
       | NCA if you talk to him about it.
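
The jitter trick described above (randomly shifting the image a few pixels before each gradient-ascent step, then shifting the resulting gradient back) can be sketched in isolation. This is a minimal illustration, not the actual DeepDream code: the toy "neuron" here is just a fixed linear filter, and the function name and parameters are invented for the example.

```python
import numpy as np

def visualize_with_jitter(weights, steps=200, lr=0.1, max_jitter=4, seed=0):
    """Toy feature visualization by gradient ascent with jitter.

    `weights` stands in for a single neuron's input weights; the
    objective being maximized is the linear activation
    sum(weights * image). A real network would replace this with a
    forward/backward pass through its layers.
    """
    rng = np.random.default_rng(seed)
    img = rng.normal(0.0, 0.01, size=weights.shape)
    for _ in range(steps):
        # Jitter: in a real network you would shift the image, run the
        # forward/backward pass, then shift the gradient back. With a
        # linear objective the gradient is `weights` regardless of the
        # image, so only the un-shift of the gradient is visible here.
        dy, dx = rng.integers(-max_jitter, max_jitter + 1, size=2)
        grad = np.roll(weights, (-dy, -dx), axis=(0, 1))
        img += lr * grad
        # Keep pixel values in a bounded range, as real pipelines do.
        img = np.clip(img, -1.0, 1.0)
    return img
```

Averaging updates over many random shifts is what suppresses the high-frequency, checkerboard-like noise that plain gradient ascent on a network activation tends to produce.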
        
       | 2bitencryption wrote:
       | > The crucial point is that the machine does not see a cat or
       | dog, as we do, but a set of numbers.
       | 
       | This seems to miss the point - to follow that pattern, "Humans do
       | not see a cat or a dog, they receive a set of neural impulses".
       | 
       | If a human "knows" those impulses represent a cat, you could also
       | surely say an artificial neural net "knows" those numbers
       | represent a cat - and if you ask "how" a human/NN knows this, I
       | guess the answer is the same -- different levels of visual
        | abstraction (numbers/impulses trigger neurons that recognize
        | edges and shapes, which become eyes, then faces, then bodies,
        | then animals...) trigger different levels of the network that
        | are familiar with those abstractions and turn them into the end
        | result: "That is a cat."
        
       ___________________________________________________________________
       (page generated 2020-08-03 23:00 UTC)