[HN Gopher] DeepMind AI learns simple physics like a baby
       ___________________________________________________________________
        
       DeepMind AI learns simple physics like a baby
        
       Author : mdp2021
       Score  : 58 points
       Date   : 2022-07-11 20:14 UTC (2 hours ago)
        
 (HTM) web link (www.nature.com)
 (TXT) w3m dump (www.nature.com)
        
       | mdp2021 wrote:
        | Another popular-science article:
       | 
       | -- DeepMind AI learns physics by watching videos that don't make
       | sense - An algorithm created by AI firm DeepMind can distinguish
       | between videos in which objects obey the laws of physics and ones
       | where they don't -
       | https://www.newscientist.com/article/2327766-deepmind-ai-lea...
       | 
       | > _Luis Piloto at DeepMind and his colleagues have created an AI
       | called Physics Learning through Auto-encoding and Tracking
       | Objects (PLATO) that is designed to understand that the physical
       | world is composed of objects that follow basic physical laws. //
       | The researchers trained PLATO to identify objects and their
       | interactions by using simulated videos of objects moving as we
       | would expect [...] They also gave PLATO data showing exactly
       | which pixels in every frame belonged to each object. To test
       | PLATO's ability to understand five physical concepts such as
       | persistence..., solidity and unchangingness..., the researchers
       | used another series of simulated videos. Some showed objects
       | obeying the laws of physics, while others depicted nonsensical
        | actions_ [on the latter, the AI's predictions were violated,
        | i.e. it registered surprise - the correct behaviour, showing an
        | acquired intuition of physics]
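        | 
        | Roughly, the "surprise" test works like this (a minimal sketch,
        | not DeepMind's code; the trivial "last frame persists" predictor
        | and the random arrays below only stand in for PLATO and the
        | rendered videos):
        | 
        |     import numpy as np
        | 
        |     def surprise(predict, video):
        |         # Mean per-frame prediction error: how "surprised"
        |         # the model is by what actually happens next.
        |         errors = [np.mean((predict(video[:t]) - video[t]) ** 2)
        |                   for t in range(1, len(video))]
        |         return float(np.mean(errors))
        | 
        |     # Stand-in predictor: assume the scene never changes
        |     # (objects persist exactly as in the last observed frame).
        |     predict = lambda past: past[-1]
        | 
        |     rng = np.random.default_rng(0)
        |     possible = rng.random((10, 20, 64, 64))    # plausible clips
        |     impossible = rng.random((10, 20, 64, 64))  # "impossible" ones
        | 
        |     s_pos = np.mean([surprise(predict, v) for v in possible])
        |     s_imp = np.mean([surprise(predict, v) for v in impossible])
        |     # A model with intuitive physics should show s_imp > s_pos.
        |     print(s_pos, s_imp)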
       | 
        | From the submitted article:
       | 
        | > _[Jeff Clune, University of British Columbia, Vancouver:]
        | <<[Comparing AI with how human infants learn is] an important
        | research direction.
       | That said, the paper does hand-design much of the prior knowledge
       | that gives these AI models their advantage>>. // Clune and other
       | researchers are working on approaches in which the program
       | develops its own algorithms for understanding the physical world_
        
       | Jeff_Brown wrote:
       | Data and analysis alone, without experimentation, don't seem like
       | enough to achieve real intelligence. From its title this article
       | sounded like it would be about progress in learning by doing.
       | Alas, it's not.
        
         | mdp2021 wrote:
         | > _experimentation_
         | 
         | Right. "You have to tell yourself stories", as the late Prof.
         | Patrick Winston said (you are intelligent because you can
          | predict the unexperienced). Intelligence requires concept
          | development and critical thinking - an active process.
        
       | thomasjudge wrote:
       | I feel like there is an inferential leap implied, to greatly
       | simplify, from "A does X and B does X" to "A and B must operate
       | relevantly similarly." For example, walking and flying are both
       | modes of transportation, but you can't really learn anything
        | interesting about one from studying the other.
        
         | sebzim4500 wrote:
         | >For example, walking and flying are both modes of
         | transportation, but you can't really learn anything interesting
         | about one from studying the other
         | 
          | You can figure out Newtonian mechanics entirely on the ground,
          | and that clearly helps you understand flight. By analogy, a
          | better understanding of what limits exist in ANNs could
          | plausibly help us understand how the brain works (and vice
          | versa).
        
         | ChikkaChiChi wrote:
         | Once you understand that walking gets you from where you are to
         | where you want to be, you start to define the characteristics
         | of motion. Then, seeing other forms of conveyance that are
          | faster underpins the concept of efficiency.
         | 
          | Walking and flying may not have a lot in common to you and me,
         | but to a thing learning to crawl, there is a lot to be
         | understood.
        
       | jonbaer wrote:
        | Looking at the photo, I don't think the AI is going to realize
        | that if it eats the piece it is about to pick up, it will
        | choke. I would actually like to see more reinforcement learning
        | agents like that: the action space of infant movement is quite
        | small, so to some extent it is really about "action space
        | discovery". The things such an agent discovers are far more
        | interesting - e.g. if food is not at floor level and the agent
        | has to stand to get it, it will eventually get there after N
        | attempts (over time); and if you then introduce another agent,
        | where blocking the other agent awards more food, it discovers
        | 50/50 splits and equilibria (better to eat now than wait).
        | PLATO seems like a step in that direction.
        
       | danielmorozoff wrote:
       | Similar work has been pursued for a number of years now in a
        | DARPA program called Machine Common Sense:
       | https://www.darpa.mil/news-events/2018-10-11
       | 
       | I recall Tenenbaum's lab had a similar paper a few years back.
        
         | mdp2021 wrote:
          | Also https://www.machinecommonsense.com/ , which shows
          | animations that reveal the close similarity to the DeepMind
          | project.
        
       | mikolajw wrote:
       | Clickbait title.
       | 
        | I wish ML researchers would stop using anthropomorphizing
        | language. This has decades of solid tradition, but that's no
        | excuse. Any
       | comparison of a machine to a human misleads the public. Machines
       | aren't like babies, artificial neural networks aren't like actual
       | neural networks or brains. Machines shouldn't be given human
       | names (PLATO is a borderline case).
       | 
       | I know this is like talking to a wall -- money requires hype --
       | but still, please stop doing that.
        
       | mdp2021 wrote:
        | A further article and commentary just appeared on The
        | Conversation, from a Professor of Psychology and Infant Studies:
       | 
       | https://theconversation.com/researchers-trained-an-ai-model-...
       | 
       | > _Typically, AI models start with a blank slate and are trained
       | on data with many different examples, from which the model
       | constructs knowledge. But research on infants suggests this is
       | not what babies do. Instead of building knowledge from scratch,
       | infants start with some principled expectations about objects
       | [...] The exciting finding by Piloto and colleagues is that a
        | deep-learning AI system modelled on what babies do outperforms a
       | system that begins with a blank slate and tries to learn based on
       | experience alone_
        
       ___________________________________________________________________
       (page generated 2022-07-11 23:00 UTC)