[HN Gopher] To build truly intelligent machines, teach them caus...
       ___________________________________________________________________
        
       To build truly intelligent machines, teach them cause and effect
        
       Author : sonabinu
       Score  : 37 points
       Date   : 2023-02-24 14:32 UTC (2 days ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | dwheeler wrote:
        | This article _really_ needs a "(2018)" marker.
        | 
        | This article predates GPT-3 and GPT-2; it even predates the essay
        | "The Bitter Lesson"
        | <http://www.incompleteideas.net/IncIdeas/BitterLesson.html>.
       | 
       | It might be true long-term, but it's certainly not written with
       | the current advances in mind.
        
         | daveguy wrote:
          | One human is the equivalent of several of the most powerful
          | computers in raw computation; its I/O is about three orders of
          | magnitude less. In roughly 60 years I'll start worrying that we
          | may have the computing power for an artificial human - but only
          | if we also understand thought.
        
         | gibsonf1 wrote:
         | There aren't really any current advances outside of sheer scale
         | of input in the models, and all the engineering and hardware
         | around achieving that scale. And I think the point is no matter
         | how much input data you give the ml/dl system, it will still
         | have no awareness, no understanding of any kind and certainly
         | no causal awareness.
        
       | LordDragonfang wrote:
       | >Mathematics has not developed the asymmetric language required
       | to capture our understanding that if X causes Y that does not
       | mean that Y causes X.
       | 
        | X → Y
        | 
        | This seems like sophistry to bring up the fact that algebra is
        | symmetric and totally ignore the existence of the above.
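
        As a rough illustration of the asymmetry that arrow is meant to
        capture (the rain/wet-ground variables and the 0.3 probability
        below are invented, not from the article or the comment), here is
        a minimal structural-causal-model sketch in Python: intervening on
        the cause changes the effect, while intervening on the effect
        leaves the cause untouched.

          import random

          def sample(do_rain=None, do_wet=None, n=10_000):
              # Structural model: rain causes wet ground (rain -> wet_ground).
              rainy = wet = 0
              for _ in range(n):
                  rain = (random.random() < 0.3) if do_rain is None else do_rain
                  wet_ground = rain if do_wet is None else do_wet
                  rainy += rain
                  wet += wet_ground
              return rainy / n, wet / n

          # Intervening on the cause changes the effect...
          print(sample(do_rain=True))  # -> (1.0, 1.0): forced rain wets the ground
          # ...but intervening on the effect does not change the cause.
          print(sample(do_wet=True))   # -> (~0.3, 1.0): P(rain) stays at 0.3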
        
       | is_true wrote:
       | Most politicians lack this too
        
       | Analemma_ wrote:
       | This article feels like it came from some alternate universe
       | where the history of AI is exactly the opposite of where it is in
       | ours, and specifically where "The Bitter Lesson" [0] is not true.
       | In our world, AI _was_ stuck in a rut for decades because people
        | kept trying to do exactly what this article suggests: incorporate
        | models of how people _think_ consciousness works. And then it
        | broke out of that rut because everyone went "fuck it" and just
        | threw huge amounts of data at the problem and told the machines
        | to pick
       | the likeliest next token based on their training data.
       | 
       | All in all this reads like someone who is deeply stuck in their
       | philosophy department and hasn't seen anything that has happened
       | in AI in the last fifteen years. The symbolic AI camp lost as
       | badly as the Axis powers and this guy is like one of those
       | Japanese holdouts who didn't get the memo.
       | 
       | [0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
        
         | sankha93 wrote:
         | The idea that symbolic AI lost is uninformed. Symbolic AI
         | essentially boils down to different kinds of modeling and
         | constraint solving systems, which are very much in use today:
         | linear programming, SMT solvers, datalog, etc.
         | 
          | Here is where symbolic AI lost: anything where you do not have
          | a formal criterion of correctness (or goal) cannot be handled
          | well by symbolic AI. For example, perception problems like
          | vision, audio, robot locomotion, or natural language. It is
          | very hard to encode such problems in a formal language, which
          | in turn means symbolic AI is bad at these kinds of problems.
          | In contrast, deep learning has won because it is good at
          | exactly this set of things. Throw a symbolic problem at a deep
          | neural network and it fails in unexpected ways (yes, I have
          | read about neural networks that solve SAT problems, and no, a
          | percentage accuracy is not good enough in domains where
          | correctness is paramount).
         | 
         | The saying goes, anything that becomes common enough is not
         | considered AI anymore. Symbolic AI went through that phase and
         | we use symbolic AI systems today without realizing we are using
         | old school AI. Deep learning is the current hype because it
         | solves a class of problems that we couldn't solve before (not
         | all problems). Once deep learning is common, we will stop
          | considering it AI and move on to the next set of problems
         | that require novel insights.
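
        To make "a formal criterion of correctness" concrete (the tiny CNF
        formula below is made up purely for illustration), here is a
        minimal brute-force sketch in Python: a candidate SAT assignment
        can be checked exactly, so an answer is either right or wrong -
        there is no useful notion of percentage accuracy.

          from itertools import product

          # Made-up CNF over x0, x1, x2: (x0 OR NOT x1) AND (x1 OR x2).
          # Each literal is (variable index, required truth value).
          clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]

          def satisfies(assignment, clauses):
              # Exact correctness criterion: every clause has a true literal.
              return all(any(assignment[v] == val for v, val in clause)
                         for clause in clauses)

          def brute_force_sat(clauses, n_vars):
              # Exhaustive search; any assignment returned is verifiably correct.
              for bits in product([False, True], repeat=n_vars):
                  if satisfies(bits, clauses):
                      return bits
              return None

          print(brute_force_sat(clauses, 3))  # e.g. (False, False, True)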
        
         | cubefox wrote:
         | It's from 2018. Time was not kind to Pearl's picture of AI.
        
       | mrwnmonm wrote:
       | God, I hate these titles. The same science news business site
       | published this before https://www.quantamagazine.org/videos/qa-
       | melanie-mitchell-vi...
       | 
        | I have no problem if they say x thinks y. But putting it as if
        | it were a fact - like "To Build Truly Intelligent Machines, Teach
        | Them Cause and Effect" and "The Missing Link in Artificial
        | Intelligence" - just to get more hits is disgusting.
        
         | qbit42 wrote:
          | While Quanta often has clickbaity headlines, it is really the
         | only decent website for pop math and theoretical computer
         | science.
        
       | gibsonf1 wrote:
       | Fully agree with this article. Our definition for intelligence:
       | "Intelligence is conceptual awareness capable of real-time causal
       | understanding and prediction about space-time."[1]
       | 
       | [1] https://graphmetrix.com/trinpod
        
         | canjobear wrote:
         | What is understanding?
        
           | gibsonf1 wrote:
           | The ability to model an object in awareness and its causality
            | that corresponds to its space-time reality.
        
             | canjobear wrote:
             | What does it mean to model an object in awareness? Does
             | Dall-E model an object in awareness when it is generating
             | an image containing an object? How can you tell if it is or
             | isn't?
        
               | gibsonf1 wrote:
               | All ml/dl systems have no awareness - they just output
               | based on input training - like a calculator outputs an
               | answer. So what it means to model in awareness is what
               | you are doing right now in reading this sentence. You
               | take these words as input, model conceptually what they
               | mean mentally, connect that model to your experience of
                | space-time, and then decide what to do next with that
               | understanding.
        
               | airstrike wrote:
               | To define() a Virtual Expectation of how a phenomenon
               | ought to behave and then watch it play out in reality,
               | confirming expectations most of the time but noticing
                | when it deviates (meaningfully) from the expected output,
                | and refining that Virtual Expectation definition to
                | include additional rules / special cases so that future
                | reality-checks play out as expected.
               | 
               | Dall-E doesn't observe the real world and compare it to
               | its "objects in awareness", so at best it only checks one
               | out of two boxes in GP's definition
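
        A rough sketch, in Python, of the expect / observe / refine loop
        described above; the linear "phenomenon", the 0.5 deviation
        threshold, and the observation data are all invented for
        illustration.

          # Current "Virtual Expectation": the phenomenon's output grows linearly.
          slope = 1.0

          def expectation(x, slope):
              return slope * x

          # Invented observations of "reality"; the underlying behaviour drifts.
          observations = [(1, 1.1), (2, 2.0), (3, 6.2), (4, 8.1)]

          for x, actual in observations:
              predicted = expectation(x, slope)
              if abs(actual - predicted) > 0.5:   # a "meaningful" deviation
                  slope = actual / x              # refine the expectation's rule
                  # future reality-checks now use the updated expectation

          print(slope)   # ~2.07: the refined expectation matches the drift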
        
             | mrwnmonm wrote:
             | Circular definitions, circular definitions, circular
             | definitions everywhere.
        
         | mrwnmonm wrote:
         | "Intelligence is whatever supports this product."
        
         | nradov wrote:
         | Intelligence is the ability to accomplish goals by making
         | optimal use of limited resources.
        
           | zwkrt wrote:
           | By which metric a tree is very intelligent and a man with a
           | private yacht is not.
        
             | YeezyMode wrote:
             | This is a possibility that shouldn't be dismissed. Trees
             | use mycorrhizal networks to communicate and have been
             | around for a very long time on this planet. They model the
             | environment and use either a set of micro-decisions or a
             | set of larger, slower moves that are made across longer
             | timescales than humanity is used to. You can argue whether
             | they possess sentience or not, but when discussing models,
             | decisions, and consequences - trees seem to act with plenty
             | of coordination and understanding and self-interest.
        
       | darosati wrote:
       | I don't understand why very large neural networks can't model
        | causality in principle.
       | 
       | I also don't understand the argument that even if NNs can model
        | causality in principle, they are unlikely to do so in practice
       | (things I've heard: spurious correlations are easier to learn,
       | the learning space is too large to expect causality to be learned
       | from data, etc).
       | 
        | I also don't understand why people aren't convinced that LLMs
        | can demonstrate causal understanding in settings where they have
        | been used for things like control, e.g. decision transformers...
        | like what else is expected here?
       | 
       | Please enlighten me
        
         | blackbear_ wrote:
         | I think one of the major difficulties is dealing with
         | unobserved confounders. The world is complex and it is unlikely
          | that all relevant variables are observed and available.
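
        A toy simulation of that difficulty (the variables and
        probabilities are invented, not from the comment): a hidden
        confounder Z drives both X and Y, so observational data shows a
        strong X-Y association even though intervening on X has no effect
        on Y - the distinction a purely correlational learner cannot
        recover from observations alone.

          import random

          def world(do_x=None):
              z = random.random() < 0.5          # unobserved confounder
              x = z if do_x is None else do_x    # X merely follows Z unless forced
              y = z                              # Y is caused by Z alone, not by X
              return x, y

          # Observational data: X and Y look perfectly associated.
          obs = [world() for _ in range(10_000)]
          x_count = sum(x for x, _ in obs)
          p_y_given_x = sum(y for x, y in obs if x) / max(1, x_count)

          # Interventional data: forcing X on reveals it does nothing to Y.
          intv = [world(do_x=True) for _ in range(10_000)]
          p_y_do_x = sum(y for _, y in intv) / len(intv)

          print(p_y_given_x)   # ~1.0: correlation suggests X "predicts" Y
          print(p_y_do_x)      # ~0.5: intervening on X leaves Y at its base rate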
        
       ___________________________________________________________________
       (page generated 2023-02-26 23:00 UTC)