[HN Gopher] The AI delusion: why humans trump machines
       ___________________________________________________________________
        
       The AI delusion: why humans trump machines
        
       Author : sebg
       Score  : 36 points
       Date   : 2020-02-06 18:36 UTC (1 day ago)
        
 (HTM) web link (www.prospectmagazine.co.uk)
 (TXT) w3m dump (www.prospectmagazine.co.uk)
        
       | Rury wrote:
       | >It will be "nothing but clever programming... fake consciousness
       | --pretending by imitating people at the biophysical level." For
       | that, he thinks, is all AI can be. These systems might beat us at
       | chess and Go, and deceive us into thinking they are alive. But
       | those will always be hollow victories, for the machine will never
       | enjoy them.
       | 
       | Mostly agree with Koch, but I'd take it a step further...
       | 
       | There are major problems behind the concepts of AI and even
       | intelligence itself - and it's difficult to articulate why. It's
       | as if these terms require aggrandizing to the point of
       | impossibility, or they lose all their apparent meaning. That is
       | why I feel we'll never achieve what we call (Strong/General) AI -
       | or, if we do, we will always find ways to be unimpressed by it...
       | 
       | I mean, is it that absurd to consider that the idealized concept
       | behind intelligence isn't a reality - even in humans? If you
       | pull back enough layers on how or why humans think or do the
       | things they do - we arrive at things we can't explain. We don't
       | know what causes intelligence and have trouble coming up with an
       | adequate definition for it; similar to the concept of life. For
       | all we know we might be just highly complex biomechanical
       | machines operating on stimuli, analogous to what current
       | computers already do. Where's the fine line between making
       | something conscious and making it unconscious?
        
         | trevyn wrote:
         | > _If you pull back enough layers on how or why humans think or
         | do the things they do - we arrive at things we can't explain._
         | 
         | No, we arrive at things that are _uncomfortable_ to explain.
         | 
         | I think one of the biggest impacts of AI is that it will force
         | us to confront this.
        
         | missosoup wrote:
         | https://en.wikipedia.org/wiki/Philosophical_zombie
         | 
         | There's no discernible difference between p-zombies and 'real'
         | conscious beings. There's a good chance that we're all
         | p-zombies and that the distinction between zombie and real
         | doesn't exist.
         | 
         | > If you pull back enough layers on how or why humans think or
         | do the things they do - we arrive at things we can't explain.
         | 
         | Sounds a lot like a magical argument. There's no evidence that
         | anything about the way human minds work is fundamentally
         | unexplainable.
        
         | blueadept111 wrote:
         | The fine line is whether a machine can actually experience
         | conscious perception - actually feeling pain, for example. Of
         | course, there's no way to know...
        
           | Rury wrote:
           | Pain is a signal to your brain, which causes you to react to
           | a stimulus.
           | 
           | Computers react to electrical inputs, and on some level can
           | be considered to be reacting to stimuli.
           | 
           | Is a computer therefore conscious?
        
       | philipkglass wrote:
       | _In Koch's picture, then, the Turing Test is irrelevant to
       | diagnosing inner life. What's more, it implies that the
       | transhumanist dream of downloading one's mind into an (immortal)
       | computer circuit is a fantasy. At best, such circuits would
       | simulate the inputs and outputs of a brain while having
       | absolutely no experience at all. It will be "nothing but clever
       | programming... fake consciousness--pretending by imitating people
       | at the biophysical level." For that, he thinks, is all AI can be.
       | These systems might beat us at chess and Go, and deceive us into
       | thinking they are alive. But those will always be hollow
       | victories, for the machine will never enjoy them._
       | 
       | This is really damning humans with faint praise. "Machines may
       | eventually do every job better than we can, and be immortal, but
       | I promise that humans will remain superior in some completely
       | undetectable way."
        
         | rosybox wrote:
         | How would Koch see a mechanism where neurons or other cells in
         | a human brain are sequentially replaced over time with
         | synthetic components that simulate their function? Is the
         | consciousness lost along the way? Is there consciousness as
         | long as there is at least one biological cell left?
        
           | mgolawala wrote:
           | I wonder: if an intelligence higher than ours were to figure
           | out exactly how our brain worked - every neuron, every
           | synapse, every hormone and neurotransmitter, how memories
           | were made, stored and retrieved - would we appear to them to
           | merely be "simulating" consciousness through the use of these
           | mechanisms?
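
           A rough way to make the replacement scenario above concrete is
           with a toy sketch - purely illustrative, with invented class
           names and a trivial firing rule, not anything from the article
           or from Koch - in which the biological and synthetic units
           expose identical input/output behaviour, so nothing observable
           changes at any step of the swap:

               import random

               class BiologicalNeuron:
                   def __init__(self, weight):
                       self.weight = weight

                   def fire(self, signal):
                       return 1 if signal * self.weight > 0.5 else 0

               class SyntheticNeuron:
                   # Simulates a BiologicalNeuron's input/output
                   # behaviour exactly.
                   def __init__(self, weight):
                       self.weight = weight

                   def fire(self, signal):
                       return 1 if signal * self.weight > 0.5 else 0

               def network_output(neurons, signal):
                   return [n.fire(signal) for n in neurons]

               random.seed(0)
               brain = [BiologicalNeuron(random.random()) for _ in range(10)]
               baseline = network_output(brain, signal=1.0)

               # Swap one unit at a time; observable behaviour never changes.
               for i in range(len(brain)):
                   brain[i] = SyntheticNeuron(brain[i].weight)
                   assert network_output(brain, signal=1.0) == baseline

               print("fully synthetic, outputs unchanged at every step")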
        
         | fdsfdsgad wrote:
         | I am a "panpsychist," or whatever, and I don't really see the
         | silver lining in any of this, either. I really don't care if
         | DeepMind or Stockfish are enjoying how much they can drag me
         | through the mud, because I already don't. It's a huge non
         | sequitur to respond to such things with "it doesn't matter,
         | it's just fake thought." And it is one of the most embarrassing
         | things to see people going down that lane, alongside that other
         | road of "but can machines make _art_?"
        
         | stared wrote:
         | In this case, Koch really needs to read Daniel Dennett's
         | "Consciousness Explained". Even though it does not explains
         | consciousness, it takes it seriously to dispel myths and
         | magical thinking about consciousness.
        
         | lachlan-sneff wrote:
         | I've never understood why people feel that way. As far as I can
         | tell, there is no evidence that consciousness cannot be
         | emulated by a machine.
        
           | axguscbklp wrote:
           | A machine could emulate consciousness in the sense that we
           | could probably in principle build a machine that acted,
           | viewed from the outside, as if it were conscious. But we have
           | no way to measure whether something is conscious or not, so we
           | would never really know if the machine actually was
           | conscious, had interiority, had qualia, etc. (three different
           | ways of saying the same thing).
           | 
           | There is no good reason to believe that consciousness is
           | reducible to physical phenomena. I think that intelligence is
           | almost certainly reducible to physical phenomena, but
           | consciousness? No. Consciousness is a mystery that quite
           | likely will forever be beyond the reach of physical
           | investigation.
        
             | MikeSchurman wrote:
             | By the same argument I'll never know if other humans are
             | conscious.
        
           | fdsfdsgad wrote:
           | Emulation is fine; it's whether machines are able to achieve
           | "consciousness" that is at stake. I don't know if anyone
           | feels strongly on the matter either, at least on the academic
           | level. But it's the same as with other odd philosophical
           | positions: people run into problems with the alternatives, go
           | like "why not..." and suddenly they have an odd belief.
        
             | lachlan-sneff wrote:
             | Personally, I don't see the difference.
        
           | krtong wrote:
           | There is no evidence that a tree can't be used as a RAM stick
           | either.
        
           | ben_w wrote:
           | "Consciousness" is too poorly defined to have a proper
           | discussion about what is required to have it. We can only be
           | somewhat sure about certain things altering or removing
           | consciousness, but even then I'm sure you can have a very
           | long argument about whether or not a dream is a state of
           | consciousness. Or if consciousness is a continuous variable,
           | where perhaps a newborn has less than an adult, or a dog has
           | less than a human.
        
           | closetohome wrote:
           | Because if you don't subscribe to a branch of philosophy
           | that's ok with being just a Newtonian thinking machine, it's
           | kind of a scary concept.
        
             | fdsfdsgad wrote:
             | Not really.
        
         | [deleted]
        
         | htk wrote:
         | _Koch believes "that consciousness is a fundamental, elementary
         | property of living matter."_
         | 
         | Consciousness is "magic" then.
         | 
         |  _Even if we build machines to mimic a real brain, "it'll be
         | like a golem stumbling about," he writes: adept at the Turing
         | Test perhaps, but a zombie._
         | 
         | This guy created a whole moat of unfalsifiability around his
         | views.
        
           | vanusa wrote:
           | _Consciousness is "magic" then._
           | 
           | That doesn't follow at all.
        
             | clSTophEjUdRanu wrote:
             | They're in the camp that since you can't measure subjective
             | experience, it's magic.
             | 
             | Well, they're here reading this; since they can't measure
             | their experience, they must not exist. /s
        
           | ben_w wrote:
           | Your second quote sounds like a fairly straightforward
           | description of P-zombies to me:
           | https://en.m.wikipedia.org/wiki/Philosophical_zombie
           | 
           | However, to add to your criticism of this article about the
           | book, your two quotes taken together appear to be
           | contradictory: if consciousness is a fundamental property of
           | living matter, then, given that we can make new living
           | matter, why should we be unable to make a conscious
           | artificial machine?
        
             | edflsafoiewq wrote:
             | P-zombies are magical nonsense.
        
       | nohat wrote:
       | Koch (or perhaps just the reporter quoting him) contradicts
       | himself. Even under his own definition of consciousness, a
       | machine architecture merely needs a feedback loop to be
       | conscious, something hardly unheard of in computer programs. Now
       | arguably that definition isn't terrible, because human
       | consciousness does seem like a supervisor -- something that
       | synthesizes all the subprocess work and makes sure it has a
       | coherent story.
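
       As a minimal sketch of what such a supervisory feedback loop looks
       like in ordinary code - illustrative only, with invented module
       names, and not anything described in the article - a top-level
       loop fans work out to subprocesses, synthesizes their reports into
       one story, and feeds that summary back in on the next pass:

           from dataclasses import dataclass

           @dataclass
           class Module:
               # One "subprocess": does its own work, conditioned on the
               # supervisor's current summary (that conditioning is the
               # feedback path).
               name: str

               def step(self, story: str) -> str:
                   return f"{self.name} processed input given: {story!r}"

           def supervisor(modules, iterations=3):
               story = "start"
               for _ in range(iterations):
                   # Fan out to the subprocesses, then synthesize their
                   # reports into a single coherent story that is fed
                   # back on the next iteration.
                   reports = [m.step(story) for m in modules]
                   story = "; ".join(reports)
               return story

           if __name__ == "__main__":
               mods = [Module("vision"), Module("hearing"), Module("memory")]
               print(supervisor(mods))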
        
       | [deleted]
        
       | ctoth wrote:
       | Recognition of the powerful pattern matching ability of humans is
       | growing. As a result, humans are increasingly being deployed to
       | make decisions that affect the well-being of other humans. We are
       | starting to see the use of human decision makers in courts, in
       | university admissions offices, in loan application departments,
       | and in recruitment. Soon humans will be the primary gateway to
       | many core services. The use of humans undoubtedly comes with
       | benefits relative to the data-derived algorithms that we have
       | used in the past. The human ability to spot anomalies that are
       | missed by our rigid algorithms is unparalleled. A human decision
       | maker also allows us to hold someone directly accountable for the
       | decisions. However, the replacement of algorithms with a powerful
       | technology in the form of the human brain is not without risks.
       | Before humans become the standard way in which we make decisions,
       | we need to consider the risks and ensure implementation of human
       | decision-making systems does not cause widespread harm. To this
       | end, we need to develop principles for the application of human
       | intelligence to decision making.
       | 
       | https://behavioralscientist.org/principles-for-the-applicati...
        
       ___________________________________________________________________
       (page generated 2020-02-07 23:00 UTC)