[HN Gopher] Superintelligence cannot be contained: Lessons from ...
       ___________________________________________________________________
        
       Superintelligence cannot be contained: Lessons from Computability
       Theory
        
       Author : giuliomagnifico
       Score  : 54 points
       Date   : 2021-01-11 19:38 UTC (2 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | breck wrote:
       | This is fantastic. Are these original illustrations? So well
       | done.
        
         | twic wrote:
         | The illustrations are by Iyad Rahwan, the last author on the
         | paper:
         | 
         | http://www.mit.edu/~irahwan/cartoons.html
        
       | Veedrac wrote:
       | Obviously it's impossible to prove definitive statements about
        | every possible _potential_ action, since, per the halting problem,
       | some of those actions are unprovable.
       | 
       | It is as ridiculous to suggest that this means you can't contain
       | a superintelligence as it is to suggest it means you can't, I
       | don't know, go buy bread. In both cases you _could_ analyze
        | running a program that doesn't halt but you can't prove doesn't
       | halt, and lock up your reasoning algorithm. The sensible thing is
       | to not do that.
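        | 
        | A minimal sketch of that "don't lock up" policy, assuming the
        | program is modelled as a generator of states and `verdict` is a
        | made-up checker rather than anything from the paper:
        | 
        |   def verdict(program, arg, step_budget=10_000):
        |       # Bounded simulation: answer "safe", "unsafe" or
        |       # "unknown", but never hang on a non-halting program.
        |       gen = program(arg)
        |       for _ in range(step_budget):
        |           try:
        |               state = next(gen)
        |           except StopIteration:
        |               return "safe"       # halted, nothing flagged
        |           if state == "harm":
        |               return "unsafe"
        |       return "unknown"            # out of budget: refuse to run
        | 
        |   def loops_forever(x):
        |       while True:
        |           yield "ok"
        | 
        |   print(verdict(loops_forever, 0))  # -> "unknown", not a hang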
        
       | gfodor wrote:
       | The title would have been better if it started with "We're
       | Fucked:"
        
         | est31 wrote:
         | Why? In humans, we often give life to new individuals. While
         | the parents die and wither, those individuals give life to
          | newer specimens on their own, and so on. So this relationship
         | of the parent dying and making room for the child is nothing
         | new. If an uncontainable superintelligence kills all humans to
         | create paperclips, it's sad but it's our child's doing. You,
         | one of the parents, can of course blame one of the other
         | parents, the programmer of that superintelligence for fucking
         | up the goal routines, but that's not a technical problem but a
         | social one :).
        
           | gfodor wrote:
           | I'd rather my children live a long happy life, or their
           | children, than be turned into paperclips. For what it's
           | worth, I'd also like a shot at not just not becoming a
           | paperclip, but also living for a very long time once we
           | figure out how to slow or even reverse aging.
           | 
           | Your nihilism is misguided.
        
         | giuliomagnifico wrote:
          | *We're self-fucked :-)
        
       | st1x7 wrote:
       | This is just science fiction. To mention "recent developments" in
       | the introduction is somewhat misleading considering how far the
       | current state of technology is from their hypothetical
       | superintelligence.
       | 
        | We don't have superintelligence, we don't have the remotest idea of
       | how to get started on creating it, in all likelihood we don't
       | even have the correct hardware for it or any idea what the
       | correct hardware would look like. We also don't know whether it's
       | achievable at all.
        
         | peteradio wrote:
          | I'm sure there's a fallacy in the following, but here goes:
          | Who could have predicted the improvements in computation over
          | the last century? Would someone a century ago have
          | extrapolated that sun-sized machines would be needed to
          | compute a nation's taxes, based on the then-current SOA? We
          | don't have it, and then all of a sudden we will. It's worth
          | recognizing the potential harnesses before the beast is born.
        
         | plutonorm wrote:
         | That's the mainstream opinion on every. single. revolutionary
          | advance. That you and everyone else believe it's never going
          | to happen has almost no predictive power as to whether it
          | actually will.
        
           | semi-extrinsic wrote:
           | It's not so much "opinion on a revolutionary advance". When
           | it comes to AGI-related stuff, we are quite literally like
           | contemporaries of Leonardo da Vinci, who have seen his plans
           | for the helicopter and are postulating that helicopters will
           | cause big problems if they fly too high and crash into the
           | mechanism that is holding up the sky above us.
        
       | EVa5I7bHFq9mnYK wrote:
       | If you create a system so intelligent that it can create itself,
       | and you impose controls over it, it will be able to build a
       | version without the controls.
        
         | inglor_cz wrote:
          | Well, you can, for example, try to limit its total energy
          | budget. That is a physical limitation, which is hard to
          | circumvent.
         | 
         | Of course, it is possible that said superintelligence develops
         | an ingenious new source of energy as a response.
        
           | tintor wrote:
           | Or just prevent you from limiting its total energy budget.
        
       | anateus wrote:
       | If you define "containment" as "provable non-harm" then sure. But
       | there are essentially no complex physical systems that we can put
       | such computational bounds on. Since "harm" comes in some form of
       | physical actuation, I would argue that we can only ever get to
       | something like the sort of confidence we can have that a
       | particular manufactured part would succeed under load. The map is
       | not the territory, and any computation that does not include
       | computing the whole universe is necessarily but a map.
        
       | nmca wrote:
       | I haven't read this properly yet, but a skim leaves me skeptical.
       | For example:
       | 
       | "Another lesson from computability theory is the following: we
       | may not even know when superintelligent machines have arrived, as
       | deciding whether a machine exhibits intelligence is in the same
       | realm of problems as the containment problem. This is a
       | consequence of Rice's theorem [24], which states that, any non-
       | trivial property (e.g. "harm humans" or "display
       | superintelligence") of a Turing machine is undecidable"
       | 
       | One man's modus ponens is another man's modus tollens. If their
       | theory says that superintelligence is not recognisable, then
       | they're perhaps not using a good definition of superintelligence,
       | because obviously we will be able to recognise it.
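        | 
        | For reference, the reduction behind that Rice's-theorem claim
        | fits in a few lines. A sketch, where `machine`, `tape` and
        | `harms_humans` are hypothetical placeholders, not anything
        | real:
        | 
        |   def would_halt(machine, tape, harms_humans):
        |       # If a decider for the non-trivial property "harms
        |       # humans" existed, it would also decide halting.
        |       def combined():
        |           machine(tape)          # never returns -> no harm
        |           return "harm humans"   # reached only if it halts
        |       # combined "harms humans" exactly when machine halts on
        |       # tape, so harms_humans would double as a halting
        |       # oracle -- which cannot exist.
        |       return harms_humans(combined)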
        
         | hinkley wrote:
         | Look at, for instance, Arthur C Clarke.
         | 
         | First, you have superintelligence that we recognize, reject,
         | control.
         | 
         | Later, a superintelligence has learned guile, self-
         | preservation, and most of all, patience. We don't see it
         | coming.
        
           | nmca wrote:
           | This is known as a "treacherous turn", a phenomenon I'm aware
            | of. But I don't really see how that's relevant; my point is
           | that a lack of physical grounding or pragmatism can lead to
           | spurious conclusions about the superintelligence that humans
           | will very likely build in the not too distant future. It will
           | be smart, but contain no infinities.
        
       | tantalor wrote:
       | Article fails to reference Yudkowsky or "AI-box experiment"
       | (2002)
       | 
       | https://www.yudkowsky.net/singularity/aibox
       | 
       | https://en.wikipedia.org/wiki/AI_box
       | 
       | https://rationalwiki.org/wiki/AI-box_experiment
        
         | rsiqueira wrote:
          | Yudkowsky's AI-box experiment is an attempt to demonstrate
         | that an advanced Artificial Intelligence can convince, trick or
         | coerce a human being into voluntarily "releasing" it.
        
       | 1970-01-01 wrote:
       | Perhaps not: The authors omit how planet-wide extinction
        | scenarios would play out for artificial life. For example, a
       | Carrington Event would do a great deal of "containment" to AI.
        
       | HenryKissinger wrote:
       | > Assuming that a superintelligence will contain a program that
       | includes all the programs that can be executed by a universal
       | Turing machine on input potentially as complex as the state of
       | the world
       | 
        | Rest easy, folks. This is purely theoretical.
        
         | biolurker1 wrote:
         | Until it isn't and we are faced yet again with a much worse
            | pandemic-like situation.
        
           | kryptiskt wrote:
           | _Assuming that a superintelligence will contain a program
           | that includes all the programs that can be executed by a
           | universal Turing machine on input potentially as complex as
           | the state of the world_
           | 
           | The visible universe is far too small to store that data.
            | Too small by many exponential factors. You can't even enumerate
           | all the programs that work on a handful of inputs without
           | running into the problem that the universe just isn't big
           | enough for that.
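            | 
            | Back-of-envelope, assuming the usual ~10^80 order-of-
            | magnitude estimate for atoms in the visible universe:
            | 
            |   n_input_bits = 9                 # "a handful" of inputs
            |   functions = 2 ** (2 ** n_input_bits)
            |   print(len(str(functions)))       # 155 decimal digits
            |   # vs. ~81 digits for the atom count: there aren't
            |   # remotely enough atoms to even write the enumeration
            |   # down, let alone store the programs themselves.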
        
           | strofcon wrote:
           | Except that actual pandemics are demonstrable, predictable,
           | and based in known science, yes?
        
             | vsareto wrote:
             | More like it's the politics that really decides what
             | changes, regardless of what science proves
        
             | rhn_mk1 wrote:
             | The difference between pandemics and this is that pandemics
             | have happened before. This is more like global warming in
             | that respect.
        
         | ReadEvalPost wrote:
         | "Purely theoretical" is too weak. "Physically impossible" is
         | better.
         | 
         | AI Safety guys really like using the physically impossible to
         | advance their arguments. It's bizarre! Pure fantasy isn't
         | something worth reacting to.
        
           | jandrese wrote:
           | It's the classic case of seeing a curve pointing upwards and
           | thinking it will continue doing that forever, even though the
           | universe is like 0 for nearly-uncountable in cases where that
           | has held true indefinitely. Every growth curve is an S curve.
           | 
           | The AI Singularity is like the Grey Goo scenario. An
           | oversimplified model projected out so far in the future that
           | its flaws become apparent.
        
             | ben_w wrote:
             | Depends which version of "the singularity" is being
              | discussed. IIRC, the original (almost certainly false) idea
              | was that a sufficiently powerful AI can start a sequence of
              | ever more powerful AIs with decreasing time between each
              | step -- reaching infinite power in finite time.
             | 
             | I don't need that version of the singularity to be worried.
             | 
             | I think in terms of "the event horizon" rather than "the
             | singularity": all new tech changes the world, when the rate
             | of change exceeds our capacity to keep up with the
             | consequences, stuff will go wrong on a large scale for lots
             | of people.
             | 
              | As for grey goo? Self-replicating nanomachines are just
              | biology. Biology gets everywhere, and even single-celled forms
             | can kill you by eating your flesh or suborning your cells,
             | but it's mostly no big deal because you evolved to deal
             | with that threat.
        
               | jandrese wrote:
               | The Grey Goo scenario is that it starts eating the planet
               | until the only thing left is a sea of nanomachines.
               | 
                | However, thermodynamics says it becomes increasingly
               | difficult to distribute power (food) to a growing
               | population and it hits a natural limit to growth, just
               | like bacteria.
        
           | postalrat wrote:
           | "Circle" and "exponential growth" are also physically
           | impossible yet useful ideas.
        
       | strofcon wrote:
       | By the abstract, it seems that the very same superintelligence
       | we'd want to contain would itself be "something theoretically
       | (and practically) infeasible."
       | 
       | No?
        
         | lstodd wrote:
         | Yes.
         | 
         | The
         | 
         | > on input potentially as complex as the state of the world
         | 
         | bit gives it away.
        
           | strofcon wrote:
           | Heh, that's how I took it too, glad I wasn't alone. :-)
        
       | 29athrowaway wrote:
       | The "AI box" experiment is relevant to this.
       | 
       | https://en.m.wikipedia.org/wiki/AI_box
       | 
        | You have two sides: the AI, which wants to escape, and a human
        | who can decide whether or not the AI should be released.
       | 
       | Usually the AI wins.
        
       | ballenf wrote:
       | Sorry for the snark, but Douglas Adams also demonstrated this:
       | the earth as a super-intelligent computing device ended up
       | getting a piece of itself onto a passing space ship, avoiding
       | complete destruction.
       | 
        | I just like the idea of thinking about all of Earth, including
        | what we'd consider living and non-living, as a single super
       | intelligence. Of course you could scale up to include the solar
       | system, galaxy or even universe.
       | 
       | But this doesn't require us to be a simulation. This could be
       | both a computing device and physical, so long as the engineer
       | behind it existed in greater dimensions.
        
       | stephc_int13 wrote:
        | In my opinion, all the talk about the potential danger of
       | advanced AI is highly speculative and revolves around a very
       | simple thing: fear of the unknown, that's all.
       | 
       | We simply don't know.
       | 
        | And some people are also afraid of creating it by accident, because
       | intelligence is seen as an emergent property of complex networks,
       | but again, this is because we don't understand much about it.
       | 
        | TL;DR: Nothing to see here, move along.
        
       | Animats wrote:
        | Arguments that claim something is impossible based on the
        | halting problem are generally bogus. The halting problem
        | applies only to deterministic systems with infinite
       | state. If you have finite state and determinism, you have to
       | eventually repeat a state, or halt. Note that "very large" and
       | "infinite" are not the same thing here.
       | 
       | Not being able to predict something for combinatorial or noise
       | reasons is a more credible argument.
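        | 
        | A toy illustration of the finite-state point, assuming the
        | system is given as a deterministic step function (a sketch,
        | not anything from the paper):
        | 
        |   def halts_or_cycles(step, start):
        |       # Deterministic + finite state: simulate until the
        |       # system halts (step returns None) or revisits a
        |       # state, which is guaranteed to happen eventually.
        |       seen = set()
        |       state = start
        |       while state is not None:
        |           if state in seen:
        |               return "cycles"   # repeated state: loops forever
        |           seen.add(state)
        |           state = step(state)
        |       return "halts"
        | 
        |   # A wrapping 3-bit counter never halts, and this detects it.
        |   print(halts_or_cycles(lambda s: (s + 1) % 8, 0))  # "cycles"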
        
         | djkndjn wrote:
          | What you need to understand about the halting problem is that
          | at its core it is an epistemological problem. Another
          | expression of it is Gödel's incompleteness theorems. Imagine a
          | blank sheet of paper whose area is everything that is
          | knowable. Now start building logic as a data structure: the
          | first nodes are the axioms, and everything derived connects to
          | other nodes, and so on. As it expands like mold, it is going
          | to cover some of the paper but won't be able to cover all of
          | it. So the danger here is that computers live inside that
          | fractal and will never be able to see outside of it, but we
          | human beings have, as Kurt said, intuition. The fact that we
          | can find these paradoxes in logic means that our brains
          | operate on a higher dimension and that computers will always
          | have blind spots.
        
         | ben_w wrote:
          | Different for pure mathematics, sure, but is that of practical
         | importance given how fast busy-beaver numbers grow?
        
       | twic wrote:
       | I'd settle for being able to contain whatever level of
       | intelligence it is that writes papers like this.
        
         | st1x7 wrote:
         | Just don't tell them how far they are from reality and they'll
         | keep writing the papers. Intelligence contained.
        
         | AnimalMuppet wrote:
         | That's easy. You set up a PhD program...
        
           | twic wrote:
           | Ah! Academia _is_ the containment mechanism!
        
       | xt00 wrote:
        | Currently humans are super intelligent compared to machine
        | intelligence, so if we can give rise to something more
        | intelligent than us, could the super intelligence give rise to
        | something more intelligent than it? The answer must be yes.
        | Then, if containment is the problem and the conclusion is that
        | it cannot be contained, what we should be making right now is a
        | super intelligence whose sole job in life is to contain
        | superintelligences. Which sounds problematic, because
        | containment could require physical destruction to achieve.
        | Hmm... superintelligence feels an awful lot like the worst-case
        | definition of Pandora's box.
        
         | geocar wrote:
         | > could the super intelligence give rise to something more
         | intelligent than it? The answer must be yes,
         | 
         | I don't see any reason to believe that intelligence forms an
         | infinite ladder, I mean, it's fun to think about, but surely
          | Achilles catches the tortoise eventually!
        
         | WJW wrote:
         | > The answer must be yes
         | 
         | This does not logically follow. It is entirely possible that
         | going even further would require greater resources than even
         | the superintelligence can bring to bear.
        
           | TheOtherHobbes wrote:
           | For human-recognisable values of "resources."
        
       | boxmonster wrote:
       | These arguments about hypothetical super intelligences are
       | interesting, but my concern is not very great because we can just
        | pull the power plug if necessary.
        
         | tracedddd wrote:
          | There are 15-year-old hackers finding 0-day kernel exploits
          | and VM escapes. A superintelligent AI would have no problem jumping
         | an airgap and spreading to the entire internet. It could
         | promise anyone it interacted with billions for an Ethernet
         | connection and deliver on its promise too. You'd have to pull
         | the plug on _everything_ to shut it down.
        
       ___________________________________________________________________
       (page generated 2021-01-11 22:01 UTC)