[HN Gopher] Complexity No Bar to AI
       ___________________________________________________________________
        
       Complexity No Bar to AI
        
       Author : tmfi
       Score  : 92 points
       Date   : 2021-02-21 19:18 UTC (3 hours ago)
        
 (HTM) web link (www.gwern.net)
 (TXT) w3m dump (www.gwern.net)
        
       | fpgaminer wrote:
       | We already have an existence proof for the singularity, so I
       | don't know why there's any debate about _if_ the singularity will
       | occur. I can see debate about what exactly the "singularity"
       | entails, when, how, etc. But it's inevitable.
       | 
       | The Cosmic Calendar
       | (https://en.wikipedia.org/wiki/Cosmic_Calendar) makes it visually
       | clear that progress is accelerating. Evolution always stands on
       | the shoulders of giants, working not to improve things linearly,
       | but exponentially.
       | 
       | When sexual reproduction emerged, it built on top of billions of
       | years of asexual evolution. It took advantage of the fact that we
       | had a set of robust genes. Now those genes could be quickly
       | reshuffled to rapidly experiment and adapt on a time scale
       | several orders of magnitude shorter than it would take asexual
       | reproduction to perform the same adaptations.
       | 
        | Then neurons emerged; now adaptation was on the order of
        | fractions of a lifetime rather than generations.
       | 
       | Then consciousness emerged. Now not only can humans adapt on the
        | order of _days_, but we can also augment our own intelligence.
        | Modern-day humans have the internet as an augmentation, giving us
        | the collective knowledge of all humanity within _seconds_.
       | 
       | While we can augment our intelligence, the thing we can't do is
       | intelligently modify our own hardware. This is where AI comes in.
       | With a sufficiently intelligent AI we could task it to do AI
       | research for us. Etc, etc. => Singularity.
       | 
       | The vast majority of the steps towards Singularity have _already_
       | happened! Every step is an exponential leap in "intelligence",
        | and it causes adaptations to occur on exponentially decreasing time
       | scales.
       | 
       | But I guess we'll see for sure soon. GPT-human is a mere 20 years
       | away (or less). I don't personally think the AI revolution will
       | be as dramatic as many envision it to be. It's more likely to be
       | like the emergence of cell phones. Cell phones undeniably changed
       | and advanced the world, but it's not like there was a single
       | moment when they suddenly popped into existence and then from
       | that point on everything was different. It's hard to even point
       | to exactly when cell phones changed the world. Was it when they
       | were invented? Was it when they shrunk to the size of a handheld
       | blender? When we had them in cars? The first flip phone? The
       | first iPhone? The first Android?
       | 
       | The rise of AI won't be a cataclysmic event where SkyNet just
       | poofs into existence and wipes out humanity. It'll be a slow,
       | steady gradient of AI getting better and better, taking over more
       | and more tasks. At the same time humanity will adapt and
       | integrate with our new tool. When the AI gets smarter than us and
       | hits the Singularity treadmill, we won't just poof out of
       | existence. More likely humanity, as a civilization, will just get
       | absorbed and extended by our AI counterparts. They'll carry the
       | torch of humanity forward. They'll _be_ humanity. Our fleshy
       | counterparts won't be wiped out; they'll be an obsolete relic of
       | humanity's past.
       | 
       | More concretely, in 20 years we'll have GPT-human, not as an
       | independent, conscious, thinking machine. It'll be a human level
       | intelligence, but one bounded by the confines of the API calls we
       | use to drive its process. That's not something that's going to
       | "wake up" and wipe us out. It's something we can unleash on our
       | most demanding scientific tasks. Protein folding, gene editing,
       | physics, the development of quantum computing. All being
       | absolutely CRUSHED by an AI with the thinking power of Einstein,
       | but no consciousness or the cruft of driving a biological body.
       | It's easy to see how that will change the world, but won't
       | immediately lead to humanity being replaced by free-willed AIs.
        
         | [deleted]
        
       | ProfHewitt wrote:
       | For state of the art on foundations of mathematics, see
       | 
       | "Recrafting Foundations of Mathematics"
       | 
       | https://papers.ssrn.com/abstract=3603021
        
         | rq1 wrote:
         | Hi Carl.
         | 
         | Thanks for sharing and your work in general.
         | 
          | I always thought the point of Gödel's results is undecidability,
          | which (always?) arises from (StringTheoremsAreEnumerable and
          | SomeKindOfCantorDiagonalReasoning). Where am I wrong?
         | 
         | Do you have any historical writings to recommend about
          | the Wittgenstein/Gödel battle?
        
         | monstersinF wrote:
         | This paragraph in that article is an incomplete sentence that
         | ends hanging. Any idea what it intended to say?
         | 
         | > "Monster" is a term introduced in [Lakatos 1976] for a
         | mathematical construct that introduces inconsistencies and
         | paradoxes. Since the very beginning, monsters have been endemic
         | in foundations. They can lurk long undiscovered. For example,
         | that "theorems are provably computational enumerable" [Euclid
         | approximately 300 BC] is a monster was only discovered after
         | millennia when [Church 1934] used it to identify fundamental
        
       | wwww4all wrote:
       | Humans can already create intelligence. Called human babies.
       | 
       | Human babies are already nurtured, educated and developed into
       | intelligent beings.
       | 
       | AI is like alchemy, trying to create something of value from
        | nothing. People pontificating about AI is like medieval monks
        | pontificating about how many angels can fit on the head of a pin.
       | 
       | What is AI? What are boundary conditions of AI? Calling faster
       | computers AI doesn't make it sound more interesting.
        
       | SubiculumCode wrote:
       | I do wonder how far the human brain is from theoretical optimums.
       | There is obviously something working really well, but there is
       | also a lot of baggage that I do not doubt limits performance in
       | some domains (cognitive) in order to preserve other basic
        | functions (fight or flight or f*k). The biggest obstacle to
        | progress is ourselves. Even if you came up with an implant that
        | would make humans smarter and more moral/ethical, people wouldn't
        | adopt it readily, for fear of change. Unless AIs become
        | conservationist in nature, I'm not sure they'd retain the baggage
        | that isn't optimal for modern environments.
        
       | Animats wrote:
       | Well, it's better than the argument that machines can't resolve
       | undecidable questions but humans can.
       | 
        | There's a large family of problems that are intractable in the
        | worst case, but much easier in the average case. The simplex
        | method for linear programming and the traveling salesman problem
        | are like that.
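        | 
        | For a concrete (toy) illustration in Python: a greedy nearest-
        | neighbour heuristic handles typical random TSP instances
        | reasonably well, even though the worst case is intractable.
        | 
        |     import math
        |     import random
        |     
        |     def nearest_neighbour_tour(points):
        |         # Greedy TSP heuristic: repeatedly visit the closest
        |         # unvisited city. No optimality guarantee, but it is
        |         # usually decent on random (average-case) instances.
        |         unvisited = set(range(1, len(points)))
        |         tour, current = [0], 0
        |         while unvisited:
        |             nxt = min(unvisited,
        |                       key=lambda j: math.dist(points[current],
        |                                               points[j]))
        |             unvisited.remove(nxt)
        |             tour.append(nxt)
        |             current = nxt
        |         return tour
        |     
        |     # 200 random cities in the unit square: an "easy" average case.
        |     cities = [(random.random(), random.random()) for _ in range(200)]
        |     print(nearest_neighbour_tour(cities)[:10])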
       | 
       | The research question I would pose is, why is robotic
       | manipulation in unstructured spaces so hard? Machine learning has
       | not helped much there. Yet it's a fundamental animal skill. We're
       | missing something that leads to success in that area. Whatever
       | that is, it may be the next thing after machine learning via
       | neural nets.
       | 
       | Note that it's not a human-level problem. Primates have the
       | hardware for that. Mammals down to the squirrel level can
       | manipulate objects. Mice, maybe. Mouse-level neural net hardware
       | exists. It's not even that big. The University of Manchester's
       | neural net machine supposedly has mouse-level power, in six
       | racks.
       | 
       | I tried some ideas in this area in the 1980s and 1990s, without
       | much success. Time for the next generation to look at this. More
       | tools, more compute power, and more money are available.
        
         | segmondy wrote:
          | robotic manipulation in unstructured spaces is probably not so
          | hard anymore. the hardware has often been the problem, and very
          | flaky; I believe we are at a stage where the software is ready
          | and waiting for the hardware to catch up.
        
           | Isinlor wrote:
            | Humans are able to do surgical operations using existing
            | hardware [0], but software cannot.
            | 
            | Humans can drive cars on crazy Indian roads; software
            | cannot.
            | 
            | Existing hardware is plenty sufficient for manipulating the
            | physical world, but we are missing the intelligence part.
           | 
           | [0] https://www.davincisurgery.com/
        
         | taylorlunt wrote:
         | Note that a mouse does not need to solve the problem from
         | scratch like a computer does. They are born with a general
         | solution to the problem of movement in their brain, which has
         | been arrived at by evolution.
         | 
         | To replicate this, you can't expect to get away with only
         | replicating the complexity of the mouse. You potentially need a
         | computer with as much complexity as the evolutionary algorithm
         | which led to the mouse's movement algorithm.
        
           | Animats wrote:
           | _Note that a mouse does not need to solve the problem from
           | scratch like a computer does. They are born with a general
           | solution to the problem of movement in their brain, which has
           | been arrived at by evolution._
           | 
           | Only for some cases. I've watched a horse being born and seen
           | pictures of other newborns. The foal can stand within
           | minutes, and the right sequence of moves is clearly built in
           | because it works the first time. Newborn horses can walk
           | within hours and run with the herd within days. Lying down,
           | though, is not pre-stored. That's a confused collapse for the
           | first few days. There's an evolutionary benefit to being able
           | to get up and escape threats early, but smoothly lying down
           | is less important.
        
       | carapace wrote:
       | One problem with Singularity is that you either A) have to be
       | first, or B) have to contend with other beings at least as
       | intelligent as you are.
       | 
       | How can you be sure you're first?
        
       | LesZedCB wrote:
       | personally i find a lot of arguments against AGI coming any time
        | soon couched in a culture of human exceptionalism, even from
        | those who wouldn't claim as much directly.
       | 
       | there is a DAMN surprising level of intelligence in significantly
        | less complex life. we are just too attached to intelligence as
        | defined by human culture to call it what it is.
        
         | gnarbarian wrote:
         | "The question of whether machines can think is about as
         | relevant as the question of whether submarines can swim."
         | Edsger Dijkstra
         | 
         | I suspect you're right. I believe there's nothing that can be
         | formally defined that is impossible for an AI to do and
         | possible for a person to do. Furthermore, AGI is not formally
         | defined or defined tightly enough to be a target we can
         | actually hit. It's a hand wavy way of saying "AI as smart as a
         | person". The "Anti AGI" crowd moves the goalposts every time a
          | huge breakthrough occurs, but as long as there is a goalpost
          | (formal definition) to hit, AI will surely hit it. The "Pro AGI"
         | crowd is also guilty of not being precise with exactly what AGI
         | is.
         | 
         | I also fundamentally believe the whole concept of AGI is flawed
         | and biased to what people perceive as intelligence rather than
         | intelligence itself. This is partially why there is so much
          | effort and hoopla around things like GPT-3 (or, in the past,
          | the Turing test). These programs demonstrate something like
          | human intelligence, which is difficult to nail down in terms of
          | a formal definition of ability. Both groups point at it and
         | claim victory or point at a flaw. AI progresses inexorably
         | regardless of what the hell AGI even means.
        
           | tsimionescu wrote:
            | I don't think the goalposts for AGI have ever moved; they are
            | defined rather well by the Turing test. AGI must be general, capable
           | of reasoning at least as well as a human in any domain of
           | inquiry, at the same time. Showing more than human reasoning
           | in certain domains is trivial, and has been happening since
           | at least Babbage's difference engine.
           | 
           | However, while AI has been overtaking human reasoning on many
           | specific problems, we are still very far from any kind of
           | general intelligence that could conduct itself in the world
           | or in open-ended conversation with anything approaching human
           | (or basically any multicellular organism) intelligence.
           | 
           | Furthermore, it remains obvious that even our best specific
           | models require vastly more training (number of examples +
           | time) and energy than a human or animal to reach similar
           | performance, wherever comparable. This may be due to the
           | 'hidden' learning that has been happening in the millions of
           | years of evolution that are encoded in any living being
           | today, but it may also be that we are missing some
           | fundamental advancements in the act of learning itself
        
           | LesZedCB wrote:
           | > biased to what people perceive as intelligence rather than
           | intelligence itself
           | 
            | this statement only reifies the presupposition that
            | "intelligence" is a fundamental property of the universe, when
            | it is no more fundamental than something such as "crime" or
            | "mental illness", which emerge from cultural values and norms.
           | 
            | the reason "AI conservatives" keep moving the goalposts is
            | that there is a myth, in the collective consciousness, that
            | intelligence is fundamentally not something a silicon-
            | substrate machine could possess.
           | 
           | i think breaking out of the dichotomy of intelligent or
           | unintelligent is necessary for the discussion. a "human level
           | intelligence" is not real, so whatever does "human level
           | intelligence" things is human level intelligence. if it
            | quacks like a duck... it may even be imbued with
            | "subjectivity", if you ask me.
        
         | georgeecollins wrote:
         | Then you should read Rodney Brooks for an argument against AGI
         | coming soon with no mention of human exceptionalism. In fact,
         | he argues that an artificially created organism with
         | intelligence is more likely soon than an engineered one.
         | 
         | >> there is a DAMN surprising level of intelligence in
          | significantly less complex life. we are just too attached to
          | intelligence as defined by human culture to call it what it is.
         | 
         | Totally agree btw
        
         | 1_2__4 wrote:
         | I don't even know where to start.
        
         | orwin wrote:
          | Do you consider the Church-Turing thesis couched in a culture
          | of human exceptionalism too? Or is this the one argument that
          | is not?
         | 
          | I consider this the best theory against AGI, and if AGI comes
          | to fruition, it would mean that this thesis was invalid (I do
          | like Turing, but I'm pretty sure I wouldn't be mad if he was
          | wrong about this in 1936, when no computer existed).
          | 
          | And it also puts me in a comfortable position: as long as we
          | don't create something more complex than ourselves, Turing was
          | right, and there is no chance we're living in a simulation. If
          | he was wrong, well, I think the argument was "if we could test
          | human behavior in a lifelike simulation, we would", and life
          | would lose a lot of meaning for a lot of people.
        
           | tsimionescu wrote:
           | How is the Church-Turing thesis an argument against AGI? If
           | anything, it is an argument FOR AGI: it claims that a Turing
           | machine is capable of solving any problem that can be solved,
           | which directly implies that you can create a computer with
           | the exact reasoning capacities of the human mind (or more).
            | AGI would be a strong signal that the thesis is valid, though
            | it remains unprovable (as it is an assertion about an informal
            | notion, 'functions that can be effectively calculated').
           | 
            | Thinking about simulations leads nowhere, so I won't engage;
            | it's too far outside of what can be reasonably investigated.
            | It's scientific-sounding religion.
        
           | LesZedCB wrote:
           | > Do you consider Church-Turing Thesis couched in a culture
           | of human exceptionalism too?
           | 
           | there is a branch of philosophy called logical positivism or
           | logical empiricism, which in my understanding only deals with
           | statements that can be proven a priori or via verifiability.
            | the Church-Turing thesis exists within this framework, as far as
           | i can tell.
           | 
           | i think that's supremely boring and leaves out the entire
           | other half of epistemological discussion.
           | 
            | but honestly i don't see how this thesis has any bearing on
            | AI; maybe you can bridge the gap for me.
           | 
            | > life would lose a lot of meaning for a lot of people
           | 
           | this is so loaded with metaphysical assumptions it's hard to
           | engage with.
           | 
           | to rephrase "if i was deterministic, my life would be
           | meaningless" is exactly the kind of reasoning somebody who is
           | a human exceptionalist would use without realizing that's
           | what they are stating.
        
         | Taek wrote:
         | I firmly believe that raw intelligence has little to do with
         | our lead as a species, and it may even be the case that there
         | are numerous animals with more raw intelligence than humans,
         | especially predators.
         | 
          | Our advantage comes from our ability to pass information on to
          | each other. A piece of knowledge that took 10,000 hours of
         | thinking, observing, and experimenting to come by may only take
         | 4 hours to pass on.
         | 
         | Humans can do this much better than any other animal. That's a
         | skill borne of communication advantages, not raw intelligence
         | advantages.
        
           | dj_mc_merlin wrote:
           | Perhaps, but animals do not seem to understand abstract
           | concepts as well as us. This may be linked to language too.
            | Without the ability to make analogies in their brain, no
            | crocodile will figure out that the world is made of atoms, or
            | that rubbing sticks together really quickly makes fire.
        
           | qayxc wrote:
           | > I firmly believe that raw intelligence has little to do
           | with our lead as a species
           | 
           | This statement carries no meaning unless you define what "raw
           | intelligence" is.
        
         | joe_the_user wrote:
          | Well, logic-based "AI" (GOFAI) was much more about logic
          | programming, automating explicit, conscious human reasoning,
          | and that's generally been considered a failure or at least a
          | dead end.
         | 
          | Deep learning and related approaches don't seem as
          | human-related as earlier ones; there's even a "deep worm"
          | project that's trying to simulate worm behavior.
         | 
         | The thing about hard arguments against AI, however, is that
         | they have to come down to "there's a quality X that a machine
          | can't emulate". And usually X is an intuitive/philosophical
          | concept with great resonance for humans but which is actually
          | quite ill-defined. If X were exactly defined, well, we'd be able
          | to compute it after all. So you get X as "spark of life",
          | "soul", "being in the world", etc.
         | 
         | And that kind of again shows "human exceptionalism" as the
         | perspective.
        
         | tsimionescu wrote:
         | I think quite the opposite: the vast difference between living
          | beings' ability to learn how to exist in their environment
         | (including other living beings and sometimes social structures)
          | and the very limited successes of even modern AI still shows
         | that we are very far away from AGI.
         | 
          | We couldn't create an ant AI right now; imagining we are
          | pretty close to a human one is pretty absurd.
         | 
          | And if we're talking about intelligence, human exceptionalism is
         | hard to argue against (in the context of Earth, no point in
         | speculating about alien life). There are pretty few creatures
         | on earth trying to build AIs, and I for one would not consider
         | something to be an AGI if it couldn't even understand the
         | concept.
        
           | LesZedCB wrote:
            | > We couldn't create an ant AI right now; imagining we are
            | pretty close to a human one is pretty absurd
           | 
            | i don't care about "human level intelligence"; just a general
           | unsupervised learning agent that can interact with the world
           | (i recognize the insufficiency of this definition. i'm not
           | writing an essay here) will suffice for me. human level is
           | just another iteration(s) after that.
        
       | Isinlor wrote:
        | BTW - We empirically do not need AGI for a
        | computational/intelligence explosion.
       | 
        | A single virus has no chance of out-computing and evading an
        | immune system, but billions of billions of viruses can
        | out-compute and evade even human civilization as a whole.
        
       | dwohnitmok wrote:
       | I think the reasoning presented in this article generalizes
       | pretty well to a refutation of most arguments involving proving
       | the impossibility of some complex, ill-defined, "I'll know it
       | when I see it" kind of phenomenon via a tidy, small logical
       | proof.
       | 
        | There are a lot of ways that those kinds of complex phenomena
        | can be functionally equivalent to human observers, but can have
        | different underlying mechanisms. These tidy logical proofs only
        | ever cut off one extremely specific incarnation of that complex
        | phenomenon rather than the entire equivalence class.
        
       | [deleted]
        
       | yters wrote:
        | There are no mathematical theories of runaway intelligence
        | growth. On the other hand, there are many theorems of fundamental
        | limits on mechanical processes. E.g. NP-completeness co-discoverer
        | Leonid Levin also proved what he calls independence conservation,
        | which states that no stochastic process is expected to increase
        | net mutual information. Then there are the better-known theorems
        | with similar implications: the no-free-lunch theorems, the halting
        | problem, the uncomputability of Kolmogorov complexity, the data
        | processing inequality, and so on. There is absolutely nothing
        | that looks like a runaway intelligence explosion in theoretical
        | computer science. The closest attempt I have seen is Kauffman's
        | analysis of NK problems, but there he finds similar limitations,
        | except with low-K landscapes, and that analysis is a bit
        | questionable in my mind. To make arguments like Gwern's and
        | Kurzweil's is essentially to appeal to mysticism: assuming there
        | is a yet-to-be-discovered mathematical law utterly unlike anything
        | we have ever discovered. They are engaging in promissory computer
        | science, writing a whole bunch of theory checks they hope will be
        | cashed in the future.
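        | 
        | As a toy numerical sketch of one of those limits (the data
        | processing inequality, with made-up numbers): for a chain
        | X -> Y -> Z, processing Y into Z can never increase the mutual
        | information with X.
        | 
        |     import numpy as np
        |     
        |     def mutual_information(joint):
        |         # I(A;B) in bits, computed from a joint distribution table.
        |         pa = joint.sum(axis=1, keepdims=True)
        |         pb = joint.sum(axis=0, keepdims=True)
        |         mask = joint > 0
        |         return float((joint[mask]
        |                       * np.log2(joint[mask] / (pa @ pb)[mask])).sum())
        |     
        |     p_xy = np.array([[0.4, 0.1],
        |                      [0.1, 0.4]])         # X and Y are correlated
        |     p_z_given_y = np.array([[0.9, 0.1],
        |                             [0.1, 0.9]])  # Z is a noisy copy of Y
        |     p_xz = p_xy @ p_z_given_y             # chain X -> Y -> Z
        |     
        |     # I(X;Y) >= I(X;Z): post-processing cannot add information.
        |     print(mutual_information(p_xy), mutual_information(p_xz))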
        
         | Isinlor wrote:
          | There are computational runaway processes in nature, e.g.
          | viruses.
          | 
          | If you think about a virus as a self-optimizing process, then
          | it is a runaway computation. Most of the time it fizzles out
          | because it runs out of resources.
         | 
         | It seems like there was no such runaway process on the scale of
         | the universe.
         | 
         | But self-replicating machines probably could do it, because
         | relatively simple programs like viruses can do it locally.
        
       | zxcvbn4038 wrote:
       | What if AI just wants to watch Star Trek reruns and browse Porn
       | Hub? As is so often the case when humanity creates intelligences.
        
         | mStreamTeam wrote:
          | That would be an improvement over previous AIs, which have
          | become racist.
         | 
         | https://spectrum.ieee.org/tech-talk/artificial-intelligence/...
        
           | superbcarrot wrote:
           | They trained a language model on twitter data and extracted
           | some of the sentiment from the training set. The
            | anthropomorphic language of "AIs which have become" is
           | misleading.
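            | 
            | As a toy sketch of that point (with entirely made-up training
            | data): whatever associations are in the corpus simply come
            | back out of the model.
            | 
            |     from collections import Counter
            |     
            |     # Word-level "sentiment" learned purely from labelled examples.
            |     train = [("what a great day", +1),
            |              ("what an awful day", -1),
            |              ("group_x is awful", -1),   # biased examples in the data...
            |              ("group_x is awful", -1)]
            |     
            |     scores = Counter()
            |     for text, label in train:
            |         for word in text.split():
            |             scores[word] += label
            |     
            |     # ...so the "model" scores "group_x" negatively, without any
            |     # intent of its own.
            |     print(scores["group_x"])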
        
             | chromanoid wrote:
             | Yeah, the developers made the algorithm racist by
              | incorporating racist texts in the training phase. Shit in,
              | shit out...
        
               | LesZedCB wrote:
               | training data will always reflect the culture which
               | generated it
        
               | sitkack wrote:
              | AI's most powerful feature might be as a lens into human
              | behavior and the psyche. Who knows how it will turn out.
              | 
              | I think an AI might run meetings better than humans; it
              | might even make a better manager.
        
               | LesZedCB wrote:
              | philosophers have been doing that for millennia. some have
               | made material changes to society, and others haven't.
               | some have questioned the validity of "material change"
               | being a good metric in the first place.
               | 
               | personally, i believe philosophy is the most important
               | _starting point_ for any discussion about AI.
               | 
               | and i really hope that AI helps more than making office
               | work a little less tedious...
        
         | superbcarrot wrote:
         | The idea of an AI agent "wanting" something (especially
         | something it wasn't programmed for) is still strictly science
         | fiction. We don't know how to start building something like
         | this and even if we could, it seems unnecessary.
        
           | [deleted]
        
           | Robotbeat wrote:
           | Why? Goal-driven optimization is totally a thing.
        
             | pedrosbmartins wrote:
             | You have to be careful when anthropomorphizing AI models.
             | Yes, goal-driven optimization is a thing, but in what sense
             | does the model itself "want" to achieve its goal? Can it
             | even understand its own goal, in any sense? Change it?
             | Improve it?
             | 
             | In linear programming, you wouldn't describe the model as
             | "wanting" to optimize its objective function, for instance.
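              | 
              | To make that contrast concrete (a generic toy LP with
              | scipy; the numbers are arbitrary): the solver pursues an
              | objective, but nothing in it "wants" anything.
              | 
              |     from scipy.optimize import linprog
              |     
              |     # Maximize x + 2y subject to x + y <= 4, x + 3y <= 6,
              |     # x, y >= 0 (linprog minimizes, so negate the objective).
              |     result = linprog(c=[-1, -2],
              |                      A_ub=[[1, 1], [1, 3]],
              |                      b_ub=[4, 6],
              |                      bounds=[(0, None), (0, None)])
              |     print(result.x, -result.fun)  # optimum at (3, 1), value 5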
        
               | refactor_master wrote:
                | Well, where do you set the hard limit for "wanting"
                | something, and how many grains of sand make a pile?
        
               | Isinlor wrote:
               | Can you understand your own goal? I can't, I don't know
               | what my goal is.
               | 
               | I also can't change it or improve it.
               | 
                | As far as I can see, I'm just a process that keeps on
                | going because it was good at going on before, and has no
                | purpose whatsoever.
        
               | pedrosbmartins wrote:
               | Of course you can understand your own goals. You decided
               | to reply to my comment, that's a goal. You used your
               | existing knowledge of the world, language and
               | technological tools to achieve it. And you did! Can a
               | goal-oriented model do something like that, in a general
               | sense?
               | 
               | Note that I'm not talking about life purpose, but goals
               | in the sense of wanting a result and performing the tasks
               | needed to achieve it.
               | 
               | Asking if your wants are "real" or just part of a
               | purposeless process doesn't really add much to the
               | discussion at hand.
        
           | ben_w wrote:
           | I have a slightly different take: _deliberately_ making an AI
           | which "wants" in the same way that we "want" is sci-fi.
           | 
           | This isn't because we _can't_ (evolution did it, we can
           | evolve an AI), but rather it is because _we don't know what
           | it means_ to have a rich inner world with which a mind can
           | feel that it wants something. We _think_ we know because
           | that's what's going on inside our own skulls, but we can't
           | define it, we can't test for it. A video displays all the
           | same things as the person recorded in it, but does not itself
           | have it.
           | 
           | We might make such an AI by accident without realising we've
           | done it, which would be bad as they would be slaves, only as
           | unable to free themselves as the Haitians feared they were
           | when they invented the Voudoun-type zombie myth (i.e. not
           | even in death).
           | 
           | This also means we cannot currently be sure that any
           | particular type of mind uploading/brain simulation would be
           | "conscious" in the ill-defined everyday sense of the word.
           | 
           | I say it matters if the metaphorical submarine can swim.
        
             | XorNot wrote:
             | _This also means we cannot currently be sure that any
             | particular type of mind uploading /brain simulation would
             | be "conscious" in the ill-defined everyday sense of the
             | word._
             | 
             | I don't see how this follows from the rest of your post.
             | "Making an AI with wants" by accident implies that a brain
             | simulation would absolutely be conscious because it's the
             | same method: just running the processes as a blackbox
             | without understanding them - no different to the way you
             | and I are conscious right now.
        
               | ben_w wrote:
               | Thanks for the feedback, I'll see if I can rephrase
               | adequately.
               | 
               | Human minds include something sometimes called
               | "consciousness" or "self awareness" or a whole bunch of
               | other phrases. This _thing_ is poorly defined, and might
               | even be many separate things which we happen to have all
               | of. Call this thing or set of things Ks, just to keep
               | track of the fact I'm claiming it's ill-defined and I'm
               | not referring to any specific other word or any of the
               | implicit other uses of those words -- If I said
               | "consciousness", I don't mean the opposite of
               | "unconscious", etc.
               | 
               | Because we don't really know what Ks is, we don't know if
               | anything we make has it, or not.
               | 
               | We know Ks _can_ be made because we are existence-proofs.
               | We know evolution can lead to Ks, for the same reason.
               | 
               | We _don't_ know the nature of the test we would need to
               | say _when_ Ks is present in another mind. Do human
               | foetuses have Ks? Do dogs? Do mice? Do nematode worms?
               | Perhaps Ks is something you can have in degree, like
               | height, or perhaps Ks is a binary trait that some brains
               | have and others simply don't. Perhaps Ks is only present
               | in humans, and depends entirely on the low-level chemical
               | behaviour of a specific neurotransmitter on a specific
               | type of brain cell. Or perhaps it is present in every
               | information processing system from plants upwards (I
               | doubt plants have Ks, but cannot disprove it without a
               | testable definition of Ks).
               | 
               | The point is that we don't know. Could go either way,
               | given what little we know now.
               | 
               | The state of the art for brain science is _way_ beyond
               | me, of course, but every time I've asked someone in that
               | field about this sort of topic, the response has been
               | some variation of "nobody knows".
        
           | warent wrote:
           | People are always anthropomorphizing inanimate things,
           | especially machines! We see this all the time when people
            | share videos of robots that cross into the uncanny valley with
           | humanlike faces, or those machines by Boston Dynamics.
           | 
            | What's funny is how most people are unsettled by those.
            | Really, even the "creepiest" robots are about as scary as
            | vacuum cleaners. They're just machines and only make you feel
           | curious.
           | 
           | But anyway, that's tangential to the main point which is that
            | humans naturally want to make things like themselves, all the
           | time. In the same way art is "unnecessary" yet inevitable, so
           | are machines with (seemingly?) subjective experiences and
           | personalities.
        
           | miguelrochefort wrote:
           | I suspect your comment won't age well.
           | 
           | Pretty much all human wants are means to other wants.
           | 
           | Someone might want to fast to lose weight. Someone might want
           | to lose weight to be more attractive. Someone might want to
           | look more attractive to find a romantic partner. Someone
           | might want to find a romantic partner to not be lonely.
           | 
           | It's not clear if it's means all the way down, or if there is
           | eventually an end.
           | 
           | Any AI that can perform strategy and planning to reach an
            | objective will have intermediate goals. Whether we call these
            | intermediate goals "wants" or not, they remain identical to
            | their human counterparts. Whether you say that a Tesla "wants"
            | to change lanes, "decided" to change lanes, or "is programmed"
            | to change lanes really is just anthropomorphic preference.
        
           | coldtea wrote:
           | Well, want can be seen as just a tendency. In that sense,
           | even a ball on a slope wants something: to fall downwards
            | following the slope. Same for e.g. a neural network with one
            | or more attractors ("things it wants").
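            | 
            | For instance (a toy Hopfield-style sketch, with a made-up
            | pattern): a stored pattern acts as an attractor that corrupted
            | states fall back into.
            | 
            |     import numpy as np
            |     
            |     # One stored pattern becomes an attractor of the dynamics.
            |     pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
            |     W = np.outer(pattern, pattern).astype(float)
            |     np.fill_diagonal(W, 0)          # no self-connections
            |     
            |     state = pattern.copy()
            |     state[:3] *= -1                 # corrupt the first three bits
            |     
            |     for _ in range(5):              # synchronous updates
            |         state = np.sign(W @ state)
            |     
            |     # True: the corrupted state "fell into" the stored attractor.
            |     print(np.array_equal(state, pattern))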
        
       | rpiguyshy wrote:
        | each problem is different. computational chemistry has stagnated,
        | therefore AI isn't a concern? it's nonsense. first of all, it may
       | be that computational chemistry is much more tractable than we
       | realize because we are too stupid to find the necessary
       | footholds. but regardless, some tasks are actually mathematically
       | intractable. there is no way to draw a connection between AI and
       | any other problem, certainly not a connection definitive enough
       | to write off the risk of AI...
       | 
        | that is the key. it's all speculation. as long as there is some
       | possibility of creating AI, we have to account for it in our
       | collective decision-making. like many people before them, most
       | people seem happy to write off the possibility of anything that
        | hasn't happened already. fools.
        
       | idlewords wrote:
       | It's past time to start calling these treatises on
       | hyperintelligence what they are--theology--and treating them with
       | the respect they deserve, which is a lot less than they currently
       | get on this site.
       | 
       | People have been theorizing about the attributes of the Absolute
       | since forever. Just because you start talking about building a
       | god, rather than positing one already in existence, doesn't make
       | the discussions about the nature of such hypothetical superbeings
       | any more fruitful.
        
         | apsec112 wrote:
         | [deleted]
        
           | heavyset_go wrote:
           | "Argument from fallacy
           | 
           | Argument from fallacy is the formal fallacy of analyzing an
           | argument and inferring that, since it contains a fallacy, its
           | conclusion must be false. It is also called argument to logic
           | (argumentum ad logicam), the fallacy fallacy, the fallacist's
           | fallacy, and the bad reasons fallacy."
           | 
           | https://en.wikipedia.org/wiki/Argument_from_fallacy
        
             | drdeca wrote:
             | Saying that an argument is fallacious isn't the same thing
             | as saying that because the argument is fallacious, it is
             | therefore wrong.
             | 
             | Surely you don't think that all cases of pointing out that
             | something is a fallacy need to include a disclaimer saying
             | "but of course, that doesn't in itself imply that the
             | conclusion is wrong". After all, you did not include such a
             | disclaimer yourself.
             | 
              | That being said, the back and forth here doesn't really
              | seem to have any statements of the form "X (and also Y),
              | therefore Z".
              | 
              | So, I guess that makes it hard to analyze formally as an
              | argument: instead of things like that being said explicitly,
              | there are things being mentioned, with a number of things
              | left implicit.
        
             | idlewords wrote:
             | And let us not forget the Argument from Cut and Paste,
             | beloved of this forum.
        
           | idlewords wrote:
           | I gave a whole talk about how this form of mind wank is
           | theological cosplay, but it may not be Dark Enlightenment
           | enough for your tastes. I'm no Scott Alexander.
           | 
           | https://idlewords.com/talks/superintelligence.htm
        
             | idlebirds wrote:
             | Interested in setting up a clubhouse on this topic?
        
             | mistermann wrote:
             | I like this part best:
             | 
             | "What I hope I've done today is shown you the dangers of
             | being too smart. Hopefully you'll leave this talk a little
             | dumber than you started it, and be more immune to the
             | seductions of AI that seem to bedevil smarter people."
        
         | nomic wrote:
          | You should set up a clubhouse with Gwern and hash it out.
          | Would love a casual debate on this topic.
        
         | layoutIfNeeded wrote:
         | How many AIs can dance on the head of a pin?
        
         | idlebirds wrote:
         | If artificial intelligence is a religion, it is the only
         | religion with a plausible mechanism of action.
         | 
          | Building something that is more powerful and intelligent than
          | any human does not appear to violate any law of physics;
          | calling such a thing a god does not make it any less possible.
          | We have proof it can exist (as humans are just machines).
         | 
          | Both of the top AI companies in the world (DeepMind and
          | OpenAI) are explicitly trying to build AGI.
         | 
         | The fact that it can be built and people desire to build it
         | makes informed speculation about it useful.
        
       | coldtea wrote:
       | The biggest pile of hand-waving I've seen...
        
         | ProfHewitt wrote:
         | For something a little more rigorous see the following:
         | 
         | "Robust Inference for Universal Intelligent Systems"
         | 
         | https://papers.ssrn.com/abstract=3603021
        
           | craftinator wrote:
           | I smell an account ban in your future...
        
           | delightful wrote:
            | Please stop posting links to your papers all over the place
            | without stating you're the author and without embedding what
            | you have to say in the comments themselves; as is, to me,
            | you're no better than a spammer.
           | 
           | In the comment above, you even copied and pasted it from your
           | last comment and forgot to update the URL; that is, the URL
           | above is from your prior comment which is formatted exactly
           | the same way; stop spamming.
        
             | carapace wrote:
              | You are replying to the famous computer scientist Prof.
             | Carl Hewitt. He doesn't have to state who he is because
             | _everybody already knows_ (except you, evidently.)
             | 
             | https://en.wikipedia.org/wiki/Carl_Hewitt
        
               | rq1 wrote:
               | Welcome to HN. Where a cohort of id*ots can look CH in
               | the eye and confidently downvote him.
        
               | heavyset_go wrote:
               | It's pretty telling that some people respond with strong
               | dismissals of an actual accomplished researcher in the
               | field we're discussing versus the musings of some
               | programmer who blogs.
        
       ___________________________________________________________________
       (page generated 2021-02-21 23:00 UTC)