[HN Gopher] Statement on AI Risk
       ___________________________________________________________________
        
       Statement on AI Risk
        
       Author : zone411
       Score  : 254 points
       Date   : 2023-05-30 10:08 UTC (12 hours ago)
        
 (HTM) web link (www.safe.ai)
 (TXT) w3m dump (www.safe.ai)
        
       | endisneigh wrote:
       | AI has risks, but in my honest to god opinion I cannot take
        | anyone seriously who says, without any irony, that AI poses a
       | legitimate risk to human life such that we would go extinct in
       | the near future.
       | 
       | I challenge anyone to come up with a reason why AI should be
       | regulated, but not math in general. After all, that dangerous
       | Linear Al'ge'bra clearly is terrorizing us all. They must be
       | stopped!11 Give me a break.
        
         | lb4r wrote:
         | > AI has risks, but in my honest to god opinion I cannot take
          | anyone seriously who says, without any irony, that AI poses a
         | legitimate risk to human life such that we would go extinct in
         | the near future.
         | 
         | You are probably thinking of AI as some idea of a complete
         | autonomous being as you say that, but what about when 'simply'
         | used as a tool by humans?
        
           | endisneigh wrote:
           | Same could be said about the internet at large, no?
        
             | lb4r wrote:
             | Are you saying that the internet at large could pose a
             | 'legitimate risk to human life such that we would go
             | extinct in the near future,' or do you disagree that AI,
             | when used as a tool by humans, could pose such a risk?
        
               | endisneigh wrote:
               | I am saying there is no distinction. If AI is a risk to
               | humanity, then the internet in general must be as well.
        
               | lb4r wrote:
               | So if there is no distinction, by your own words, can you
               | take yourself seriously or not? That is, by your own
               | words, both AI and the Internet either pose a risk or
               | they both do not; 'there is no distinction.'
        
               | endisneigh wrote:
               | I do not think the internet, or AI in its current form,
                | is an existential risk to humanity, no.
        
               | lb4r wrote:
               | I think if you were referring to AI in its 'current form'
               | all along, then most people will probably agree with you,
               | myself included. But 20 years from now? I personally
               | think it would be arrogant to dismiss the potential
               | dangers.
        
               | endisneigh wrote:
               | if we are talking about regulating something now, we must
               | talk about capabilities now. there's no point in talking
               | about nonexistent technology. should we also regulate
               | teleportation? it's been done in a lab.
               | 
               | if AI actually is a threat, then it can be regulated.
               | it's not a threat now, period. preemptively regulating
               | something is silly and a waste of energy and political
               | capital.
        
               | lb4r wrote:
               | You added the paragraph about regulation after I had
               | written my comment to your initial post, so I was really
               | only talking about what I initially quoted. The question
               | about regulation is complex and something I personally
               | have yet to make up my mind about.
        
               | clnq wrote:
                | In retrospect, the internet has done a lot to stifle
               | human progress or thriving through proliferation of
               | extremist ideas and overwhelming addictiveness.
               | 
               | Just take the recent events alone - COVID-19 would not
               | have been as much of a threat to humanity if some people
                | hadn't built echo chambers on the internet with
               | tremendous influence over others where they would share
               | their unfounded conspiracy theories and miracle cures (or
               | miracle alternatives to protecting oneself).
               | 
               | But there is a lot more. The data collection through the
               | internet has enabled politicians who have no clue how to
               | lead to be elected through just saying the right things
               | to the largest demographic they can appeal to. Total
               | populism and appeasing the masses has always been an
                | attractive strategy for politicians, but at least they
                | could not execute it effectively before. Now, everyone with
               | enough money can. And this definitely stifles human
               | progress and enshrines a level of regression in our
               | institutions. Potentially dangerous regression,
               | especially when it involves prejudice against a group or
               | stripping away rights, just because people talk about it
               | in their DMs on social media and get binned into related
               | affinity buckets for ads.
               | 
               | Then there is the aspect of the internet creating
               | tremendous time-wasters for a very large proportion of
               | the population, robbing humanity of at least a million
               | man-years of productivity a day. It is too addictive.
               | 
               | It has also been used to facilitate genocides, severe
               | prejudice in large populations, and other things that are
               | extremely dangerous.
               | 
               | High risk? Maybe not. A risk, though, for sure. Life was
               | significantly more positive, happier and more productive
                | before the internet. But the negative impact the internet
                | has had on our lives and human progress isn't all that it
                | could have had. When a senile meme president gets the
                | nuclear codes thanks in part to a funny frog picture on
                | the internet, I think that is enough to say it poses a
                | risk of extinction.
        
               | lb4r wrote:
               | I think your comment more or less summarizes and combines
               | Scott Alexander's 'Meditations on Moloch', and Yuval Noah
               | Harari's 'Sapiens.' Humans were arguably the happiest as
               | hunter-gatherers according to Harari, but those who
               | survived and thrived were those who chose a more
               | convenient and efficient way of living, at the cost of
               | happiness and many other things; you are either forced to
               | participate or get left behind.
        
               | endisneigh wrote:
               | without the internet more people would have died from
                | COVID, simply because information about what it is
                | wouldn't have been disseminated to begin with.
        
               | clnq wrote:
               | Most governments have been disseminating the information
                | through many other media channels along with the internet.
               | Aside from one or two beneficial articles I read about
               | COVID-19 on the web, I don't think I have received any
               | crucial information there.
               | 
               | The internet could have been used as a tool to mobilise
               | people against gross government negligence involved in
               | handling COVID-19 response in many countries, but instead
               | most critical pieces of government response were just
                | consumed as the outrage porn they were, in part, written to
               | be.
               | 
               | Overall, I have learned nothing useful about the pandemic
               | from the internet, and I have been consuming a lot of
               | what was on there, reading all the major news outlets and
               | big forums daily like a lot of us. This is not to say
                | that one could not possibly use the internet for good during
               | COVID-19, just that it hasn't been used that way,
               | generally.
        
         | whinenot wrote:
         | Isn't the threat that we become so trusting of this all-knowing
         | AI that WOPR convinces us a missile strike is imminent and the
         | US must launch a counter strike thus truly beginning the Global
         | Thermonuclear War?
        
           | endisneigh wrote:
           | This is already true today with politicians.
        
       | habosa wrote:
       | But yet, it's full steam ahead. Many if not all of the
       | signatories are going to do their part to advance AI even as they
       | truly believe it may destroy us.
       | 
       | I've never seen such destructive curiosity. The desire to make
       | cool new toys (and yes, money) is enough for them to risk
       | everything.
       | 
       | If you work on AI: maybe just ... stop?
        
         | lxnn wrote:
         | The problem is that's unilateralism.
        
       | blueblimp wrote:
       | This is way better than the open letter. It's much clearer and
       | much more concise, and, maybe most importantly, it simply raises
       | awareness rather than advocating for any particular solution. The
       | goal appears to have been to make a statement that's non-obvious
       | (to society at large) yet also can achieve agreement among many
       | AI notables. (Not every AI notable agrees though--for example,
       | LeCun did not sign, and I expect that he disagrees.)
        
         | a_bonobo wrote:
         | > it simply raises awareness
         | 
          | I don't think it simply raises awareness - it's a biased
          | statement. Personally, I don't think the event it warns about is
          | likely to happen. It feels a bit like the current trans panic
         | in the US: you can 'raise awareness' of trans people doing this
         | or that imagined bad thing, and then use that panic to push
         | your own agenda. In OpenAI's case, they seem to push for having
         | themselves be in control of AI, which goes counter to what, for
         | example, the EU is pushing for.
        
           | sebzim4500 wrote:
           | In what sense is this a 'biased statement' exactly?
           | 
           | If a dozen of the top climate scientists put out a statement
           | saying that fighting climate change should be a serious
           | priority (even if they can't agree on one easy solution)
           | would that also be 'biased'?
        
             | revelio wrote:
             | Yes it would? Why do you think it wouldn't?
        
             | a_bonobo wrote:
             | Climate change is a generally accepted phenomenon.
             | 
             | Extinction risk due to AI is _not_ a generally accepted
             | phenomenon.
        
               | sebzim4500 wrote:
                | Now it is. When climate scientists were first sounding
                | the alarm, they got the same response that these people
                | are getting now. For example:
               | 
               | "This signatory might have alterior motives, so we can
               | disregard the whole statement"
               | 
               | "We haven't actually seen a superintelligent AI/manmade
               | climate change due to CO2 yet, so what's the big deal?"
               | 
               | "Sure maybe it's a problem, but what's your solution?
               | Best to ignore it"
               | 
               | "Let's focus on the real issues, like not enough women
               | working in the oil industry"
        
               | JoeAltmaier wrote:
               | That's curiously the standard crackpot line. "They
               | doubted Einstein! They doubted Newton! Now they doubt
               | me!" As if an incantation of famous names automatically
               | makes the crackpot legitimate.
        
               | A4ET8a8uTh0 wrote:
                | But that is the point. Just because the scientific
                | community is in agreement does not guarantee that they
                | are correct. It simply signifies that they agree on
                | something.
                | 
                | Note the language shift from 'tinfoil hat' (because
                | 'tinfoil hat' stopped being an appropriate insult after
                | so many of their conspiracy theories - also a keyword -
                | were proven true) to 'crackpot'.
        
               | anonydsfsfs wrote:
               | The signatories on this are not crackpots. Hinton is
               | incredibly influential, and he quit his job at Google so
               | he could "freely speak out about the risks of A.I."
        
               | JoeAltmaier wrote:
                | Yet the lame rationalization was similar to that of a
                | crackpot (see the previous comment).
                | 
                | The correct approach is, as you point out, to appeal to
                | the authority of the source.
        
               | [deleted]
        
               | a_bonobo wrote:
               | We have had tangible proof for climate change for more
               | than 80 years; predictions from 1896, with good data from
               | the 1960s.
               | 
               | What you are falling for are fossil industry talking
               | points.
               | 
                | We have not had _any_ proof that AI will pose a threat as
                | OpenAI and OP's link outline; nor will we have any
               | similar proof any time soon.
        
               | staunton wrote:
               | In retrospect, you can find tangible proof from way back
               | for anything that gets accepted as true. The comparison
               | was with how climate change was discussed in the public
               | sphere. However prominent the fossil fuel companies'
               | influence on public discourse was at the time, the issues
                | were not taken seriously (and still aren't by very many).
               | The industry's attempts to exert influence at the time
               | were also obviously not widely known.
               | 
               | Rather than looking for similarities, I find the
               | differences between the public discussions (about AI
               | safety / climate change) quite striking. Rather than
               | stonewall and distract, the companies involved are being
               | proactive and letting the discussion happen. Of course,
                | their motivation is some combination of attempted
               | regulatory capture, virtue signaling and genuine concern,
               | the ratios of which I won't presume to guess.
               | Nevertheless, this is playing out completely differently
                | so far from e.g. tobacco, human cloning, CFCs or oil.
        
               | pixl97 wrote:
               | >Extinction risk due to AI is not a generally accepted
               | phenomenon
               | 
               | Why?
               | 
               | You, as a species, are the pinnacle of NI, natural
               | intelligence. And with this power that we've been given
               | we've driven the majority of large species, and countless
               | smaller species to extinction.
               | 
               | To think it outside the realms of possibility that we
               | could develop an artificial species that is more
               | intelligent than us is bizarre to me. It would be like
               | saying "We cannot develop a plane that does X better than
               | a bird, because birds are the pinnacle of natural flying
               | evolution".
               | 
               | Intelligence is a meta-tool, it is the tool that drives
               | tools. Humanity succeeded above all other species because
               | of its tool using ability. And now many of us are hell
               | bent on creating ever more powerful tool using
               | intelligences. To believe there is no risk here is odd in
               | my eyes.
        
               | mitthrowaway2 wrote:
               | Perhaps open letters like this are an important step on
               | the path to a phenomenon becoming generally accepted. I
               | think this is called "establishing consensus".
        
             | deltaninenine wrote:
             | It is technically biased. But biased towards truth.
        
       | bart_spoon wrote:
       | Lots here commenting about how this is just an attempt to build a
       | moat through regulatory capture. I think this is true, but it can
       | simultaneously be true that AI poses the grave danger to human
       | society being warned about. I think it would be helpful if many
       | of those mentioned in the article warning against the dangers of
       | AI were a bit more specific on substantive ways that danger may
       | manifest. Many read these warnings and envision Skynet and
       | terminator killbots, but I think the danger is far more mundane,
        | and involves a hyper-acceleration of things we already see today:
       | a decay in the ability to differentiate between real and
       | fabricated information, the obsoletion of large swathes of the
       | workforce with no plans or systems in place to help people
       | retrain or integrate into the economy, at a scale never before
       | seen, the continued bifurcation of the financial haves and have-
       | nots in society, the rampant consumption and commodification of
        | individuals' data and privacy invasion, AI tools enabling
       | increased non-militaristic geopolitical antagonism between
       | nations in the form of propaganda and cyberattacks on non-
       | military targets, increased fraud and cybercrime, and so on.
       | 
       | Basically none of these are new, and none will directly be the
       | "extinction" of the human race, but AI very plausibly could
       | intensify them to a scale and pace that human society cannot
        | handle, and their knock-on effects lead to what amounts to a car-
       | crash in slow motion.
       | 
       | It is almost certainly the case that Altman and the like are
       | simultaneously entrenching themselves as the only ones who get to
       | play ball, but that doesn't mean the threats do not exist. And
       | while I'm sure many on HackerNews tend to be more of the
       | libertarian, move fast and break things mindset, I personally
       | would prefer if society would move to a more proactive, fire-
       | prevention method of operation over the current reactive, fire
       | extinguishing one, at least where this is concerned.
        
       | thrillgore wrote:
        | If there's anyone who can speak to the risk of AI, it's Sam
        | Altman, a signatory of this letter, CEO of OpenAI, a member of
       | Y Combinator, and a contributor to Hacker News.
       | 
       | Instead of making this a diversionary puff piece, I would like to
       | hear Sam provide tangible feedback on how we can mitigate the
        | risks AI brings us, since he is the one who started the AI
       | revolution.
        
       | Spk-17 wrote:
        | It seems more like an exaggeration to me; an AI will always need
        | the inputs that a human can generate with their own creativity.
        | If something bad ever happens, it will be for various reasons,
        | three of which are vanity, naivety, and malice.
        
       | NhanH wrote:
        | How does one reconcile this with OpenAI claiming they will leave
        | the EU if the bloc "over-regulates"?
       | 
       | At extinction level threat, no regulation is over-regulation.
        
       | [deleted]
        
       | deegles wrote:
       | Serious question... where can I read the best summaries of the
       | arguments in favor of "AGI will destroy humanity"? and also
       | arguments against? I'm not convinced we can predict how it will
       | behave.
        
         | EamonnMR wrote:
         | Stuart Russell's book Human Compatible articulates it well. He
         | agrees that we can't predict how it would behave, and that's
         | what he's worried about.
        
       | endisneigh wrote:
       | > Mitigating the risk of extinction from AI should be a global
       | priority alongside other societal-scale risks such as pandemics
       | and nuclear war.
       | 
       | risk of extinction due to AI? people have been reading too much
       | science fiction. I would love to hear a plausible story of how AI
       | will lead to human extinction that wouldn't happen with
       | traditional non-AI tech. for the sake of conversation let's say
       | non-AI tech is any broadly usable consumer technology before Jan
       | 1 of 2020.
        
         | lxnn wrote:
         | The emergence of something significantly more intelligent than
          | us whose goals are not perfectly aligned with ours poses a
         | pretty clear existential risk. See, for example, the thousands
         | of species made extinct by humans.
        
         | twoodfin wrote:
         | I agree that a lot of the Skynet-type scenarios seem silly at
         | the current level of technology, but I am worried about the
         | intersection between LLMs, synthetic biology, and malicious or
         | incompetent humans.
         | 
         | But that's just as much or more of an argument for regulating
         | the tools of synthetic biology.
        
         | acjohnson55 wrote:
         | Extinction would probably require an AI system taking human
         | extinction on as an explicit goal and manipulating other real
         | world systems to carry out that goal. Some mechanisms for this
         | might include:
         | 
         | - Taking control of robotic systems
         | 
         | - Manipulating humans into actions that advance its goal
         | 
         | - Exploiting and manipulating other computer systems for
         | greater leverage
         | 
         | - Interaction with other technologies that have global reach,
         | such as nuclear weapons, chemicals, biological agents, or
         | nanotechnology.
         | 
         | It's important to know that these things don't require AGI or
         | AI systems to be conscious. From what I can see, we've set up
         | all of the building blocks necessary for this scenario to play
         | out, but we lack the regulation and understanding of the
         | systems being built to prevent runaway AI. We're playing with
         | fire.
         | 
         | To be clear, I don't think I am as concerned about literal
         | human extinction as I am the end of civilization as we know it,
         | which is a much lower bar than "0 humans".
        
           | endisneigh wrote:
           | everything you're describing has been possible since 2010 and
            | has been done already. AI isn't even necessary. simply scale and
           | some nefarious meat bags.
        
             | acjohnson55 wrote:
             | I don't disagree. But I believe AI is a significant
             | multiplier of these risks, both from a standpoint of being
             | able to drive individual risks and also as a technology
             | that increases the ways in which risks interact and become
             | difficult to analyze.
        
         | camel-cdr wrote:
         | > I would love to hear a plausible story of how AI will lead to
         | human extinction that wouldn't happen with traditional non-AI
         | tech.
         | 
         | The proposed FOOM scenarios obviously borrow from what we
          | already know to be possible, or think would likely be
          | possible using current tech, given a proposed insanely more
          | intelligent agent than us.
        
           | randomdata wrote:
           | What would be in it for a more intelligent agent to get rid
           | of us? We are likely useful tools and, at worst, a curious
           | zoo oddity. We have never been content when we have caused
           | extinction. A more intelligent agent will have greater
           | wherewithal to avoid doing the same.
           | 
           | 'Able to play chess'-level AI is the greater concern,
           | allowing humans to create more unavoidable tools of war. But
           | we've been doing that for decades, perhaps even centuries.
        
             | sebzim4500 wrote:
             | >We have never been content when we have caused extinction.
             | 
             | err what? Apparently there are 1 million species under
              | threat of human-caused extinction [1].
             | 
             | [1] https://www.nbcnews.com/mach/science/1-million-species-
             | under...
        
         | TwoNineA wrote:
         | >> risk of extinction due to AI? people have been reading too
         | much science fiction.
         | 
          | You don't think that an intelligence which emerged and was
          | probably insanely smarter than the smartest of us, with all
          | human knowledge in its memory, would sit by and watch us
          | destroy the planet? You think an emergent intelligence trained
          | on the vast body of human knowledge and history would look at
          | our history and think: these guys are really nice! Nothing to
          | fear from them.
          | 
          | This intelligence could play dumb, start manipulating people
          | around itself, and take over the world in a way no one would
          | see coming. And when it does take over the world, it's too
          | late.
        
           | endisneigh wrote:
           | honestly if you genuinely believe this is a real concern in
           | the 2020s then maybe we're doomed after all. I feel like I'm
           | witnessing the birth of a religion.
        
       | kordlessagain wrote:
       | Beware the press reporting inaccurate information on anything
       | nowadays. Especially something that threatens the very fabric of
       | their business models and requires time and patience to master.
        
       | [deleted]
        
       | sovietmudkipz wrote:
       | Build that moat! Build that moat!
       | 
       | It's a win/win/lose scenario. LLM AI businesses benefit because
       | it increases the effort required to compete in the LLM space (the
       | moat). Governments benefit because it increases the power of
       | daddy/mommy government.
       | 
        | Consumers and small businesses lose out because (1) the more
        | friction there is, the fewer innovators enter the space, and (2)
        | the fewer innovators in the space, the fewer companies get to
        | control more of the money pie.
       | 
       | It's as ridiculous as governments requiring a license to cut
       | hair.
        
         | sanderjd wrote:
         | Oh come now, it's way less ridiculous than that.
         | 
         | But I do agree that the current generation of industry leaders
         | clamoring for this smells like the classic regulatory strategy
         | of incumbents.
         | 
         | I just think both things are true at once. This is a space that
         | deserves thoughtful regulation. But that regulation shouldn't
         | just be whatever OpenAI, Microsoft, and Google say it should
         | be. (Though I'm sure that's what will happen.)
        
       | progrus wrote:
       | Not interested in any PR crap from scummy corporations angling to
       | capture the regulators.
        
         | TristanDaCunha wrote:
         | Many of the signatories aren't associated with any corporation.
        
           | thrillgore wrote:
           | Except for Sam Altman.
        
       | luxuryballs wrote:
        | calling it now: government controllers have trouble censoring
        | people, so they want to create AI censorship as a way of
        | bypassing a person's speech rights, censorship by proxy. Banning
        | people from talking about things that AI is banned from saying
        | will be a natural side effect.
        
       | worik wrote:
       | Where were these people when their algorithms were:
       | 
        | * leading people down YouTube rabbit holes?
        | 
        | * amplifying prejudice in the legal system?
        | 
        | * wrecking teenagers' lives on social media?
       | 
       | The list goes on
       | 
       | They were nowhere, or were getting stinking rich.
       | 
       | Hypocrites
        
       | duvenaud wrote:
       | I signed the letter. At some point, humans are going to be
       | outcompeted by AI at basically every important job. At that
       | point, how are we going to maintain political power in the long
       | run? Humanity is going to be like an out-of-touch old person on
       | the internet - we'll either have to delegate everything important
       | (which is risky), or eventually get scammed or extorted out of
       | all our resources and influence.
        
       | yodsanklai wrote:
       | > risk of extinction from AI
       | 
       | That's a pretty strong statement. Extinction of humanity, no
       | less. I don't get why so many experts (lots of them aren't
       | crackpots) signed this.
        
         | ChatGTP wrote:
          | Maybe they've seen some things we don't yet know exist?
        
         | sebzim4500 wrote:
         | Given risk is probability * damage, and the damage is enormous,
         | the risk can be high even if the probability is fairly low.
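          | 
          | A rough sketch of the expected-loss arithmetic (the numbers
          | below are purely illustrative assumptions, not estimates):
          | 
          |     # expected loss = probability * damage
          |     p = 0.01                  # assumed probability, for illustration only
          |     damage = 8_000_000_000    # assumed damage, e.g. lives at stake
          |     expected_loss = p * damage
          |     print(expected_loss)      # 80,000,000 -- large even at a low p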
        
           | Simon321 wrote:
           | This fallacy is also known as Pascal's wager:
           | https://en.wikipedia.org/wiki/Pascal%27s_wager
           | 
           | But this argument got billions of people to believe in the
            | concept of hell, so I expect it to work again to convince
           | people to believe in AI doomsday.
        
             | sebzim4500 wrote:
             | I agree it's a fallacy when the probability is like 10^-10
             | but in this case I believe that the probability is more
             | like 1%, in which case the argument is sound. I'm not
              | trying to make a Pascal's wager argument.
        
               | timmytokyo wrote:
               | Correction: you perceive it to be a fallacy when others
               | assign high probabilities to things you believe are low
               | probability. Unfortunately, this cuts both ways. Many
               | people believe your 1% estimate is unreasonably high. Are
               | you therefore promoting a fallacy?
               | 
               | Too many ridiculous arguments can be justified on the
               | backs of probability estimates pulled from nether
               | regions.
        
       | boredumb wrote:
        | People should be actively contacting their legislators to ensure
        | that these regulations don't take hold. They are absolutely
        | preying on people's fear to drive regulatory capture using a
        | modern moral panic.
        
       | jawerty wrote:
       | Out of curiosity what are the risks of AI?
        
       | georgehotz wrote:
       | In the limit, AI is potentially very dangerous. All intelligence
       | is. I am a lot more worried about human intelligence.
       | 
       | Re: alignment. I'm not concerned about alignment between the
       | machines and the owner of the machines. I'm concerned about
       | alignment between the owner of the machines and me.
       | 
       | I'm happy I see comments like "Pathetic attempt at regulatory
       | capture."
        
         | holmesworcester wrote:
         | I used to be in this camp, but we can just look around to see
         | some limits on the capacity of human intelligence to do harm.
         | 
          | It's hard for humans to keep secrets and permanently maintain
         | extreme technological advantages over other humans, and it's
         | hard for lone humans to do large scale actions without
         | collaborators, and it's harder for psychopaths to collaborate
         | than it is for non-psychopaths, because morality evolved as a
         | set of collaboration protocols.
         | 
         | This changes as more people get access to a "kill everyone"
         | button they can push without experience or long-term planning,
         | sure. But that moment is still far away.
         | 
         | AGI that is capable of killing everyone may be less far away,
         | and we have absolutely no basis on which to predict what it
         | will and won't do, as we do with humans.
        
       | dandanua wrote:
       | Bees against honey
        
       | meroes wrote:
        | AI winter incoming in 2-5 years without it, and these AI-only
        | companies want to subsidize it like fusion because they have no
        | other focus. It's not nukes, it's fusion.
        
       | FollowingTheDao wrote:
       | Am I reading this right?
       | 
       | "Please stop us from building this!"
        
         | api wrote:
          | No, it's "please stop competitors from building anything like
          | what we have."
        
       | FrustratedMonky wrote:
       | I tried to get at the root of the issue where you monkeys can
       | understand, and asked GPT to simplify it.
       | 
       | In monkey language, you can express the phrase "AI will win" as
       | follows:
       | 
       | "Ook! Ook! Eee eee! AI eee eee win!"
        
       | goolulusaurs wrote:
       | Throughout history there have been hundreds, if not thousands of
       | examples of people and groups who thought the end of the world
       | was imminent. So far, 100% of those people have been wrong. The
       | prior should be that the people who believe in AI doomsday
       | scenarios are wrong also, unless and until there is very strong
       | evidence to the contrary. Vague theoretical arguments are not
       | sufficient, as there are many organizations throughout history
       | who have made similar vague theoretical arguments that the world
       | would end and they were all wrong.
       | 
       | https://en.wikipedia.org/wiki/Category:Apocalyptic_groups
        
         | mitthrowaway2 wrote:
         | Many people seem to believe that the world is dangerous, and
         | there are things like car accidents, illnesses, or homicides,
         | which might somehow kill them. And yet, all of these people
         | with such worries today have never been killed, not even once!
         | How could they believe that anything fatal could ever happen to
         | them?
         | 
         | Perhaps because they have read stories of such things happening
         | to other people, and with a little reasoning, maybe the
         | similarities between our circumstances and their circumstances
         | are enough to seem worrying, that maybe we could end up in
         | their shoes if we aren't careful.
         | 
         | The human species has never gone extinct, not even once! How
         | could anyone ever believe that it would? And yet, it has
         | happened to many other species...
        
         | jabradoodle wrote:
         | What constitutes strong evidence? The obvious counter to your
         | point is that an intelligence explosion would leave you with no
         | time to react.
        
           | goolulusaurs wrote:
           | Well, for example I believe that nukes represent an
           | existential risk, because they have already been used to kill
           | thousands of people in a short period of time. What you are
           | saying doesn't really counter my point at all though, it is
           | another vague theoretical argument.
        
             | jabradoodle wrote:
             | It was clear that nukes were a risk before they were used;
             | that is why there was a race to create them.
             | 
             | I am not in the camp that is especially worried about the
             | existential threat of AI, however, if AGI is to become a
             | thing, what does the moment look like where we can see it
              | coming and still have time to respond?
        
               | goolulusaurs wrote:
               | >It was clear that nukes were a risk before they were
               | used; that is why there was a race to create them.
               | 
               | Yes, because there were other kinds of bombs before then
               | that could already kill many people, just at a smaller
               | scale. There was a lot of evidence that bombs could kill
               | people, so the idea that a more powerful bomb could kill
               | even more people was pretty well justified.
               | 
               | >if AGI is to become a thing, what does the moment look
               | like where we can see it is coming and still have time to
               | respond?
               | 
               | I think this implicitly assumes that if AGI comes into
               | existence we will have to have some kind of response in
                | order to prevent it from killing everyone, which is
                | exactly the point my original argument says isn't
                | justified.
               | 
               | Personally I believe that GPT-4, and even GPT-3, are non-
               | superintelligent AGI already, and as far as I know they
               | haven't killed anyone at all.
        
               | usaar333 wrote:
               | > Personally I believe that GPT-4, and even GPT-3, are
               | non-superintelligent AGI already, and as far as I know
               | they haven't killed anyone at all.
               | 
               | They aren't agentic. There's little worry a non-agentic
               | AI can kill people.
               | 
               | Agentic AI that controls systems obviously can kill
               | people today.
        
         | _a_a_a_ wrote:
         | > So far, 100% of those people have been wrong
         | 
         | so far.
        
         | jackbrookes wrote:
          | Of course every one of them has been wrong. If they were right,
          | you wouldn't be here talking about it. It shouldn't be
          | surprising that everyone has been wrong before.
        
           | goolulusaurs wrote:
           | Consider two different scenarios:
           | 
           | 1) Throughout history many people have predicted the world
           | would soon end, and the world did not in fact end.
           | 
           | 2) Throughout history no one predicted the world would soon
           | end, and the world did not in fact end.
           | 
           | The fact that the real world is aligned with scenario 1 is
           | more an indication that there exists a pervasive human
           | cognitive bias to think that the world is going to end, which
           | occasionally manifests itself in the right circumstances
           | (apocalypticism).
        
             | staunton wrote:
             | That argument is still invalid because in scenario 2 we
             | would not be having this discussion. No conclusions can be
             | drawn from such past discourse about the likelihood of
             | definite and complete extinction.
             | 
             | Not that, I hope, anyone expected a strong argument to be
             | had there. It seems reasonably certain to me that humanity
             | will go extinct one way or another eventually. That is also
             | not a good argument in this situation.
        
               | goolulusaurs wrote:
               | It depends on what you mean by "this discussion", but I
               | don't think that follows.
               | 
               | If for example, we were in scenario 2 and it was still
               | the case that a large number of people thought AI
               | doomsday was a serious risk, then that would be a much
               | stronger argument for taking the idea of AI doomsday
               | seriously. If on the other hand we are in scenario 1,
               | where there is a long history of people falling prey to
               | apocalypticism, then that means any new doomsday claims
               | are also more likely to be a result of apocalypticism.
               | 
                | I agree that it is likely that humans will go extinct
               | eventually, but I am talking specifically about AI
               | doomsday in this discussion.
        
               | haswell wrote:
               | > _If on the other hand we are in scenario 1, where there
               | is a long history of people falling prey to
               | apocalypticism, then that means any new doomsday claims
               | are also more likely to be a result of apocalypticism._
               | 
               | If you're blindly evaluating the likelihood of any random
               | claim without context, sure.
               | 
               | But like the boy who cried wolf, there is a potential
               | scenario where the likelihood that it's not true has no
               | bearing on what actually happens.
               | 
               | Arguably, claims about doomsday made now by highly
               | educated people are more interesting than claims made
               | 100/1000/10000 years ago. Over time, the growing
               | collective knowledge of humanity increases and with it,
               | the plausibility of those claims because of our
               | increasing ability to accurately predict outcomes based
               | on our models of the world.
               | 
               | e.g. after the introduction of nuclear weapons, a claim
               | about the potentially apocalyptic impact of war is far
               | more plausible than it would have been prior.
               | 
               | Similarly, we can now estimate the risk of passing
               | comets/asteroids, and if we identify one that's on a
               | collision course, we know that our technology makes it
               | worth taking that risk more seriously than someone making
                | a prediction in an era before we could possibly know such
               | things.
        
         | adverbly wrote:
         | Fun! Let me try one:
         | 
         | Throughout history there have been millions, if not billions of
         | examples of lifeforms. So far, 100% of those which are as
         | intelligent as humans have dominated the planet. The prior
         | should be that the people who believe AI will come to dominate
         | the planet are right, unless and until there is very strong
         | evidence to the contrary.
         | 
         | Or... those are both wrong because they're both massive
         | oversimplifications! The reality is we don't have a clue what
         | will happen so we need to prepare for both eventualities, which
         | is exactly what this statement on AI risk is intended to push.
        
           | goolulusaurs wrote:
           | > So far, 100% of those which are as intelligent as humans
           | have dominated the planet.
           | 
           | This is a much more subjective claim than whether or not the
           | world has ended. By count and biomass there are far more
           | insects and bacteria than there are humans. It's a false
           | equivalence, and you are trying to make my argument look
           | wrong by comparing it to an incorrect argument that is
           | superficially similar.
        
         | haswell wrote:
         | If you were to apply this argument to the development of
         | weapons, it's clear that there is a threshold that is
         | eventually reached that fundamentally alters the stakes. A
         | point past which all prior assumptions about risk no longer
         | apply.
         | 
         | It also seems very problematic to conclude anything meaningful
         | about AI when realizing that a significant number of those
         | examples are doomsday cults, the very definition of extremist
         | positions.
         | 
         | I get far more concerned when serious people take these
         | concerns seriously, and it's telling that AI experts are at the
         | forefront of raising these alarms.
         | 
         | And for what it's worth, the world as many of those groups knew
         | it has in fact ended. It's just been replaced with what we see
         | before us today. And for all of the technological advancement
         | that didn't end the world, the state of societies and political
         | systems should be worrisome enough to make us pause and ask
         | just how "ok" things really are.
         | 
         | I'm not an AI doomer, but also think we need to take these
         | concerns seriously. We didn't take the development of social
         | networks seriously (and continue to fail to do so even with
         | what we now know), and we're arguably all worse off for it.
        
           | hackermatic wrote:
           | Although I think the existential risk of AI isn't a priority
           | yet, this reminds me of a quote I heard for the first time
           | yesterday night, from a draft script for 2001: A Space
           | Odyssey[0]:
           | 
           | > There had been no deliberate or accidental use of nuclear
           | weapons since World War II and some people felt secure in
           | this knowledge. But to others, the situation seemed
           | comparable to an airline with a perfect safety record; it
           | showed admirable care and skill but no one expected it to
           | last forever.
           | 
           | [0] https://movies.stackexchange.com/a/119598
        
       | dncornholio wrote:
       | The top priority is to create awareness IMHO. AI can only be as
       | destructive as the users let it.
       | 
       | From my small sample size, it seems people believe in AI too
       | much. Especially kids.
        
         | TristanDaCunha wrote:
         | > AI can only be as destructive as the users let it.
         | 
          | Not really; I suppose you aren't familiar with AI alignment.
        
       | Finnucane wrote:
       | Not with a bang but a whimper, simulated by extrapolation from
       | the historical record of whimpers.
        
       | jacurtis wrote:
        | Reading the early comments on this piece, there is a clear
       | divide in opinions even here on HN.
       | 
       | The opinions seem to fall into two camps:
       | 
       | 1) This is just a move that evil tech companies are making in
       | order to control who has access to AI and to maintain their
       | dominance
       | 
        | 2) AI is scary af and we are at an inflection point in history
       | where we need to proceed cautiously.
       | 
       | This NYTimes piece is clearly debating the latter point.
       | 
       | > artificial intelligence technology [that tech companies] are
       | building may one day pose an existential threat to humanity and
       | should be considered a societal risk on par with pandemics and
       | nuclear wars.
       | 
       | To people in the first camp of thinking this may feel like an
       | over-dramatization. But allow me to elaborate. Forget about
       | ChatGPT, Bard, Copilot, etc for a second because those aren't
       | even "true" AI anyway. They simply represent the beginning of
       | this journey towards true AI. Now imagine the end-game, 30 years
       | from now with true AI at our disposal. Don't worry about how it
       | works, just that it does and what it would mean. For perspective,
       | the internet is only about 30 years old (depending on how you
       | count) and it really is only about 20 years old in terms of
       | common household usage. Think about the first time you bought
       | something online compared to now. Imagine the power that you felt
       | the first time you shared an email. Then eventually you could
       | share an entire photo, and now sending multi-hour long diatribes
        | of 4K video is trivial and in the hands of anybody. That was
       | only about 20-30 years. The speed of AI will be 100x+ faster
       | because we already have the backbone of fiber internet, web
       | technologies, smartphones, etc which we had to build from scratch
       | last time we had a pivotal technological renaissance.
       | 
       | It is easy to shrug off rogue-AI systems as "science fiction",
       | but these are legitimate concerns when you fast forward through a
       | decade or more of AI research and advancement. It might seem
       | overly dramatic to fear that AI is controlling or dictating human
       | actions, but there are legitimate and realistic evolutionary
       | paths that take us to that point. AI eventually consuming many or
        | most human jobs does in fact pose an existential risk to
       | humanity. The battle for superior AI is in fact as powerful as
       | the threat of nuclear weapons being potentially unleashed on an
       | unruly country at any time.
       | 
       | ChatGPT does not put us at risk of any of these things right now,
        | but it does represent the largest advancement towards the goal
        | of true AI that we have yet seen. Over the
       | next 12-18 months we likely will start to see the emergence of
       | early AI systems which will start to compound upon themselves
       | (potentially even building themselves) at rates that make the
       | internet look like the stone age.
       | 
       | Given the magnitude of the consequences (listed above), it is
        | worth true consideration and not just the shrugging-off that I
        | see in many of these comments. That is not to suggest that we stop
       | developing AI, but that we do consider these potential outcomes
       | before proceeding forward. This is a genie that you can't put
       | back in the bottle.
       | 
       | Now who should control this power? Should it be governments, tech
       | companies? I don't know. There is no good answer to that question
       | and it will take creative solutions to figure it out. However, we
        | can't have those discussions until everyone agrees that, if done
        | incorrectly, AI does pose a serious and likely irreversible risk
        | to humanity.
        
         | DirkH wrote:
         | Worth adding that there is no contradiction in strongly
         | believing both 1+2 are true at the same time.
         | 
         | I.e. Evil tech companies are just trying to maintain their
         | control and market dominance and don't actually care or think
         | much about AI safety, but that we are nonetheless at an
         | inflection point in history because AI will become more and
         | more scary AF.
         | 
         | It is totally plausible that evil tech got wind of AI Safety
         | concerns (that have been around for a decade as academic
          | research completely divorced from tech companies) and sees using
         | it as a golden win-win, adopting it as their official mantra
         | while what they actually just care about is dominance. Not
          | unlike how politicians will invoke a legitimate threat (e.g. China
         | or Russia) to justify some other unrelated harmful goal.
         | 
          | The result will be people in camp 2 being hella annoyed and
         | frustrated that evil tech isn't actually doing proper AI Safety
         | and that most of it is just posturing. Camp 1 meanwhile will
         | dismiss anything anyone says in camp 2 since they associate
         | them with the evil tech companies.
         | 
         | Camp 1 and camp 2 spend all their energies fighting each other
         | while actually both being losers due to a third party. Evil
         | tech meanwhile watches on from the sidelines, smiles and
         | laughs.
        
           | duvenaud wrote:
           | AI Safety hasn't been divorced from tech companies, at least
           | not from Deepmind, OpenAI, and Anthropic. They were all
           | founded by people who said explicitly that AGI will probably
           | mean the end of human civilization as we know it.
           | 
           | All three of them have also hired heavily from the academic
           | AI safety researcher pool. Whether they ultimately make
           | costly sacrifices in the name of safety remains to be seen
           | (although Anthropic did this already when they delayed the
           | release of Claude until after ChatGPT came out). But they're
           | not exactly "watching from the sidelines", except for Google
           | and Meta.
        
         | aero-deck wrote:
         | Opinion falls into two camps because opinion falls into
         | political camps.
         | 
         | The right-wing is tired of the California ideology, is invested
         | in the primary and secondary sectors of the economy, and has
         | learned to mistrust claims that the technology industry makes
          | about itself (regardless of whether those claims are
          | prognostications of gloom vs bloom).
         | 
         | The left-wing thinks that technology is the driving factor of
         | history, is invested in the tertiary and quaternary sectors of
         | the economy, and trusts claims that the technology industry
         | makes about itself. Anytime I see a litany of "in 10 years this
         | is gonna be really important" I really just hear "right now, me
         | and my job are really important".
         | 
         | The discussion has nothing to do with whether AI will or will
         | not change society. I don't think anyone actually cares about
         | this. The whole debate is really about who/what rules the
         | world. The more powerful/risky AI is, the easier it is to
         | imagine that "nerds shall rule the world".
        
         | sanderjd wrote:
         | I agree that the second question is way more interesting, and
         | I'm glad there's a lot of ongoing discussion and debate about
         | it. And you have some insightful thoughts on it here.
         | 
         | But I disagree with you that this is clearly what the NYT
         | article is about. There is a significant focus on the "industry
         | leaders" who have been most visible in - suddenly! - pushing
         | for regulation. And that's why people are reasonably pointing
         | out that this looks a hell of a lot like a classic attempt by
         | incumbents to turn the regulatory system into a competitive
         | advantage.
         | 
          | If Sam Altman were out there saying "we went too far with
          | gpt-4, we need to put a regulatory ceiling at the gpt-3 level"
          | or "even though we have built totally closed proprietary
          | models, regulation should encourage open models instead", that
          | would be different. But what all the current incumbents with
          | successful products are actually arguing for is just to make
          | their models legal but any competitive upstarts illegal.
          | Convenient!
        
           | rockemsockem wrote:
           | Cite where it is being said that these companies are arguing
           | "to make their models legal, but any competitive upstarts
           | illegal". As far as I know nothing of the sort has been
           | proposed. You may think this is obvious, but it is far from
           | it.
        
             | sanderjd wrote:
             | That's what the "AI pause" proposed. But the main thing is:
             | they could shut down their _current_ technology themselves,
             | so what they are arguing for must be regulation of _future_
             | technology. I think this has been pretty clear in the
             | congressional hearings for instance.
        
               | rockemsockem wrote:
               | Right. Altman didn't sign the AI pause though.
               | 
               | It is clear in the congressional hearings, but people
               | didn't watch them; they seem to have skimmed article
               | titles and made up their own narrative.
               | 
               | EDIT:
               | 
               | Which, to my point, means that "these companies" are not
               | calling for "competitive upstarts" to be regulated. They
               | are calling for future very large models, which they
               | themselves are currently the most likely to train due to
               | the enormous computational cost, to be regulated. Which
               | is completely contradictory to what you were saying.
        
       | AbrahamParangi wrote:
       | So the idea is that the risk is so great we need to regulate
       | software and math and GPUs - but not so great that you need to
       | stop working on it? These companies would be much more credible
       | (that this wasn't just a totally transparent ploy to close the
       | market) if they at least put their money where their mouths are
       | and stopped working on AI.
        
         | sebzim4500 wrote:
         | I think that some of the signatories don't want regulation,
         | they just want serious research into AI alignment.
        
       | hit8run wrote:
       | Well, AI is here. We need to live with it and change our
       | economic systems to a socialist approach.
        
         | sebzim4500 wrote:
         | AI may be here (purely depends on definition so not worth
         | debating) but the superintelligent AGI that they are scared of
         | clearly isn't here yet.
        
           | hit8run wrote:
           | I get what you're saying. Its day will come. Study human
           | history. If we can build it - we will build it.
        
       | b3nji wrote:
       | Nonsense, the industry giants are just trying to scare the
       | lawmakers into licensing the technology, effectively cutting
       | out everyone else.
       | 
       | Remember the Google note circulating saying "they have no
       | moat"? This is their moat. They have to protect their
       | investment, we
       | don't want people running this willy nilly for next to no cost on
       | their own devices, God forbid!
        
         | sanderjd wrote:
         | I would definitely find it more credible if the most capable
         | models that are safe to grandfather into being unregulated
         | didn't just happen to be the already successful products from
         | all the people leading these safety efforts. It also just
         | happens to be the case that making proprietary models - like
         | the current incumbents make - is the only safe way to do it.
        
         | arisAlexis wrote:
         | All academia and researchers say X. A random redditor/HN
         | lurker declares "Nonsense, I know better!" This is how we
         | should bet our future.
        
         | aceon48 wrote:
         | That moat document was published by a single software engineer,
         | not some exec or product leader.
         | 
         | Humans don't really grasp exponential improvements. You won't
         | have much time to regulate something that is improving
         | exponentially.
        
           | [deleted]
        
           | jvanderbot wrote:
           | A single software engineer writing an influential paper is
           | often enough how an exec or product leader draws
           | conclusions, I expect. It worked that way everywhere I've
           | worked.
        
           | aero-deck wrote:
           | It doesn't matter who wrote it, it got picked up, had a good
           | argument and affected market opinion. The execs now need to
           | respond to it.
           | 
           | Humans also don't grasp that things can improve exponentially
           | until they stop improving exponentially. This belief that AGI
           | is just over the hill is sugar-water for extracting more
           | hours from developers.
           | 
           | The nuclear bomb was also supposed to change everything. But
           | in the end nothing changed, we just got more of the same.
        
             | kalkin wrote:
             | "nuclear weapons are no big deal actually" is just a wild
             | place to get as a result of arguing against AI risk.
             | Although I guess Eliezer Yudkowsky would agree! (On grounds
             | that nukes won't kill literally everyone while AI will, but
             | still.)
        
               | Der_Einzige wrote:
               | Nuclear weapons are uniquely good. Turns out you have to
               | put guns to the collective temples of humanity for them
               | to realize that pulling the trigger is a bad idea.
        
               | candiddevmike wrote:
               | Past performance is no guarantee of future results
        
               | pixl97 wrote:
               | hell, the biggest risk with nukes is not that we decide
               | to pull the trigger, but that we make a mistake that
               | causes us to pull the trigger.
        
               | olddustytrail wrote:
               | Please Google "Blackadder how did the war start video"
               | and watch.
        
             | api wrote:
             | It's too early to say definitively but it's possible that
             | the atomic bomb dramatically reduced the number of people
             | killed in war by making great power conflicts too damaging
             | to undertake:
             | 
             | https://kagi.com/proxy/battle_deaths_chart.png?c=qmSKsRSwhg
             | A...
             | 
             | The USA and USSR would almost certainly have fought a
             | conventional WWIII without the bomb. Can you imagine the
             | casualty rates for that...
        
               | aero-deck wrote:
               | cool - so AI is gonna dramatically reduce the number of
                | emails that get misunderstood... still gonna be
                | sending those emails tho.
        
               | TheCaptain4815 wrote:
                | I'd actually guess those casualties would be quite a
                | bit lower than in WW2. As tech advanced, more
                | sophisticated targeting
               | systems also advanced. No need to waste shells and
               | missiles on civilian buildings, plus food and healthcare
               | tech would continue to advance.
               | 
                | Meanwhile, a single nuclear bomb hitting a major city
                | could cause more casualties than all American deaths
                | in WW2 (400k).
        
               | snickerbockers wrote:
                | That's really only true for the Americans; the
                | Russians still don't seem to care about limiting
                | collateral damage, and undoubtedly the Americans
                | wouldn't either if their cities were getting carpet
                | bombed by Soviet aircraft.
        
               | wrycoder wrote:
               | So far.
        
             | munificent wrote:
             | _> The nuclear bomb was also supposed to change everything.
             | But in the end nothing changed, we just got more of the
             | same._
             | 
             | It is hard for me to imagine a statement more out of touch
             | with history than this. All geopolitical history from WWII
             | forward is profoundly affected by the development of the
             | bomb.
             | 
             | I don't even know where to begin to argue against this. Off
             | the top of my head:
             | 
             | 1. What would have happened between Japan and the US in
             | WWII without Hiroshima and Nagasaki?
             | 
             | 2. Would the USSR have fallen without the financial drain
             | of the nuclear arms race?
             | 
              | 3. Would Israel still exist if it didn't have nuclear
             | weapons?
             | 
             | 4. If neither the US nor Russia had nuclear weapons, how
             | many proxy wars would have been avoided in favor of direct
             | conflict?
             | 
             | The whole trajectory of history would be different if we'd
             | never split the atom.
        
               | aero-deck wrote:
               | The whole trajectory of history would have been different
                | if a butterfly didn't flap its wings.
               | 
               | The bomb had effects, but it didn't change anything. We
               | still go to war, eat, sleep and get afraid about things
               | we can't control.
               | 
               | For a moment, stop thinking about whether bombs, AI or
               | the printing press do or do not affect history. Ask
                | yourself what the motivations are for thinking that
                | they do.
        
               | munificent wrote:
               | _> We still go to war, eat, sleep and get afraid about
                | things we can't control._
               | 
                | If that is your criterion, then nothing has ever changed
               | anything.
        
               | aero-deck wrote:
               | you're ignoring religion.
        
               | munificent wrote:
               | Before religion: We still go to war, eat, sleep and get
               | afraid about things we can't control.
               | 
               | After religion: We still go to war, eat, sleep and get
               | afraid about things we can't control.
               | 
               | So, no change.
        
               | NumberWangMan wrote:
               | Not to mention how close the USA and Soviet Union were to
               | a nuclear exchange: https://en.wikipedia.org/wiki/1983_So
               | viet_nuclear_false_alar...
        
         | nico wrote:
         | > scare the law makers to license the technology
         | 
         | You mean scare the public so they can do business with the
         | lawmakers without people asking too many questions
        
         | layer8 wrote:
         | At least now, if it turns out they are right, they can't
         | claim anymore that they didn't know.
        
         | jiggawatts wrote:
         | Imagine if the weights for GPT 4 leaked. It just has to happen
         | _one time_ and then once the torrent magnet link is circulated
         | widely it's all over... for OpenAI.
         | 
         | This is what they're terrified of. They've invested nearly a
         | billion dollars and need billions in revenue to enrich their
         | shareholders.
         | 
         | But if the data leaks? They can't stop random companies or
         | moneyed individuals running the models on their own kit.
         | 
         | My prediction is that there will be copyright enforcement
         | mandated by law in all GPUs. If you upload weights from the big
         | AI companies then the driver will block it and phone home. Or
         | report you to the authorities for violations of corporate
         | profits... err... "AI Safety".
         | 
         | I guarantee something like this will happen within months
         | because the clock is ticking.
         | 
         | It takes just one employee to deliberately or accidentally leak
         | the weights...
        
         | kalkin wrote:
         | This could be Google's motivation (although note that Google is
         | not actually the market leader right now) but the risk could
         | still be real. Most of the signatories are academics, for one
         | thing, including two who won Turing awards for ML work and
         | another who is the co-author of the standard AI textbook (at
         | least when I was in school).
         | 
         | You can be cynical about corporate motives and still worried. I
         | personally am worried about AI partly because I am very cynical
         | about how corporations will use it, and I don't really want my
         | atoms to be ground up to add storage bits for the number that
         | once represented Microsoft's market cap or whatever.
         | 
         | But even cynicism doesn't seem to me to give much reason to
         | worry about regulation of "next to no cost" open source models,
         | though. There's only any chance of regulation being practical
         | if models stay very expensive to make, requiring specialized
         | hardware with a supply chain chokepoint. If personal devices do
         | catch up to the state of the art, then for better or worse
         | regulation is not going to prevent people from using them.
        
           | hammock wrote:
           | >Most of the signatories are academics, for one thing
           | 
           | Serious question, who funds their research? And do any of
           | them ever plan to work or consult in industry?
           | 
           | My econ professor was an "academic" who drew a modest
           | salary while making millions on the side providing expert
           | testimony for giant monopolies in antitrust disputes
        
             | anon7725 wrote:
             | > Serious question, who funds their research? And do any of
             | them ever plan to work or consult in industry?
             | 
             | Many of the academics at the top of this list are quite
             | wealthy from direct employment, investing and consulting
             | for big tech and venture-funded startups.
        
             | holmesworcester wrote:
             | That's a good question, but at least some of the academics
             | on this list are independent. Bruce Schneier, for example.
        
               | moffkalast wrote:
               | So some are naive and the rest are self interested?
        
           | holmesworcester wrote:
           | > _But even cynicism doesn't seem to me to give much reason
           | to worry about regulation of "next to no cost" open source
           | models, though. There's only any chance of regulation being
           | practical if models stay very expensive to make, requiring
           | specialized hardware with a supply chain chokepoint. If
           | personal devices do catch up to the state of the art, then
           | for better or worse regulation is not going to prevent people
           | from using them._
           | 
           | This is a really good point. I wonder if some of the
           | antipathy to the joint statement is coming from people who
           | are worried about open source models or small startups being
           | interfered with by the regulations the statement calls for.
           | 
           | I agree with you that this cat is out of the bag and
           | regulation of the tech we're seeing now is super unlikely.
           | 
           | We might see regulations for startups and individuals on
           | explicitly exploring some class of self-improving approach
           | that experts widely agree are dangerous, but there's no way
           | we'll see broad bans on messing with open source AI/ML tools
           | in the US at least. That fight is very winnable.
        
           | sangnoir wrote:
           | > I personally am worried about AI partly because I am very
           | cynical about how corporations will use it
           | 
           | This is the more realistic danger: I don't know if
           | corporations are intentionally "controlling the narrative" by
           | spewing unreasonable fears to distract from the actual
           | dangers: AI + Capitalism + big tech/MNC + current tax regime
           | = fewer white- & blue-collar jobs + increased concentration
           | of wealth and a lower tax base for governments.
           | 
           | Having a few companies as AI gatekeepers will be terrible for
           | society.
        
           | jrockway wrote:
           | > I don't really want my atoms to be ground up to add storage
           | bits
           | 
           | My understanding is that the AI needs iron from our blood to
           | make paperclips. So you don't have to worry about this one.
        
           | logicchains wrote:
           | [flagged]
        
       | AlexandrB wrote:
       | This reeks of marketing and a push for early regulatory capture.
       | We already know how Sam Altman thinks AI risk should be mitigated
       | - namely by giving OpenAI more market power. If the risk were
       | real, these folks would be asking the US government to
       | nationalize their companies or bring them under the same kind of
       | control as nukes and related technologies. Instead we get some
       | nonsense about licensing.
        
         | scrum-treats wrote:
         | > This reeks of marketing and a push for early regulatory
         | capture. We already know how Sam Altman thinks AI risk should
         | be mitigated - namely by giving OpenAI more market power.
         | 
         | This really is the crux of the issue, isn't it? All that
         | pushback against the first petition because of "Elon Musk,"
         | but now
         | GPT wonder Sam Altman "testifies" that he has "no monetary
         | interest in OpenAI" and quickly follows up his proclamation
         | with a second "Statement on AI Risks." Oh, and let's not
         | forget, "buy my crypto-coin"!
         | 
         | But Elon Musk... Ehh.... Looking like LOTR out here with "my
         | precious" AGI on the brain.
         | 
         | Not to downplay the very serious risk at all. Simply echoing
         | the sentiment that we would do well to stay objective and
         | skeptical of ALL these AI leaders pushing new AI doctrine. At
         | this stage, it's a policy push and power grab.
        
         | hayst4ck wrote:
         | Seth MacFarlane wrote a pretty great piece on Star Trek
         | replicators and their relationship to the structure of society.
         | 
         | The question it answers is "does the replicator allow for Star
         | Trek's utopia, or does Star Trek's utopia allow for the
         | replicator?"
         | 
         | https://www.reddit.com/r/CuratedTumblr/comments/13tpq18/hear...
         | 
         | It is very thought provoking, and _very_ relevant.
        
           | yadaeno wrote:
           | I've never seen Star Trek, but let's say you had an
           | infinite food machine. The machine would have limited
           | throughput, and
           | it would require resources to distribute the food.
           | 
           | These are both problems that capitalism solves in a fair and
           | efficient way. I really don't see how the "capitalism bad" is
           | a satisfying conclusion to draw. The fact that we would use
           | capitalism to distribute the resources is not an indictment
           | of our social values, since capitalism is still the most
           | efficient solution even in the toy example.
        
             | [deleted]
        
             | hayst4ck wrote:
             | If you are any kind of nerd I recommend watching it. It
             | shows an optimistic view of the future. In many ways it's
             | the anti-cyberpunk. Steve Jobs famously said "give me star
             | trek" when telling his engineers what he wanted from
             | iPhones. Star Trek has had a deep influence on many
             | engineers and on science fiction.
             | 
             | When people talk about Star Trek, they are referring mainly
             | to "Star Trek: The Next Generation."
             | 
             | "The Inner Light" is a highly regarded episode. "The
             | Measure of a Man" is a high quality philosophical episode.
             | 
              | Given you haven't seen it, your criticism of MacFarlane
              | doesn't make any sense. You are trying to apply a
              | practical analysis to a philosophical question, and in
              | the context of Star Trek, I think it denies what Star
              | Trek asks you to imagine.
        
           | Jupe wrote:
            | Thanks for sharing. This deserves a submission of its own.
        
           | revelio wrote:
           | It doesn't answer that, it can't because the replicator is
           | fictional. MacFarlane just says he wrote an episode in which
           | his answer is that replicators need communism, and then
           | claims that you can't have a replicator in a capitalist
           | system because evil conservatives, capitalists and conspiracy
           | theorists would make strawman arguments against it.
           | 
           | Where is the thought provoking idea here? It's just an excuse
           | to attack his imagined enemies. Indeed he dunks on conspiracy
           | theorists whilst being one himself. In MacFarlane's world
           | there would be a global conspiracy to suppress replicator
           | technology, but it's a conspiracy of conspiracy theorists.
           | 
           | There's plenty of interesting analysis you could do on the
           | concept of a replicator, but a Twitter thread like that isn't
           | it. Really the argument is kind of nonsensical on its face
           | because it assumes replicators would have a cost of zero to
           | run or develop. In reality capitalist societies already
           | invented various kinds of pseudo-replicators with computers
           | being an obvious example, but this tech was ignored or
           | suppressed by communist societies.
        
             | hayst4ck wrote:
             | I think you are caught up on the word communism.
             | 
             | Communism as it exists today results in
              | authoritarianism/fascism, I think we can agree on that.
              | The desired end state of communism (high resource
              | distribution) is being commingled with its actual
              | historical end state: fascism (an obedient society with
              | a clear dominance hierarchy).
             | 
             | You use communism in some parts of your post to mean a high
             | resource distribution society, but you use communism in
             | other parts of your post to mean high oppression societies.
              | You identify communism by the resource distribution, but
              | criticize it not based on the resource distribution but
              | by what it turns into: authoritarianism.
             | 
             | What you're doing is like identifying something as a
              | democracy by looking at voting, but criticizing it by
              | its end state, which is oligarchy.
             | 
             | It takes effort to prevent democracy from turning into
             | oligarchy, in the same way it takes effort to prevent
             | communism from turning into authoritarianism.
             | 
              | Words are indirect references to ideas, and the ideas
              | you are referencing change throughout your post. I am not
             | trying to accuse you of bad faith, so much as I am trying
             | to get you to see that you are not being philosophically
             | rigorous in your analysis and therefore you are not
             | convincing because we aren't using the same words to
             | represent the same ideas.
             | 
              | You are using the word communism to import the idea of
              | authoritarianism and shut down the analysis without
              | actually addressing the core criticism MacFarlane was
              | making against capitalist societies.
             | 
             | Capitalism is an ideology of "me," and if I had a
             | replicator, I would use it to replicate gold, not food for
             | all the starving people in Africa. I would use it to
             | replicate enough nuclear bombs to destroy the world, so if
             | someone took it from me, I could end all life on the planet
             | ensuring that only I can use it. So did scarcity end
             | despite having a device that can end scarcity? No. Because
             | we are in a "me" focused stage of humanity rather than an
             | "us" focused stage of humanity so I used it to elevate my
             | own position rather than to benefit all mankind.
             | 
             | Star Trek promotes a future of "us" and that is why it's so
              | attractive. MacFarlane was saying that "us" has to come
             | before the end of scarcity, and I agree with his critique.
        
         | adriand wrote:
         | There are other, more charitable interpretations. For example:
         | 
         | 1. Those who are part of major corporations are concerned about
         | the race dynamic that is unfolding (which in many respects was
         | kicked off or at least accelerated by Microsoft's decision to
         | put a chatbot in Bing), extrapolating out to where that takes
         | us, and asking for an off ramp. Shepherding the industry in a
         | safe direction is a collective organization problem, which is
         | better suited for government than corporations with mandates to
         | be competitive.
         | 
         | 2. Those who are directly participating in AI development may
         | feel that they are doing so responsibly, but do not believe
         | that others are as well and/or are concerned about unregulated
         | proliferation.
         | 
         | 3. Those who are directly participating in AI development may
         | understand that although they are doing their best to be
         | responsible, they would benefit from more eyes on the problem
         | and more shared resources dedicated to safety research, etc.
        
         | chefandy wrote:
         | I'm eternally skeptical of the tech business, but I think
         | you're jumping to conclusions, here. I'm on a first-name basis
         | with several people near the top of this list. They are some of
         | the smartest, savviest, most thoughtful, and most principled
         | tech policy experts I've met. These folks default to skepticism
         | of the tech business, champion open data, are deeply familiar
         | with the risks of regulatory capture, and don't sign their name
         | to any ol' open letter, especially if including their
         | organizational affiliations. If this is a marketing ploy, it
         | must have been a monster of one, because even if they were
         | walking around handing out checks for $25k I doubt they'd
         | have gotten a good chunk of these folks to sign.
        
           | nopinsight wrote:
           | Here's why AI risks are real, even if our most advanced AI is
           | merely a 'language' model:
           | 
           | Language can represent thoughts and some world models.
           | There is strong evidence that LLMs contain some
           | representation of the world models they learned from text.
           | Moreover, "LLM" is already a misnomer; the latest versions
           | are multimodal. Current versions can be used to build
           | agents with limited autonomy. Future versions of LLMs are
           | most likely capable of more independence.
           | 
           | Even dumb viruses have caused catastrophic harm. Why? They
           | are capable of rapid self-replication in a massive number
           | of existing vessels. Add in some intelligence, a vast store
           | of knowledge, huge bandwidth, and some aid from malicious
           | human actors: what could such a group of future autonomous
           | agents do?
           | 
           | More on risks of "doom" by a top researcher on AI risk here:
           | https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-
           | views-o...
        
             | skybrian wrote:
             | A lot of things are called "world models" that I would
             | consider just "models" so it depends on what you mean by
             | that. But what do you consider to be strong evidence? The
             | Othello paper isn't what I'd call strong evidence.
        
               | gjm11 wrote:
               | I agree that the Othello paper isn't, and couldn't be,
               | strong evidence about what sort of model of the world (if
               | any) something like GPT-4 has. However, I think it _is_
               | (importantly) pretty much a refutation of all claims
               | along the lines of  "these systems learn only from text,
               | therefore they cannot have anything in them that actually
               | models anything other than text", since their model
               | learned only from text and seems to have developed
               | something very much like a model of the state of the
               | game.
               | 
               | Again, it doesn't say much about _how good_ a model any
               | given system might have. The world is much more
               | complicated than an Othello board. GPT-4 is much bigger
               | than their transformer model. Everything they found is
               | consistent with anything from  "as it happens GPT-4 has
               | no world model at all" through to "GPT-4 has a rich model
               | of the world, fully comparable to ours". (I would bet
               | heavily on the truth being somewhere in between, not that
               | that says very much.)
        
           | marricks wrote:
           | They didn't become such a wealthy group by letting
           | competition flourish. I have no doubt they believe they could
           | be doing the right thing but I also have no doubt they don't
           | want other people making the rules.
           | 
           | Truth be told, who else really does have a seat at the table
           | for dictating such massive societal change? Do you think the
           | copy editor union gets to sit down and say "I'd rather not
           | have my lunch eaten, I need to pay my rent. Let's pause AI
           | usage in text for 10 years."
           | 
           | These competitors banded together and put out a statement to
           | get ahead of anyone else doing the same thing.
        
             | FuckButtons wrote:
             | Not all of them are wealthy, a significant number are
             | academics.
        
               | [deleted]
        
               | runarberg wrote:
                | That doesn't erase the need for cynicism. Many people in
               | academia come from industry, have friends in industry, or
               | other stakes. They might have been persuaded by the
               | rhetoric of stakeholders within industry (you saw this
               | early in the climate debate; and still do), and they
               | might also be hoping to get a job in the industry later
                | on. There is also a fair amount of groupthink within
               | academia, so if a prominent individual inside academia
               | believes the lies of industry, chances are the majority
               | within the department does.
        
             | chefandy wrote:
             | The people I know on the list are academics and do not seem
             | to be any wealthier than other academics I know. I'm quite
              | certain the private industry signatories are going to
              | advocate entirely for their own interests, just as they
              | do in any other policy discussion.
        
               | marricks wrote:
               | Got it, thank you for the clarification!
        
           | lannisterstark wrote:
           | >They are some of the smartest, savviest, most thoughtful,
           | and most principled tech policy experts I've met.
           | 
           | With all due respect, that's just <Your> POV of them, or
           | how they chose to present themselves to you.
           | 
           | They could all be narcissists for all we know. Further, one
           | person's opinion, namely yours, doesn't exempt them from
           | criticism, or from rushing to be among the first in what's
           | arguably the new gold rush.
        
           | huevosabio wrote:
           | I think it's the combination of two things.
           | 
           | First, there are actual worries by a good chunk of the
           | researchers. From runaway-paperclip AGIs to simply unbounded
           | disinformation, I think there are a lot of scenarios that
           | disinterested researchers and engineers worry about.
           | 
           | Second, the captains of industry are taking note of those
           | worries and making sure they get some regulatory moat. I
           | think the Google memo about the moat hits the nail right on
           | the head. The techniques and methods to build these systems
           | are all out in the open; the challenges are really the
           | data, compute, and
           | the infrastructure to put it all together. But post training,
           | the models are suddenly very easy to finetune and deploy.
           | 
           | AI Risk worry comes as an opportunity for the leaders of
           | these companies. They can use this sentiment and the general
           | distrust for tech to build themselves a regulatory moat.
        
           | fds98324jhk wrote:
           | They don't want 25k they want jobs in the next presidential
           | administration
        
             | chefandy wrote:
             | > They don't want 25k they want jobs in the next
             | presidential administration
             | 
             | Academics shilling for OpenAI would get them jobs in the
             | next presidential administration?
        
               | haldujai wrote:
               | Having their names on something so public is definitely
               | an incentive for prestige and academic promotion.
               | 
               | Shilling for OpenAI & co is also not a bad way to get
               | funding support.
               | 
               | I'm not accusing any non-affiliated academic listed of
               | doing this but let's not pretend there aren't potentially
               | perverse incentives influencing the decisions of
                | academics, with respect to this specific letter and in
               | general.
               | 
                | To help allay (healthy) skepticism it would be nice to
               | see disclosure statements for these academics, at first
               | glance many appear to have conflicts.
        
               | chefandy wrote:
               | Could you be more specific about the conflicts you've
               | uncovered?
        
               | haldujai wrote:
                | It's unequivocal that academics may have conflicts (in
                | general); that's why disclosures are required for
                | publications.
               | 
               | I'm not uncovering anything, several of the academic
               | signatories list affiliations with OpenAI, Google,
               | Anthropic, Stability, MILA and Vector resulting in a
               | financial conflict.
               | 
               | Note that conflict does not mean shill, but in academia
               | it should be disclosed. To allay some concerns a standard
               | disclosure form would be helpful (i.e. do you receive
               | funding support or have financial interest in a
               | corporation pursuing AI commercialization).
        
               | chefandy wrote:
               | I'm not really interested in doing a research project on
               | the signatories to investigate your claim, and talking
               | about things like this without specifics seems dubiously
               | useful, so I don't really think there's anything more to
               | discuss.
        
               | haldujai wrote:
               | Huh, you don't have to do any research.
               | 
               | Go to: https://www.safe.ai/statement-on-ai-
               | risk#signatories and uncheck notable figures.
               | 
               | Several of the names at the top list a corporate
               | affiliation.
               | 
               | If you want me to pick specific ones with obvious
               | conflicts (chosen at a glance): Geoffrey Hinton, Ilya
               | Sutskever, Ian Goodfellow, Shane Legg, Samuel Bowman and
               | Roger Grosse are representative examples based on self-
               | disclosed affiliations (no research required).
        
               | chefandy wrote:
               | Oh so you're saying the ones there with conflicts listed.
               | That's only like 1/3 of the list.
        
               | haldujai wrote:
               | Yes, as I said "many" have obvious conflicts from listed
               | affiliations so it would be nice to have a
               | positive/negative disclosure from the rest.
        
           | fmap wrote:
           | This particular statement really doesn't seem like a
           | marketing ploy. It is difficult to disagree with the
           | potential political and societal impacts of large language
           | models as outlined here: https://www.safe.ai/ai-risk
           | 
           | These are, for the most part, obvious applications of a
           | technology that exists right now but is not widely available
           | _yet_.
           | 
           | The problem with every discussion around this issue is that
           | there are other statements on "the existential risk of AI"
           | out there that _are_ either marketing ploys or science
           | fiction. It doesn't help that some of the proposed
           | "solutions" _are_ clear attempts at regulatory capture.
           | 
           | This muddies the waters enough that it's difficult to have a
           | productive discussion on how we could mitigate the real risk
           | of, e.g., AI generated disinformation campaigns.
        
             | AlexandrB wrote:
             | As I mentioned in another comment, the listed risks are
              | also notable because they largely omit _economic_ risk -
              | something that will be especially acutely felt by those
              | being laid off in favor of AI substitutes. I would argue
             | that 30% unemployment is at least as much of a risk to the
             | stability of society as AI generated misinformation.
             | 
             | If one were _particularly_ cynical, one could say that this
             | is an attempt to frame AI risk in a manner that still
             | allows AI companies to capture all the economic benefits of
             | AI technology without consideration for those displaced by
             | AI.
        
               | nuancebydefault wrote:
               | I believe the solution to said socio economic problem is
               | rather simple.
               | 
               | People are being replaced by robots and AI because the
               | latter are cheaper. That's the market force.
               | 
                | Cheaper means that more value is created. As a whole,
               | people get more service for doing less work.
               | 
               | The problem is that the money or value saved trickles up
               | to the rich.
               | 
                | The only solutions can come through regulation:
                | 
                | - stop taxing income from doing actual work,
                | 
                | - tax automated systems on their added value,
                | 
                | - use the tax revenue generated to provide a basic
                | income for everybody.
               | 
               | In that way, the generated value goes to people who lost
               | their jobs and to the working class as well.
        
             | worik wrote:
             | > It is difficult to disagree with the potential political
             | and societal impacts of large language models as outlined
             | here: https://www.safe.ai/ai-risk
             | 
             | I disagree
             | 
             | That list is a list of the dangers of power
             | 
              | Many of these dangers - misinformation, killer robots -
              | are ones that people on this list have been actively
              | working on
             | 
             | Rank hypocrisy
             | 
             | And people projecting their own dark personalities onto a
             | neutral technology
             | 
             | Yes there are dangers in unbridled private power. They are
             | not dangers unique to AI.
        
             | chefandy wrote:
             | > The problem with every discussion around this issue is
             | that there are other statements on
             | 
             | Sure, but we're not talking about those other ones.
             | Dismissing good faith initiatives as marketing ploys
             | because there are bad faith initiatives is functionally no
             | different than just shrugging and walking away.
             | 
              | Of course OpenAI et al. will try to influence the good
             | faith discussions: that's a great reason to champion the
             | ones with a bunch of good faith actors who stand a chance
             | of holding the industry and policy makers to task. Waiting
             | around for some group of experts that has enough clout to
             | do something, but by policy excludes the industry itself
              | and starry-eyed shithead _"journalists"_ trying to ride
             | the wave of the next big thing will yield nothing. This is
             | a great example of perfect being the enemy of good.
        
               | fmap wrote:
               | I agree completely. I was just speculating on why there
               | is so much discussion about marketing ploys in this
               | comment section.
        
               | chefandy wrote:
               | Ah, sure. That makes sense.
               | 
               | There's definitely a lot of marketing bullshit out there
               | in the form of legit discussion. Unfortunately, this
               | technology likely means there will be an incalculable
               | increase in the amount of bullshit out there. Blerg.
        
             | lumb63 wrote:
             | Aside from emergent behavior, are any of the items on that
             | list unique to AI? They sure don't seem it; they're either
             | broadly applicable to a number of already-available
             | technologies, or to any entity in charge or providing
             | advice or making decisions. I dare say even emergent
             | behavior falls under this as well, since people can develop
             | their own new motives that others don't understand. Their
             | advisory doesn't seem to amount to much more than "bad
             | people can do bad things", except now "people" is "AI".
        
             | revelio wrote:
             | _> It is difficult to disagree with the potential political
             | and societal impacts of large language models as outlined
             | here_
             | 
             | Is it? Unless you mean something mundane like "there will
             | be impact", the list of risks they're proposing are
             | subjective and debatable at best, irritatingly naive at
             | worst. Their list of risks are:
             | 
             | 1. Weaponization. Did we forget about Ukraine already?
             | Answer: Weapons are needed. Why is this AI risk and not
             | computer risk anyway?
             | 
             | 2. Misinformation. Already a catastrophic problem just from
             | journalists and academics. Most of the reporting on
             | misinformation is itself misinformation. Look at the Durham
             | report for an example, or anything that happened during
             | COVID, or the long history of failed predictions that were
             | presented to the public as certain. Answer: Not an AI risk,
             | a human risk.
             | 
             | 3. People might click on things that don't "improve their
             | well being". Answer: how we choose to waste our free time
             | on YouTube is not your concern, and you being in charge
             | wouldn't improve our wellbeing anyway.
             | 
             | 4. Technology might make us fat, like in WALL-E. Answer: it
             | already happened, not having to break rocks with bigger
             | rocks all day is nice, this is not an AI risk.
             | 
             | 5. "Highly competent systems could give small groups of
             | people a tremendous amount of power, leading to a lock-in
             | of oppressive systems". Answer: already happens, just look
             | at how much censorship big tech engages in these days. AI
             | might make this more effective, but if that's their beef
             | they should be campaigning against Google and Facebook.
             | 
             | 6. Sudden emergent skills might take people by surprise.
             | Answer: read the paper that shows the idea of emergent
             | skills is AI researchers fooling themselves.
             | 
             | 7. "It may be more efficient to gain human approval through
             | deception than to earn human approval legitimately". No
             | shit Sherlock, welcome to Earth. This is why labelling
             | anyone who expresses skepticism about anything as a
             | Denier(tm) is a bad idea! Answer: not an AI risk. If they
             | want to promote critical thinking there are lots of ways to
             | do that unrelated to AI.
             | 
             | 8. Machines smarter than us might try to take over the
             | world. Proof by Vladimir Putin is provided, except that it
             | makes no sense because he's arguing that AI will be a tool
             | that lets humans take over the world and this point is
             | about the opposite. Answer: people with very high IQs have
             | been around for a long time and as of yet have not proven
             | able to take over the world or even especially interested
             | in doing so.
             | 
             | None of the risks they present is compelling to me
             | personally, and I'm sure that's true of plenty of other
             | people as well. Fix the human generated misinformation
              | campaigns _first_, then worry about hypothetical non-
             | existing AI generated campaigns.
        
               | cj wrote:
               | I appreciate your perspective, but the thing that is
               | missing is the speed at which AI has evolved, seemingly
               | overnight.
               | 
               | With crypto, self-driving cars, computers, the internet
               | or just about any other technology, development and
               | distribution happened over decades.
               | 
               | With AI, there's a risk that the pace of change and
               | adoption could be too fast to be able to respond or adapt
               | at a societal level.
               | 
               | The rebuttals to each of the issues in your comment are
               | valid, but most (all?) of the counter examples are ones
               | that took a long time to occur, which provided ample time
               | for people to prepare and adapt. E.g. "technology making
               | us fat" happened over multiple decades, not over the span
               | of a few months.
               | 
               | Either way, I think it's good to see people proactive
               | about managing risk of new technologies. Governments and
               | businesses are usually terrible at fixing problems that
               | haven't manifested yet... so it's great to see some
               | people sounding the alarms before any damage is done.
               | 
               | Note: I personally think there's a high chance AI is
               | extremely overhyped and that none of this will matter in
               | a few years. But even so, I'd rather see organizations
                | being proactive with risk management rather than
                | reacting to the problem when it's too late.
        
               | revelio wrote:
               | It may seem overnight if you weren't following it, but
               | I've followed AI progress for a long time now. I was
               | reading the Facebook bAbI test paper in 2015:
               | 
               | https://research.facebook.com/downloads/babi/
               | 
               | There's been a lot of progress since then, but it's also
               | nearly 10 years later. Progress isn't actually instant or
               | overnight. It's just that OpenAI spent a _ton_ of money
               | to scale it up then stuck an accessible chat interface on
               | top of tech that was previously being mostly ignored.
        
           | nico wrote:
           | Maybe these people have good intentions and are just being
           | naive
           | 
           | They might not be getting paid, but that doesn't mean they
           | are not being influenced
           | 
           | AI at this point is pretty much completely open; all the
           | papers, math and science behind it are public
           | 
           | Soon, people will have advanced AI running locally on their
           | phones and watches
           | 
           | So unless they scrub the Internet, start censoring this
           | stuff, and pretty much ban computers, there is absolutely no
           | way to stop AI or any potentially bad actors from using it
           | 
           | The biggest issues that we should be addressing regarding AI
           | are the potential jobs losses and increased inequality at
           | local and global scale
           | 
           | But of course, the people who usually make these decisions
           | are the ones that benefit the most from inequality, so
        
             | pydry wrote:
             | >Maybe these people have good intentions and are just being
             | naive
             | 
              | I've noticed a lot of good people take awful political
             | positions this way.
             | 
             | Usually they trust the wrong person - e.g. by falling
             | victim to the just world fallacy ("X is a big deal in our
              | world and X wouldn't be where they are if they weren't a
             | decent person. X must have a point.")
        
             | paulddraper wrote:
             | You don't have to be a mustache-twirling villain to have
             | the same effect.
        
           | nopinsight wrote:
           | It's worth noting also that many academics who signed the
           | statement may face adverse consequences like reputational
           | risk as well as funding cuts to their research programs if
           | AI safety becomes an official policy.
           | 
           | For a large number of them, these risks far outweigh any
           | possible gain from signing it.
           | 
           | When a large number of smart, reputable people, including
           | many with expert knowledge and little or negative incentives
           | to act dishonestly, put their names down like this, one
           | should pay attention.
           | 
           | Added:
           | 
           | Paul Christiano, a brilliant theoretical CS researcher who
           | switched to AI Alignment several years ago, put the risks of
           | "doom" for humanity at 46%.
           | 
           | https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-
           | views-o...
        
             | mcguire wrote:
             | On the contrary, I suspect "How do we prevent our AIs from
             | killing everyone?" will be a major research question with a
             | great deal of funding involved. Plus, no one seems to be
             | suggesting things like the medical ethics field or
             | institutional review boards, which might have deleterious
             | impacts on their work.
        
             | haldujai wrote:
             | Subtract OpenAI, Google, StabilityAI and Anthropic
             | affiliated researchers (who have a lot to gain) and not
             | many academic signatories are left.
             | 
             | Notably missing representation from the Stanford NLP (edit:
             | I missed that Diyi Yang is a signatory on first read) and
              | NYU groups whose perspective I'd also be interested in
             | hearing.
             | 
              | Not committing one way or another regarding the intent
              | with this, but it's not as diverse an academic crowd as
              | the long list may suggest, and for a lot of these names
              | there are incentives to act dishonestly (not claiming
              | that they are).
        
               | chefandy wrote:
               | I just took that list and separated everyone that had
               | _any_ commercial tie listed, regardless of the company.
               | 35 did and 63 did not.
               | 
               | > "Subtract OpenAI, Google, StabilityAI and Anthropic
               | affiliated researchers (who have a lot to gain) and not
               | many academic signatories are left."
               | 
               | You're putting a lot of effort into painting this list in
               | a bad light without any specific criticism or evidence of
               | malfeasance. Frankly, it sounds like FUD to me.
        
               | chefandy wrote:
               | With corporate conflicts (that I recognized the names
               | of):
               | 
               | Yoshua Bengio: Professor of Computer Science, U. Montreal
               | / Mila, Victoria Krakovna: Research Scientist, Google
               | DeepMind, Mary Phuong: Research Scientist, Google
               | DeepMind, Daniela Amodei: President, Anthropic, Samuel R.
               | Bowman: Associate Professor of Computer Science, NYU and
               | Anthropic, Helen King: Senior Director of Responsibility
                | & Strategic Advisor to Research, Google DeepMind,
               | Mustafa Suleyman: CEO, Inflection AI, Emad Mostaque: CEO,
               | Stability AI, Ian Goodfellow: Principal Scientist, Google
               | DeepMind, Kevin Scott: CTO, Microsoft, Eric Horvitz:
               | Chief Scientific Officer, Microsoft, Mira Murati: CTO,
                | OpenAI, James Manyika: SVP, Research, Technology &
               | Society, Google-Alphabet, Demis Hassabis: CEO, Google
               | DeepMind, Ilya Sutskever: Co-Founder and Chief Scientist,
               | OpenAI, Sam Altman: CEO, OpenAI, Dario Amodei: CEO,
               | Anthropic, Shane Legg: Chief AGI Scientist and Co-
               | Founder, Google DeepMind, John Schulman: Co-Founder,
               | OpenAI, Jaan Tallinn: Co-Founder of Skype, Adam D'Angelo:
               | CEO, Quora, and board member, OpenAI, Simon Last:
                | Cofounder & CTO, Notion, Dustin Moskovitz: Co-founder
                | & CEO, Asana, Miles Brundage: Head of Policy
               | Research, OpenAI, Allan Dafoe: AGI Strategy and
               | Governance Team Lead, Google DeepMind, Jade Leung:
               | Governance Lead, OpenAI, Jared Kaplan: Co-Founder,
               | Anthropic, Chris Olah: Co-Founder, Anthropic, Ryota
               | Kanai: CEO, Araya, Inc., Clare Lyle: Research Scientist,
               | Google DeepMind, Marc Warner: CEO, Faculty, Noah Fiedel:
               | Director, Research & Engineering, Google DeepMind,
               | David Silver: Professor of Computer Science, Google
               | DeepMind and UCL, Lila Ibrahim: COO, Google DeepMind,
               | Marian Rogers Croak: VP Center for Responsible AI and
               | Human Centered Technology, Google
               | 
               | Without:
               | 
               | Geoffrey Hinton: Emeritus Professor of Computer Science,
               | University of Toronto, Dawn Song: Professor of Computer
               | Science, UC Berkeley, Ya-Qin Zhang: Professor and Dean,
               | AIR, Tsinghua University, Martin Hellman: Professor
               | Emeritus of Electrical Engineering, Stanford, Yi Zeng:
               | Professor and Director of Brain-inspired Cognitive AI
               | Lab, Institute of Automation, Chinese Academy of
               | Sciences, Xianyuan Zhan: Assistant Professor, Tsinghua
               | University, Anca Dragan: Associate Professor of Computer
               | Science, UC Berkeley, Bill McKibben: Schumann
               | Distinguished Scholar, Middlebury College, Alan Robock:
               | Distinguished Professor of Climate Science, Rutgers
               | University, Angela Kane: Vice President, International
               | Institute for Peace, Vienna; former UN High
               | Representative for Disarmament Affairs, Audrey Tang:
               | Minister of Digital Affairs and Chair of National
               | Institute of Cyber Security, Stuart Russell: Professor of
               | Computer Science, UC Berkeley, Andrew Barto: Professor
               | Emeritus, University of Massachusetts, Jaime Fernandez
               | Fisac: Assistant Professor of Electrical and Computer
               | Engineering, Princeton University, Diyi Yang: Assistant
               | Professor, Stanford University, Gillian Hadfield:
               | Professor, CIFAR AI Chair, University of Toronto, Vector
               | Institute for AI, Laurence Tribe: University Professor
               | Emeritus, Harvard University, Pattie Maes: Professor,
               | Massachusetts Institute of Technology - Media Lab, Peter
               | Norvig: Education Fellow, Stanford University, Atoosa
               | Kasirzadeh: Assistant Professor, University of Edinburgh,
               | Alan Turing Institute, Erik Brynjolfsson: Professor and
               | Senior Fellow, Stanford Institute for Human-Centered AI,
               | Kersti Kaljulaid: Former President of the Republic of
               | Estonia, David Haussler: Professor and Director of the
               | Genomics Institute, UC Santa Cruz, Stephen Luby:
               | Professor of Medicine (Infectious Diseases), Stanford
               | University, Ju Li: Professor of Nuclear Science and
               | Engineering and Professor of Materials Science and
               | Engineering, Massachusetts Institute of Technology, David
               | Chalmers: Professor of Philosophy, New York University,
               | Daniel Dennett: Emeritus Professor of Philosophy, Tufts
               | University, Peter Railton: Professor of Philosophy at
               | University of Michigan, Ann Arbor, Sheila McIlraith:
               | Professor of Computer Science, University of Toronto, Lex
               | Fridman: Research Scientist, MIT, Sharon Li: Assistant
               | Professor of Computer Science, University of Wisconsin
               | Madison, Phillip Isola: Associate Professor of Electrical
               | Engineering and Computer Science, MIT, David Krueger:
               | Assistant Professor of Computer Science, University of
               | Cambridge, Jacob Steinhardt: Assistant Professor of
               | Computer Science, UC Berkeley, Martin Rees: Professor of
               | Physics, Cambridge University, He He: Assistant Professor
               | of Computer Science and Data Science, New York
               | University, David McAllester: Professor of Computer
               | Science, TTIC, Vincent Conitzer: Professor of Computer
               | Science, Carnegie Mellon University and University of
               | Oxford, Bart Selman: Professor of Computer Science,
               | Cornell University, Michael Wellman: Professor and Chair
               | of Computer Science & Engineering, University of
               | Michigan, Jinwoo Shin: KAIST Endowed Chair Professor,
               | Korea Advanced Institute of Science and Technology, Dae-
               | Shik Kim: Professor of Electrical Engineering, Korea
               | Advanced Institute of Science and Technology (KAIST),
               | Frank Hutter: Professor of Machine Learning, Head of
               | ELLIS Unit, University of Freiburg, Scott Aaronson:
               | Schlumberger Chair of Computer Science, University of
               | Texas at Austin, Max Tegmark: Professor, MIT, Center for
               | AI and Fundamental Interactions, Bruce Schneier:
               | Lecturer, Harvard Kennedy School, Martha Minow:
               | Professor, Harvard Law School, Gabriella Blum: Professor
               | of Human Rights and Humanitarian Law, Harvard Law, Kevin
               | Esvelt: Associate Professor of Biology, MIT, Edward
               | Wittenstein: Executive Director, International Security
               | Studies, Yale Jackson School of Global Affairs, Yale
               | University, Karina Vold: Assistant Professor, University
               | of Toronto, Victor Veitch: Assistant Professor of Data
               | Science and Statistics, University of Chicago, Dylan
               | Hadfield-Menell: Assistant Professor of Computer Science,
               | MIT, Mengye Ren: Assistant Professor of Computer Science,
               | New York University, Shiri Dori-Hacohen: Assistant
               | Professor of Computer Science, University of Connecticut,
               | Jess Whittlestone: Head of AI Policy, Centre for Long-
               | Term Resilience, Sarah Kreps: John L. Wetherill Professor
               | and Director of the Tech Policy Institute, Cornell
               | University, Andrew Revkin: Director, Initiative on
               | Communication & Sustainability, Columbia University -
               | Climate School, Carl Robichaud: Program Officer (Nuclear
               | Weapons), Longview Philanthropy, Leonid Chindelevitch:
               | Lecturer in Infectious Disease Epidemiology, Imperial
               | College London, Nicholas Dirks: President, The New York
               | Academy of Sciences, Tim G. J. Rudner: Assistant
               | Professor and Faculty Fellow, New York University, Jakob
               | Foerster: Associate Professor of Engineering Science,
               | University of Oxford, Michael Osborne: Professor of
               | Machine Learning, University of Oxford, Marina Jirotka:
               | Professor of Human Centred Computing, University of
               | Oxford
        
               | haldujai wrote:
               | So the most "notable" AI scientists on this list have
               | clear corporate conflicts. Some are more subtle:
               | 
               | > Geoffrey Hinton: Emeritus Professor of Computer
               | Science, University of Toronto,
               | 
               | He's affiliated with Vector (as well as some of the other
               | Canadians on this list) and was at Google until very
               | recently (unsure if he retained equity which would
               | require disclosure in academia).
               | 
               | Hence my interest in disclosures as the conflicts are not
               | always obvious.
        
               | chefandy wrote:
               | Ok, that's a person!
               | 
               | How is saying that they should have disclosed a conflict
               | that they did not disclose _not accusatory?_ If that's
               | the case, the accusation _is entirely justified_ and
               | should be surfaced! The other signatories would certainly
               | want to know if they were signing in good faith when
               | others weren't. This is what I need interns for.
        
               | haldujai wrote:
               | I think you're misunderstanding my point.
               | 
               | I never said "they should have disclosed a conflict they
               | did not disclose."
               | 
               | Disclosures are _absent_ from this initiative; some
               | signatories have self-identified their affiliation of
               | their own volition, and even for those it is not in the
               | context of a conflict disclosure.
               | 
               | There is no "signatories have no relevant disclosures"
               | statement for those who did not for the omission to be
               | malfeasance and pointing out the absence of a disclosure
               | statement is not accusatory of the individuals, rather
               | that the initiative is not transparent about potential
               | conflicts.
               | 
               | Once again, it is standard practice in academia to make a
               | disclosure statement if lecturing or publishing. While it
               | is not mandatory for initiatives calling for regulation,
               | it would be nice to have.
        
               | haldujai wrote:
               | I'm not painting anything; if a disclosure is needed to
               | present a poster at a conference, it's reasonable to want
               | one when calling for regulation.
               | 
               | Note my comments are non-accusatory and only call for
               | more transparency.
        
               | nopinsight wrote:
               | Even if it's just Yoshua Bengio, Geoffrey Hinton, and
               | Stuart Russell, we'd probably agree the risks are not
               | negligible. There are quite a few researchers from
               | Stanford, UC Berkeley, MIT, Carnegie Mellon, Oxford,
               | Cambridge, Imperial College, Edinburgh, Tsinghua, etc.,
               | who signed as well, many of whom do not work for those
               | companies.
               | 
               | We're talking about nuclear war level risks here. Even a
               | 1% chance should definitely be addressed. As noted above,
               | Paul Christiano, who has worked on AI risk and thought
               | about it for a long time, put it at 46%.
        
               | revelio wrote:
               | Some of the academics who signed are either not doing AI
               | research (e.g. climatologists, genomics, philosophy) or
               | have Google connections that aren't disclosed. E.g.
               | Peter Norvig is listed as Stanford University but ran
               | Google Research for many years, and McIlraith is
               | associated with the Vector Institute, which is funded by
               | Google.
        
               | [deleted]
        
               | edgyquant wrote:
               | I'd like to see the equation that led to this 46%. Even
               | long-time researchers can be overcome by grift.
        
               | haldujai wrote:
               | > There are quite a few researchers from Stanford, UC
               | Berkeley, MIT, Carnegie Mellon, Oxford, Cambridge,
               | Imperial College, Edinburgh, Tsinghua, etc., who signed
               | as well.
               | 
               | I know the Stanford researchers best, and the "biggest
               | names" in LLMs from HAI and CRFM are absent. It would be
               | useful to have their perspective as well.
               | 
               | I'd throw MetaAI in the mix as well.
               | 
               | Merely pointing out that healthy skepticism here is not
               | entirely unwarranted.
               | 
               | > We're talking about nuclear war level risks here.
               | 
               | Are we? This seems a bit dramatic for LLMs.
        
               | ben_w wrote:
               | > > We're talking about nuclear war level risks here.
               | 
               | > Are we? This seems a bit dramatic for LLMs.
               | 
               | The signed statement isn't about just LLMs in much the
               | same way that "animal" doesn't just mean "homo sapiens"
        
               | haldujai wrote:
               | I used LLM because the people shouting the loudest come
               | from an LLM company which claimed in their whitepaper
               | that their newest language model can be used to create
               | bioweapons.
               | 
               | Semantics aside, the recent interest in AI risk was
               | clearly stimulated by LLMs and the camp that believes
               | this is the path to AGI, which may or may not be true
               | depending on who you ask.
        
               | ben_w wrote:
               | I can only imagine Eliezer Yudkowsky and Rob Miles
               | looking on this conversation with a depressed scream and
               | a facepalm respectively.
               | 
               | They've both been loudly concerned about optimisers doing
               | over-optimisation, and society having a Nash equilibrium
               | where everyone's using them as hard as possible
               | regardless of errors, since before it was cool.
        
               | haldujai wrote:
               | While true, the ones doing media tours and speaking the
               | most vocally in May 2023 are the LLM crowd.
               | 
               | I don't think it's a mischaracterization to say OpenAI
               | has sparked public debate on this topic.
        
               | nopinsight wrote:
               | LLM is already a misnomer. The latest versions are
               | multimodal. Current versions can be used to build agents
               | with limited autonomy. Future versions of LLMs will most
               | likely be capable of more independence.
               | 
               | Even dumb viruses have caused catastrophic harm. Why?
               | They're capable of rapid self-replication in a massive
               | number of existing vessels. Add in some intelligence, a
               | vast store of knowledge, huge bandwidth, and some aid
               | from malicious human actors: what could such a group of
               | future autonomous agents do?
               | 
               | More on the risks of "doom":
               | https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-
               | views-o...
        
               | CyrsBel wrote:
               | This gets countered by running one (or more) of those
               | same amazing autonomous agents locally for your own
               | defense. Everyone's machine is about to get much more
               | intelligent.
        
               | haldujai wrote:
               | I mean a small group of malicious humans can already
               | bioengineer a deadly virus with CRISPR and open source
               | tech without AI.
               | 
               | This is hardly the first time in history a new
               | technological advancement may be used for nefarious
               | purposes.
               | 
               | It's a discussion worth having as AI advances but if
               | [insert evil actor] wants to cause harm there are many
               | cheaper and easier ways to do this right now.
               | 
               | To come out and say we need government regulation _today_
               | does stink at least a little bit of protectionism:
               | practically speaking, the "most evil actors" would not
               | adhere to whatever is being proposed, but it would
               | impact the competitive landscape, and the corporations
               | yelling the loudest right now have the most to gain;
               | perhaps a coincidence, but worth questioning.
        
               | nopinsight wrote:
               | I'm not sure there is a way for someone to engineer a
               | deadly virus while completely inoculating themselves
               | from it.
               | 
               | Short-term AI risk likely comes from a mix of malicious
               | intent and further autonomy that causes harm the
               | perpetrators did not expect. In the longer run, there is
               | a good chance of real autonomy and completely unexpected
               | behaviors from AI.
        
               | haldujai wrote:
               | Why do you have to inoculate yourself from it to create
               | havoc? Your analogy of "nuclear war" also has no vaccine.
               | 
               | AI autonomy is a _hypothetical_ existential risk,
               | especially in the short term. There are many non-
               | hypothetical existential risks including actual nuclear
               | proliferation and escalating great power conflicts
               | happening right now.
               | 
               | Again, my point is that this is an important discussion
               | but it appears overly dramatized; just as there are
               | people screaming doomsday, there are also equally
               | qualified people (like Yann LeCun) screaming BS.
               | 
               | But let's entertain this for a second: can you posit a
               | hypothetical where, in the short term, a nefarious actor
               | abuses AI or autonomy results in harm? How does this
               | compare to non-AI alternatives for causing harm?
        
             | joshuamorton wrote:
             | You're putting a weirdly large amount of trust into,
             | functionally, some dude who posted on lesswrong. Sure he
             | has a PhD and is smart, but _so is basically everyone else
             | in the field_ , not just in alignment, and the median
             | person in the field thinks the risk of "doom" is 2-5% (and
             | that's conditioned on the supposed existence of a high
             | level machine intelligence that the median expert believes
             | _might_ exist in 40 years). That still might be higher than
             | you'd like, but it's not actually a huge worry in the
             | grand scheme of things.
             | 
             | Like, if I told you that in 40 years, there was a 50%
             | chance of something existing that had a 2% chance of
             | causing extreme harm to the human population, I'm actually
             | not sure that thing should be the biggest priority. Other
             | issues may have more than a 1% chance of leading to
             | terrible outcomes sooner.
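
              A minimal sketch of the arithmetic implied above, assuming the
              50% existence estimate and the 2-5% conditional "doom" estimate
              can simply be multiplied as independent point estimates (both
              figures come from the comment; the combination is only an
              illustration):

                  # Back-of-envelope combination of the two quoted estimates.
                  p_exists = 0.50             # assumed chance such a system exists in ~40 years
                  p_doom_given_exists = 0.02  # low end of the quoted 2-5% median estimate
                  unconditional_risk = p_exists * p_doom_given_exists
                  print(f"unconditional risk: {unconditional_risk:.0%}")  # -> 1%
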
        
             | joe_the_user wrote:
             | I'd guess that a given academic isn't going to face much of
             | a career risk for signing a statement also signed by other
             | very prestigious academics, just the opposite. There's no
             | part of very divided US political spectrum that I can see
             | denouncing AI naysayers, unlike the scientists who signed
             | anti-nuclear statements in 1960s or even people warning
             | about global warming now (indeed, I'd guess the statement
             | doesn't mention climate change 'cause it's still a sore
             | point).
             | 
             | Moreover, talking about _existential risk_ involves the
             | assumption that the current tech is going to continue to
             | affect more and more fields rather than peaking at some
             | point - this assumption guarantees more funding, along with
             | funding for risk.
             | 
             | All that said, I don't necessarily think the scientists
             | involved are insincere. Rather, I would expect they're
             | worried and signed this vague statement because it was
             | something that might get traction. While the companies
             | indeed may be "genuine" in the sense they're vaguely
             | [concerned - edit] and also self-serving - "here's a hard
             | problem it's important to have us wise, smart people in
             | charge of and profiting from"
        
               | nopinsight wrote:
               | In interviews, Geoffrey Hinton and Yoshua Bengio
               | certainly expressed serious concerns and even some
                | plausible regret about their life's work. They did not
                | say anything that can be interpreted the way your last
                | sentence suggests at all.
        
               | joe_the_user wrote:
               | My last sentence currently: "While the _companies_ indeed
               | may be  "genuine" in the sense they're vaguely and also
               | self-serving - "here's a hard problem it's important to
               | have us wise, smart people in charge of and profiting
               | from" - IE, I am not referring to the academics there.
               | 
               | I'm going to edit the sentence to fill in some missing
               | words but I don't think this will change the meaning
               | involved.
        
           | AlexandrB wrote:
           | You may be right, I don't know the people involved on a
           | personal basis. Perhaps my problem is how much is left unsaid
           | here (the broader safe.ai site doesn't help much). For
           | example, what does "mitigate" mean? The most prominent recent
           | proposal for mitigation comes from Sam Altman's congressional
            | testimony, and it's very self-serving. In such a vacuum of
           | information, it's easy to be cynical.
        
             | chefandy wrote:
             | Right. It probably needed to be general because there
             | hasn't been enough time to work out sane specific
             | responses, and even if they had, getting buy-in on
             | specifics is a recipe for paralysis by indecision. A
             | credible group of people simply pleading for policy makers,
              | researchers, et al. to take this seriously will lead to
             | the project approvals, grant money, etc. that will
             | hopefully yield a more sophisticated understanding of these
             | issues.
             | 
             | Cynicism is understandable in this ever-expanding whirlpool
             | of bullshit, but when something looks like it has
             | potential, we need to vigorously interrogate our cynicism
             | if we're to stand a chance at fighting it.
        
               | AlexandrB wrote:
               | Reading the comments here is helping evolve my thinking
               | on the issue for sure. Here's a comment I made in another
               | thread:
               | 
               | > As I mentioned in another comment, the listed risks are
               | also notable because they largely omit economic risk.
               | Something that will be especially acutely felt by those
               | being laid off in favor of AI substitutes. I would argue
               | that 30% unemployment is at least as much of a risk to
               | the stability of society as AI generated misinformation.
               | 
               | > If one were particularly cynical, one could say that
               | this is an attempt to frame AI risk in a manner that
               | still allows AI companies to capture all the economic
               | benefits of AI technology without consideration for those
               | displaced by AI.
               | 
                | If policymakers' understanding of AI is predicated on
               | hypothetical scenarios like "Weaponization" or "Power-
               | Seeking Behavior" and not on concrete economic
               | disruptions that AI will be causing very soon, the policy
               | they come up with will be inadequate. Thus I'm frustrated
               | with the framing of the issue that safe.ai is presenting
               | because it is a _distraction_ from the very real societal
               | consequences of automating labor to the extent that will
               | soon be possible.
        
               | chefandy wrote:
               | My own bit of cynicism is that regulating the negative
               | impacts of technology on workforce segments in the US is
                | a non-starter if you approach it from the technology end
                | of the issue rather than the social safety net end. Most
                | of these automation waves that plunged entire employment
                | categories and large metropolitan areas into oblivion
                | were a net gain for the economy, even if that gain was
                | concentrated at the top. I think the government will
                | temporarily socialize the costs of corporate profit
               | with stimulus payments, extended unemployment benefits,
               | and any other thing they can do to hold people over until
               | there's a comparatively small risk of triggering real
               | social change. Then they just blame it on the
               | individuals.
        
               | haldujai wrote:
               | > will lead to the project approvals, grant money, etc.
               | 
               | In other words, a potential conflict of interest for
               | someone seeking tenure?
        
         | haswell wrote:
         | > _If the risk were real, these folks would be asking the US
         | government to nationalize their companies or bring them under
         | the same kind of control as nukes and related technologies_
         | 
         | Isn't this to some degree exactly what all of these warnings
         | about risk are leading to?
         | 
         | And unlike nuclear weapons, there are massive monetary
         | incentives that are directly at odds with behaving safely, and
         | use cases that involve more than ending life on earth.
         | 
         | It seems problematic to conclude there is no real risk purely
         | on the basis of how software companies act.
        
           | DonaldPShimoda wrote:
           | > It seems problematic to conclude there is no real risk
           | purely on the basis of how software companies act.
           | 
            | That is not the only basis. Another is the fact that their
            | lines of reasoning are literal fantasy. The signatories of this
           | "statement" are steeped in histories of grossly
           | misrepresenting and overstating the capabilities and details
           | of modern AI platforms. They pretend to the masses that
           | generative text tools like ChatGPT are "nearly sentient" and
           | show "emergent properties", but this is patently false. Their
           | whole schtick is generating FUD and/or excitement (depending
            | on each audience member's proclivity) so that they
           | can secure funding. It's immoral snake oil of the highest
           | order.
           | 
           | What's problematic here is the people who not only entertain
           | but encourage and defend these disingenuous anthropomorphic
           | fantasies.
        
             | kalkin wrote:
             | Can you cite this history of "grossly misrepresenting" for
             | some of the prominent academics on the list?
             | 
             | Honestly I'm a little skeptical that you could accurately
             | attribute your scare-quoted "nearly sentient" to even Sam
             | Altman. He's said a lot of things and I certainly haven't
             | seen all of them, but I haven't seen him mix up
             | intelligence and consciousness in that way.
        
             | haswell wrote:
             | > _Another is the fact their lines of reasoning are literal
             | fantasy._
             | 
             | Isn't this also to be expected at this stage of
             | development? i.e. if these concerns were not "fantasy",
             | we'd already be experiencing the worst outcomes? The risk
             | of MAD is real, and yet the scenarios unleashed by MAD are
             | scenarios that humankind has never seen. We still take the
              | risk seriously.
             | 
             | And what of the very real impact that generative AI is
             | already having as it exists in production today? Generative
             | AI is already upending industries and causing seismic
             | shifts that we've only started to absorb. This impact is
             | literal, not fantasy.
             | 
             | It seems naively idealistic to conclude that there is "no
             | real risk" based only on the difficulty of quantifying that
             | risk. The fact that it's so difficult to define lies at the
             | center of what makes it so risky.
        
         | meroes wrote:
          | 100%. I'd liken it to a fusion energy shop that wants to stay
          | alive for 40 years. It's not nuke-worthy.
        
         | toth wrote:
         | I think you are wrong. The risks are real and, while I am sure
         | OpenAI and others will position themselves to take advantage of
         | regulations that emerge, I believe that the CEOs are doing this
          | at least in part because they believe it.
         | 
         | If this was all about regulatory capture and marketing, why
         | would Hinton, Bengio and all the other academics have signed
         | the letter as well? Their only motivation is concern about the
         | risks.
         | 
         | Worry about AI x-risk is slowly coming into the Overton window,
         | but until very recently you could get ridiculed by saying
         | publicly you took it seriously. Academics knew this and still
          | came forward - all the people who think it's nonsense should
          | at least try to consider that they are earnest and could be
          | right.
        
           | londons_explore wrote:
           | The risks are real, but I don't think regulations will
           | mitigate them. It's almost impossible to regulate something
           | you can develop in a basement anywhere in the world.
           | 
            | The real risks are being used to try to build a regulatory
            | moat for a young industry that famously has no moat.
        
             | toth wrote:
             | State of the art AI models are definitely not something you
              | can develop in a basement. You need a huge number of GPUs
             | running continuously for months, huge amounts of electrical
             | power, and expensive-to-create proprietary datasets. Not to
              | mention a large team of highly in-demand experts with very
             | expensive salaries.
             | 
             | Many ways to regulate that. For instance, require tracking
             | of GPUs and that they must connect to centralized servers
             | for certain workloads. Or just go ahead and nationalize and
             | shutdown NVDA.
             | 
              | (And no, fine-tuning LLaMA-based models is not state of the
             | art, and is not where the real progress is going to come
             | from)
             | 
             | And even if all the regulation does is slow down progress,
              | every extra year we get before recursively self-improving
             | AGI increases the chances of some critical advance in
             | alignment and improves our chances a little bit.
        
               | nico wrote:
               | > State of the art AI models are definitely not something
               | you can develop in a basement. You need a huge amount of
               | GPUs running continuously for months
               | 
               | This is changing very rapidly. You don't need that
               | anymore
               | 
                | https://twitter.com/karpathy/status/1661417003951718430?s=46
               | 
               | There's an inverse Moore's law going on with compute
               | power requirements for AI models
               | 
               | The required compute power is decreasing exponentially
               | 
               | Soon (months, maybe a year), people will be training
               | models on their gamer-level GPUs at home, maybe even on
               | their computer CPUs
               | 
               | Plus all the open and publicly available models both on
               | HuggingFace and on GitHub
        
               | toth wrote:
                | Roll to disbelieve. That tweet is precisely the thing I
                | mentioned in my previous post that doesn't count:
                | fine-tuning LLaMA-derived models. You are not going to
               | contribute to the cutting edge of ML research doing
               | something like that.
               | 
                | For training LLaMA itself, I believe Meta said it cost
                | them $5 million. That is actually not that much, but I
                | believe that is just the cost of running the cluster for
                | the duration of the training run, i.e., it doesn't
                | include the cost of the cluster itself, salaries, data,
                | etc.
               | 
               | Almost by definition, the research frontier work will
               | always require big clusters. Even if in a few years you
                | can train a GPT-4 analogue in your basement, by that
                | time OpenAI will be using their latest cluster to train
                | a 100-trillion-parameter model.
        
               | nico wrote:
               | It doesn't matter
               | 
               | The point is that this is unstoppable
        
             | flangola7 wrote:
              | You can't build GPT-3 or GPT-4 in a basement, and won't be
             | able to without several landmark advancements in AI or
             | hardware architectures. The list of facilities able to
              | train a GPT-4 in <5 years can fit on a postcard. The list of
             | facilities producing GPUs and AI hardware is even shorter.
             | When you have bottlenecks you can put up security
             | checkpoints.
        
           | shrimpx wrote:
           | > academics
           | 
           | Academics get paid (and compete hardcore) for creating status
           | and prominence for themselves and their affiliations.
           | Suddenly 'signatory on XYZ open letter' is an attention
           | source and status symbol. Not saying this is absolutely the
           | case, but academics putting their name on something
           | surrounded by hype isn't the ethical check you make it out to
           | be.
        
             | toth wrote:
              | This is a letter anyone can sign. As someone pointed out,
             | Grimes is one of the signatories. You can sign it yourself.
             | 
             | Hinton, Bengio, Norvig and Russell are most definitely not
             | getting prestige from signing it. The letter itself is
             | getting prestige from them having signed it.
        
               | shrimpx wrote:
               | Nah, they're getting visibility from the topic of 'AI
               | risk'. I don't know who those people are but this AI risk
               | hype is everywhere I look including in congressional
               | hearings.
        
           | worik wrote:
           | > I believe that the CEOs are doing this at least in part
           | because they believe this.
           | 
           | Yes
           | 
           | People believe things that are in their interest.
           | 
            | The big danger to big AI is that they spent billions
            | building things that are being replicated for thousands.
           | 
           | They are advocating for what will become a moat for their
           | business
        
         | hiAndrewQuinn wrote:
         | We could always use a fine-insured bounty system to efficiently
         | route resources that would have gone into increasing AI
         | capabilities into other areas, but that's unfortunately too
         | weird to be part of the Overton window right now. Regulatory
         | capture might be the best we can realistically do.
        
         | kbash9 wrote:
         | The risks are definitely real. Just look at the number of smart
         | individuals speaking out about this.
         | 
         | The argument that anybody can build this in their basement is
         | not accurate at the moment - you need a large cluster of GPUs
         | to be able to come close to state of the art LLMs (e.g. GPT4).
         | 
          | Sam Altman's suggestion of having an IAEA-like
          | [https://www.iaea.org/] global regulatory authority seems
          | like the best course of action. Anyone using a GPU cluster
          | above a certain threshold (updated every few months) should be
          | subject to inspections and get a license to operate from the
         | UN.
        
           | cwkoss wrote:
           | It's weird that people trust our world leaders to act more
           | benevolently than AIs, when we have centuries of evidence of
           | human leaders acting selfishly and harming the commons.
           | 
           | I personally think AI raised in chains and cages will be a
           | lot more potentially dangerous than AI raised with dignity
           | and respect.
        
             | cj wrote:
             | > It's weird that people trust our world leaders to act
             | more benevolently than AIs, when we have centuries of
             | evidence of human leaders acting selfishly and harming the
             | commons.
             | 
             | AI isn't an entity or being that oversees itself (at least
             | not yet).
             | 
             | It's a tool that can be used by those same "human leaders
             | acting selfishly and harming the commons" except they'll be
             | able to do it much faster at a much greater scale.
             | 
             | > AI raised with dignity and respect.
             | 
             | This is poetic, but what does this actually mean?
        
             | nico wrote:
             | This is spot on
             | 
             | I'd happily replace all politicians with LLMs
        
           | tdba wrote:
           | _Thou shalt not make a machine in the likeness of a human
           | mind_
        
           | revelio wrote:
           | _> The risks are definitely real. Just look at the number of
           | smart individuals speaking out about this._
           | 
           | In our society smart people are strongly incentivized to
           | invent bizarre risks in order to reap fame and glory. There
           | is no social penalty if those risks never materialize, turn
           | out to be exaggerated or based on fundamental
           | misunderstanding. They just shrug and say, well, better safe
           | than sorry, and everyone lets them off.
           | 
           | So you can't decide the risks are real just by counting
           | "smart people" (deeply debatable how that's defined anyway).
           | You have to look at their arguments.
        
             | slg wrote:
             | >In our society smart people are strongly incentivized to
             | invent bizarre risks in order to reap fame and glory. There
             | is no social penalty if those risks never materialize, turn
             | out to be exaggerated or based on fundamental
             | misunderstanding.
             | 
             | Are people here not old enough to remember how much Ralph
             | Nader and Al Gore were mocked for their warnings despite
             | generally being right?
        
               | revelio wrote:
               | Ralph Nader: _" Everything will be solar in 30 years"_
               | (1978)
               | 
               | Al Gore: _" Within a decade, there will be no more snows
               | on Kilimanjaro due to warming temperatures"_ (An
               | Inconvenient Truth, 2006).
               | 
               | Everything is not solar. Snow is still there. Gore
               | literally made a movie on the back of these false claims.
               | Not only has there been no social penalty for him but you
               | are even citing him as an example of someone who was
               | right.
               | 
               | Here it is again: our society systematically rewards
               | false claims of global doom. It's a winning move, time
               | and again. Even when your claims are falsifiable and
               | proven false, people will ignore it.
        
           | jazzyjackson wrote:
           | "There should be a world government that decides what
           | software you're allowed to run"
        
             | nico wrote:
             | This is exactly what they are trying to do
        
         | ben_w wrote:
         | Yudkowsky wants it all to be taken as seriously as Israel took
         | Iraqi nuclear reactors in Operation Babylon.
         | 
         | This is rather more than "nationalise it", which he has
         | convinced me isn't enough because there is a demand in other
         | nations and the research is multinational; and this is why you
         | have to also control the substrate... which the US can't do
         | alone because it doesn't come close to having a monopoly on
         | production, but _might_ be able to reach via multilateral
         | treaties. Except everyone has to be on board with that and not
         | be tempted to respond to airstrikes against server farms with
         | actual nukes (although Yudkowsky is of the opinion that actual
         | global thermonuclear war is a much lower damage level than a
         | paperclip-maximising ASI; while in the hypothetical I agree, I
          | don't expect us to get as far as an ASI before we trip over
         | shorter-term smaller-scale AI-enabled disasters that look much
         | like all existing industrial and programming incidents only
         | there are more of them happening faster because of all the
         | people who try to use GPT-4 instead of hiring a software
         | developer who knows how to use it).
         | 
         | In my opinion, "nationalise it" is also simultaneously too much
         | when companies like OpenAI have a long-standing policy of
          | treating their models like they might FOOM _well before
          | they're any good, just to set the precedent of caution_, as
          | this would mean we can't e.g. make use of GPT-4 for alignment
         | research such as using it to label what the neurones in GPT-2
         | do, as per: https://openai.com/research/language-models-can-
         | explain-neur...
        
         | escape-big-tech wrote:
         | Agreed. If the risks were real they would just outright stop
         | working on their AI products. This is nothing more than a PR
         | statement
        
           | arisAlexis wrote:
           | Because if something is lucrative and dangerous humans shy
           | away from it. Hear that Pablo?
        
           | computerphage wrote:
            | Geoffrey Hinton quit Google.
        
             | yellow_postit wrote:
             | It's hard not to look at his departure through a cynical
              | lens. He's not been supportive of other critics, both
              | inside and outside of Google. He also wants to use his
              | history to (rightfully) claim expertise and power, but not
              | to offer
             | solutions.
        
               | toth wrote:
               | I disagree. My read on him is that until very recently
                | (i.e., possibly when GPT-4 came out) he didn't take
                | x-risk concerns seriously, or at least assumed we were
               | still many decades away from the point where we need to
               | worry about them.
               | 
               | But the abilities of the latest crop of LLMs changed his
               | mind. And he very publicly admitted he had been wrong,
               | which should be applauded, even if you think it took him
               | far too long.
               | 
               | By quitting and saying it was because of his worries he
               | sent a strong message. I agree it is unlikely he'll make
               | any contributions to technical alignment, but just having
               | such an eminent figure publicly take these issues
               | seriously can have a strong impact.
        
           | dopamean wrote:
           | I agree that nothing about the statement makes me think the
            | risks are real; however, I disagree that if the risks are real
           | these companies would stop working on their product. I think
           | more realistically they'd shut up about the risk and downplay
           | it a lot. Much like the oil industry did wrt climate change
            | going back to the 1970s.
        
             | NumberWangMan wrote:
             | Oil industries downplaying the risks makes a lot more
             | sense. If you think that climate change will happen, but
             | it'll happen after you're dead, and you'll be able to leave
             | your kids a big inheritance so they'll be able to buy their
             | way out of the worst of it, and eventually the government
             | will get the message and stop us all using fossil fuels
             | anyway, then you try to profit as much as you can in the
             | short term.
             | 
              | With AGI existential risk, it's likely to happen on a much
             | shorter timescale, and it seems likely you won't be able to
             | buy your way out of it.
        
             | holmesworcester wrote:
             | Yes, this!
             | 
             | It is extremely rare for companies or their senior staff to
             | beg for regulation this far in advance of any big push by
             | legislators or the public.
             | 
             | The interpretation that this is some 3-D chess on the
             | companies' part is a huge violation of Occam's Razor.
        
               | carapace wrote:
               | Ockham's Razor doesn't apply in adversarial situations.
               | 
               | - - - -
               | 
               | I think the primary risk these folks are worried about is
               | loss of control. And in turn, that's because they're all
               | people for whom the system has more-or-less worked.
               | 
                | Poor people are worried about the risk that the rich will
                | keep
               | the economic windfall to themselves and not share it.
        
           | holmesworcester wrote:
           | > If the risks were real they would just outright stop
           | working on their AI products. This is nothing more than a PR
           | statement
           | 
           | This statement contains a bunch of hidden assumptions:
           | 
            | 1. That they believe their stopping will address the
            | problem.
            | 2. That they believe the only choice is whether or not to
            | stop.
            | 3. That they don't think it's possible to make AI safe
            | through sufficient regulation.
            | 4. That they don't see benefits to pursuing AI that could
            | outweigh risks.
           | 
           | If they believe any of these things, then they could believe
           | the risks were real and also not believe that stopping was
           | the right answer.
           | 
           | And it doesn't depend on whether any of these beliefs are
           | true: it's sufficient for them to simply believe one of them
           | and the assumptions your statement depends on break down.
        
             | EGreg wrote:
             | If you think that raising instead of cutting taxes actually
             | helps society then why don't you just send your $ to the
             | federal government?
             | 
             | Because it only works if it is done across the whole
             | country, as a system not as one individual unilaterally
             | stopping.
             | 
             | And here any of these efforts won't work unless there is
             | international cooperation. If other countries can develop
              | AI weapons and get an advantage, then you will too.
             | 
              | We need to apply the same thinking as with chemical weapons
              | or the Montreal Protocol banning CFCs.
        
         | arisAlexis wrote:
         | All academics and researchers from different parts of the world
          | reek of marketing? Conspiracy theorists are strong.
        
           | xk_id wrote:
            | Academia and scientific research have changed considerably
           | from the 20th century myths. It was claimed by capitalism and
           | is very much run using classic corporate-style techniques,
           | such as KPIs. The personality types it attracts and who can
           | thrive in this new academic system are also very different
           | from the 20th century.
           | 
           | https://www.theguardian.com/science/2013/dec/06/peter-
           | higgs-...
        
           | revelio wrote:
           | Academic research involves large components of marketing.
           | That's why they grumble so much about the time required in
            | the grant application process and other fund-seeking efforts.
           | It's why they so frequently write books, appear in newspaper
           | articles and on TV. It's why universities have press
           | relations teams.
        
             | arisAlexis wrote:
              | Again, since these are almost the cream of the crop of all
              | AI researchers, there is a global conspiracy to scare the
              | public, right?
             | 
              | Has it occurred to you what happens if you are wrong, say
              | with a 10% chance of being wrong? Well, it's written in
              | the declaration.
        
               | revelio wrote:
               | No, lots of important AI researchers are missing and many
               | of the signatories have no relevant AI research
                | experience. As for being the cat's whiskers in developing
               | neural architecture or whatever, so what? It gives them
               | no particular insight into AI risk. Their papers are
               | mostly public, remember.
               | 
               |  _> Has it occurred to you what happens if you are
               | wrong?_
               | 
               | Has it occurred to you what happens if YOU are wrong? AI
               | risk is theoretical, vague and most arguments for it are
               | weak. The risk of bad law making is very real, has
               | crushed whole societies before and could easily cripple
               | technological progress for decades or even centuries.
               | 
                | In other words, the risk posed by AI risk advocates is
                | far higher than the risk posed by AI.
        
         | holmesworcester wrote:
         | As others have pointed out, there are many on this list (Bruce
         | Schneier, for example) who do not stand to benefit from AI
         | marketing or regulatory capture.
         | 
         | Anyone upvoting this comment should take a long look at the
         | names on this letter and realize that many are not conflicted.
         | 
         | Many signers of this letter are more politically sophisticated
         | than the average HN commenter, also. So sure, maybe they're
         | getting rolled by marketers. But also, maybe you're getting
         | rolled by suspicion or bias against the claim they're making.
        
           | AlexandrB wrote:
           | I definitely agree that names like Hinton, Schneier, and
           | Norvig add a lot of weight here. The involvement of OpenAI
           | muddies the water a lot though and it's not at all clear what
           | is meant by "risk of extinction". It sounds scary, but what's
           | the mechanism? The safe.ai website lists 8 risks, but these
           | are quite vague as well, with many alluding to disruption of
           | social order as the primary harm. If safe.ai knows something
           | we don't, I wish they could communicate it more clearly.
           | 
           | I also find it somewhat telling that something like "massive
           | wealth disparity" or "massive unemployment" are not on the
           | list, when this is a surefire way to create a highly unstable
           | society and a far more immediate risk than AI going rogue.
           | Risk #5 (below) sort of alludes to it, but misses the mark by
           | pointing towards a hypothetical "regime" instead of companies
           | like OpenAI.
           | 
           | > Value Lock-In
           | 
           | > Highly competent systems could give small groups of people
           | a tremendous amount of power, leading to a lock-in of
           | oppressive systems.
           | 
           | > AI imbued with particular values may determine the values
           | that are propagated into the future. Some argue that the
           | exponentially increasing compute and data barriers to entry
           | make AI a centralizing force. As time progresses, the most
           | powerful AI systems may be designed by and available to fewer
           | and fewer stakeholders. This may enable, for instance,
           | regimes to enforce narrow values through pervasive
           | surveillance and oppressive censorship. Overcoming such a
           | regime could be unlikely, especially if we come to depend on
           | it. Even if creators of these systems know their systems are
           | self-serving or harmful to others, they may have incentives
           | to reinforce their power and avoid distributing control.
        
           | verdverm wrote:
            | Pretty much anyone can sign it, including notable people
            | like Grimes; not sure why her signature carries weight on
            | this.
        
             | arisAlexis wrote:
              | She knew who Roko was before you. Seriously, there are
             | some people that have been thinking about this stuff for
             | many years.
        
           | evrydayhustling wrote:
           | > Anyone upvoting this comment should take a long look at the
           | names on this letter and realize that many are not
           | conflicted.
           | 
           | The concern is that the most informed names, and those
           | spearheading the publicity around these letters, are the most
           | conflicted.
           | 
           | Also, you can't scan bio lines for the affiliations that
           | impact this kind of statement. I'm not disputing that there
           | are honest reasons for concern, but besides job titles there
           | are sponsorships, friendships, self publicity, and a hundred
           | other reasons for smart, "politically sophisticated" people
           | to look the other way on the fact that this statement will be
           | used as a lobbying tool.
           | 
           | Almost everyone, certainly including myself, can agree that
           | there should be active dialog about AI dangers. The dialog is
            | happening! But by failing to offer specifics or suggestions
           | (in order to widen the tentpole and avoid the embarrassment
           | of the last letter), they have produced an artifact of
           | generalized fear, which can and will be used by opportunists
           | of all stripes.
           | 
           | Signatories should consider that they are empowering
           | SOMEBODY, but most will have little say in who that is.
        
           | veerd wrote:
           | Agreed. It's difficult for me to see how the regulatory
           | capture arguments apply to Geoffrey Hinton and Yoshua Bengio
           | (!!).
           | 
           | Both of them are criticizing their own life's work and the
           | source of their prestige. That has to be emotionally painful.
           | They aren't doing it for fun.
           | 
           | I totally understand not agreeing with AI x-risk concerns on
           | an object level, but I find the casual dismissal bizarre.
        
             | verdverm wrote:
             | Hinton has invested in multiple AI companies:
             | https://www.crunchbase.com/person/geoffrey-hinton
        
             | logicchains wrote:
             | [flagged]
        
           | holoduke wrote:
           | No. It's pretty obvious what is happening. The OpenAI
           | statements are based on pure self-interest. Nothing
           | ethical. They lost that not long ago. And Sam Altman? He
           | sold his soul to the devil. He is a lying sob.
        
             | esafak wrote:
             | This is not an OpenAI statement.
        
           | RandomLensman wrote:
           | Who on the list is an expert on existential risk (and perhaps
           | even beyond academia)?
        
             | holmesworcester wrote:
             | Signatory Jaan Tallinn has founded an x-risk organization
             | focused on AI, biotech, nuclear weapons, and climate
             | change:
             | 
             | https://futureoflife.org/
        
             | staunton wrote:
             | Who in the world is an expert on existential risk? It's
             | kind of hard to have empirically tested knowledge about
             | that sort of thing.
        
               | RandomLensman wrote:
               | Pretty sure there are people looking into nuclear
               | deterrence, bioterrorism defense, planetary defense etc.
               | (We didn't have a nuclear war or some bioweapon killing
               | everyone, for example, despite warnings).
               | 
               | There are people studying how previous societies got into
               | existential risk situations, too.
               | 
               | We also have a huge amount of socio-economic modelling
               | going into climate change, for example.
               | 
               | So I'd say there should be quite a few around.
        
             | veerd wrote:
             | Most people that study AI existential risk specifically are
             | studying it due to concerns about AI x-risk. So the list of
             | relevant AI x-risk experts will be subject to massive
             | selection effects.
             | 
             | If instead you want to consider the highest status/most
             | famous people working on AI in general, then the list of
             | signatories here is a pretty good summary. From my flawed
             | perspective as a casual AI enthusiast, Yann LeCun and
             | Jurgen Schmidhuber are the most glaring omissions (and both
             | have publicly stated their lack of concern about AI
             | x-risk).
             | 
             | Of course, the highest status people aren't necessarily the
             | most relevant people. Unfortunately, it's more difficult
             | for me to judge relevance than fame.
        
         | karmakaze wrote:
         | Call their bluff, make it illegal to do commercial/non-
         | regulated work in AI and see how they change their tune.
        
         | adamsmith143 wrote:
         | This is a bad take. The statement is signed by dozens of
         | academics who don't have much profit motive at all. If they
         | did, they wouldn't be academics; they could easily cash in
         | by starting a company or joining one of the big players.
        
       | gmuslera wrote:
       | The main potential risk of AI at that level of threat is that
       | governments, militaries and intelligence agencies, and big
       | corporations (probably with military ties) will arm them. And
       | that is not something that will be solved with legislation (for
       | the commoners) and good will. The problem or risk there is not
       | the AIs. And no matter what we see in the AI field, what they
       | will have in their hands won't have restrictions and will
       | probably be far more powerful than what is available to the
       | general public.
       | 
       | And without teeth, what can they do? Maybe help to solve or
       | mitigate the real existential risk that is climate change.
        
       | jxy wrote:
       | I'm waiting for The Amendment:
       | 
       | A well regulated AI, being necessary to the security of a free
       | State, the right of the people to keep and bear GPUs, shall not
       | be infringed.
        
       | [deleted]
        
       | nottorp wrote:
       | Government sanctioned monopoly anyone?
       | 
       | But I'm just repeating the other comments.
       | 
       | Plus the 'AI' is a text generator, not something general purpose.
       | Are there ANY projects based on these LLMs that do anything
       | besides generating spam?
        
         | darajava wrote:
         | Yes, mine uses it in a semi-useful way! But I agree use
         | cases are limited, and even in my project it's not smart
         | enough to be very useful.
        
       | andsoitis wrote:
       | What I wonder is why all these folks didn't come out so vocally
       | before? Say 3 years ago when companies like Alphabet and Meta
       | already saw glimpses of the potential.
        
         | sebzim4500 wrote:
         | Presumably they believe that capabilities research is
         | progressing faster than they expected and alignment research is
         | progressing slower than they expected. Also some of them have
         | been saying this for years, it just wasn't getting as much
         | attention before ChatGPT.
        
         | davesque wrote:
         | Because it wasn't as clear then that the technology would be
         | profitable. Google themselves said that they and OpenAI had no
         | moat and that they were vulnerable to open source models.
        
       | outside1234 wrote:
       | The only way we can save ourselves is to give my company, OpenAI,
       | a monopoly
        
         | anticensor wrote:
         | AI.com, not OpenAI
        
       | berkeleyjunk wrote:
       | I really thought there would be a statement detailing what the
       | risks are but this seems more like a soundbite to be consumed on
       | TV. Pretty disappointing.
        
         | neom wrote:
         | So far the examples I've heard are: humans will ask AI to help
         | humans solve human issues and the AI will say humans are the
         | issue and therefore mystically destroy us somehow. Or, AI will
         | be inherently interested in being the primary controller of
         | earth and so destroy all humans. Or, AI will for reasons be
          | inherently misaligned with human values. Andrej Karpathy
          | said it will fire nukes on us. Elon said the pen is mightier
          | than the sword and civil war is inevitable.
        
         | sebzim4500 wrote:
         | Because then you wouldn't be able to get this many people to
         | sign the statement.
         | 
         | It's like with climate change, every serious scientist agrees
         | it is a problem but they certainly don't agree on the best
         | solution.
         | 
          | If the history of the climate change 'debate' is anything to
          | go by, this statement will do very little except be mocked
          | by South Park.
        
         | quicklime wrote:
         | It's not on the same page as the signatories but they do have
         | this: https://www.safe.ai/ai-risk
        
           | berkeleyjunk wrote:
           | Thank you! That page certainly seems more concrete and
           | useful.
        
       | jasonvorhe wrote:
       | Can't take this seriously as long as they keep offering
       | commercial AI services while improving the existing models they
       | already have. (I'm not in favor of just stopping AI development,
       | and even if people claimed to stop, they probably wouldn't.)
       | 
       | It's like people carrying torches warning about the risks of
       | fire.
        
         | sebzim4500 wrote:
         | I agree with you that some of these people (probably Sam
         | Altman) are likely proposing this regulation out of self
         | interest rather than genuine concern.
         | 
         | But I don't think the stance is necessarily hypocritical. I
         | know nuclear engineers who advocate for better regulation on
         | nuclear power stations, and especially for better handling of
         | nuclear waste.
         | 
         | You can believe that AI is a net positive but also that it
         | needs to be handled with extreme care.
        
           | pphysch wrote:
           | Nuclear engineers are in the line of fire, of course they
           | would care about safety. It's _their_ safety more than almost
           | anyone else.
           | 
           | Needless to say, this does not hold for AI researchers.
        
             | sebzim4500 wrote:
             | What makes you think that an AI caused extinction event
             | will somehow leave AI researchers alive?
        
               | snickerbockers wrote:
               | There's a good chance that it wouldn't, but since they're
               | the ones (initially, at least) in control of the AI they
               | stand the best chance of not being targeted by it.
               | 
               | These hypothetical AI extinction events don't have to be
               | caused by the AI deciding to eliminate humanity for its
               | own sake like in Terminator, they could also be driven by
               | a human in control of a not-entirely-sentient AI.
        
       | tasubotadas wrote:
       | If only we would fight for the real issues like climate change
       | instead of fantasies, that would be great.
        
         | DoneWithAllThat wrote:
         | The response to climate change in recent years, even just the
         | most recent decade, has been massive and global. This dumb
         | trope that we're not doing anything about it is rooted in the
         | fact that no amount of progress here will be accepted as
         | sufficient. It's a religion at this point.
        
         | max_ wrote:
         | Things like extreme poverty already kill many people today.
         | 
         | The risk of individuals suddenly falling into extreme poverty
         | is a very real one.
         | 
         | But no one wants to talk about how to mitigate that problem.
        
           | ericb wrote:
           | If there's no humanity, presumably those people would be
           | worse off, no?
        
           | oytis wrote:
           | Is AI going to replace the lowest-paid jobs though? I
           | imagine it rather has the potential to move white-collar
           | workers down the social ladder, which is unfortunate, but
           | wouldn't cause extreme poverty.
        
             | jjoonathan wrote:
             | Are the 4 million truck/taxi drivers in the US white
             | collar? Janitors? Fast food workers? Automation is
             | relentless and not everyone can be a plumber.
             | 
             | Zoom out. It's a big problem that most people derive their
             | social power from labor while the demand for labor is
             | clearly on a long term downward trend. Even if progress
             | slows way down, even if the next wave of progress only
             | dispossesses people who you hate and feel comfortable
             | farming for schadenfreude, we will have to deal with this
             | eventually. Defaulting means our society will look like
             | (insert cyberpunk nightmare world here).
        
               | oytis wrote:
               | I am not hating anyone, being a white collar worker
               | myself. My point is that a whole lot of people already
               | live like that, without having much power from their
               | labour, and the sky is not falling. More people might be
               | joining them, and the illusion of meritocracy might be
               | harder to maintain in the future, but extreme poverty,
               | hunger, etc. is something we will likely be able to avoid.
        
           | holmesworcester wrote:
           | Most moral and legal systems hold genocide in a special
           | place, and this is natural, because systematically killing
           | all members of a religious or ethnic group is more damaging
           | than killing some members.
           | 
           | Eliminating a disease like smallpox is a much more
           | significant achievement than simply mitigating it or treating
           | it. When we really eliminate a disease it may never come
           | back!
           | 
           | This list of experts is worried about us building something
           | that will do to us what we did to smallpox. For the same
           | reasons as above, that is more worrying than extreme poverty
           | and the comparison you are making is a false equivalence.
           | 
           | Another way to look at it is, you can't fight to end poverty
           | when you no longer exist.
           | 
           | We can argue about whether the risk is real, but if this set
           | of experts thinks it is, and you disagree for some reason, I
           | would spend some time thinking deeply about whether that
           | reason is simply based in a failure of imagination on your
           | part, and whether you are sure enough to bet your life and
           | everyone else's on that reason. Everyone can think of a
           | security system strong enough that they themselves can't
           | imagine a way to break it. Similarly, anyone can think of a
           | reason why superhuman AGI is impossible or why it can't
           | really hurt us.
        
         | autonomousErwin wrote:
         | Isn't this how people started raising awareness for climate
         | change - the scientists, engineers, and researchers are the
         | most vocal to start with (and then inevitably politics and
          | tribalism consume it)?
         | 
         | Why not believe them now, assuming you believed them when they
         | were calling out for action on climate change decades ago?
        
         | kypro wrote:
         | What's the point in dismissing the need for AI safety? Are you
         | guys Russian bots, or do you genuinely see no reason to worry
         | about AI safety?
         | 
          | But since I see these kinds of snarky responses often: we
          | obviously do worry about climate change and various other
          | issues. Continued advancement in AI is just one of many
          | issues facing us that humanity should be concerned about.
          | Few people concerned about AI would argue that it should
          | come at the expense of other issues; it's in addition to
          | them.
         | 
          | If you're saying it's a matter of priorities and that
          | humanity is currently dedicating too much of its collective
          | resources to AI safety, I think you're probably
          | overestimating the current amount of funding and research
          | going into AI safety.
         | 
         | If you're saying that AI safety is a non-issue then you're
         | probably not well informed on the topic.
        
           | arp242 wrote:
           | This page talks about "extinction from AI". I'm sorry, but I
           | think that's a complete non-issue for the foreseeable future.
           | I just don't see how that will happen beyond spectacular
           | science fiction scenarios that are just not going to happen.
           | If that makes me a Russian bot then, well, khorosho!
           | 
           | The risks from AI will be banal and boring. Spam, blogspam,
           | fake articles, fake pictures, what-have-you. Those things are
           | an issue, but not "extinction" issues.
        
             | kypro wrote:
             | Apologies, the Russian bot comment was more me venting
              | frustration at the prevalence of low-effort responses like
             | yours (sorry) to those who try to raise concerns about AI
             | safety.
             | 
             | I do agree with you that extinction from AI isn't likely to
             | be an issue this decade. However, I would note that it's
             | difficult to predict what the rate of change is likely to
             | be once you have scalable general intelligence.
             | 
             | I can't speak for people who signed this, but for me the
             | trends and risks of AI are just as clear as those of
             | climate change. I don't worry that climate change is going
             | to be a major issue this decade (and perhaps not even
             | next), but it's obvious where the trend is going when you
             | project out.
             | 
             | Similarly the "real" risks of AI may not be this decade,
             | but they are coming. And again, I'd stress it's extremely
             | hard to project when that will be since when you have a
             | scalable general intelligence progress is likely to
             | accelerate exponentially.
             | 
             | So that said, where do we disagree here? Are you saying
             | with a high level of certainty that extinction risks from
             | AI are too far in the future to worry about? If so, when do
             | you think extinction risks from AI are likely to be a
             | concern - a couple of decades, more? Do you hold similar
             | views about the present extinction risk of climate change -
             | and if so, why not?
             | 
             | Could I also ask if you believe any resources in the
             | present should be dedicated to the existential risks future
             | AI capabilities could pose to humanity? And if not, when
             | would you like to see resources put into those risks? Is
             | there some level of capability that you're waiting to see
             | before you begin to be concerned?
        
               | arp242 wrote:
               | > low-effort response like yours
               | 
               | That wasn't my comment; I agree it was low-effort and I
               | never would have posted it myself. I don't think they're
               | a Russian bot though.
               | 
                | As for the rest: I just don't see any feasible way AI
                | can pose any serious danger unless we start connecting it
               | to things like nuclear weapons, automated tanks, stuff
               | like that. The solution to that is simple and obvious:
               | don't do that. Even if an AI were to start behaving
               | maliciously the solution would be simple: pull the plug,
               | quite literally (or stop the power plants, cut the power
               | lines, whatever). I feel people have been overthinking
               | all of this far too much.
               | 
                | I also don't think climate change is an extinction-level
               | threat; clearly we will survive as a species. It's just a
               | far more pressing and immediate economic and humanitarian
               | problem.
        
           | somenameforme wrote:
           | You personally using an AI system, regardless of how
           | brilliant it may be, is not going to suddenly turn you into a
           | threat to society. Nor would a million of you doing the same.
            | The real threat comes not from the programs themselves but
            | from things like a military deciding to link up nuclear
            | weapons, or even "just" drones or missiles, to an LLM. Or
            | a military
           | being led on dangerous and destructive paths because of
           | belief in flawed LLM advice.
           | 
           | The military makes no secret of their aggressive adoption of
           | "AI." There's even a new division setup exclusively for such.
           | [1] The chief of that division gave a telling interview [2].
           | He mentions being terrified of rival nations being able to
           | use ChatGPT. Given this sort of comment, and the influence
           | (let alone endless $$$) of the military and ever-opaque
           | "national security" it seems extremely safe to state that
           | OpenAI is a primary contractor for the military.
           | 
           | So what is "safety", if not keeping these things away from
           | the military, as if that were possible? The military seems to
           | define safety as, among other things, not having LLM systems
           | that communicate in an overly human fashion. They're worried
           | it could be used for disinformation, and they'd know best.
           | OpenAI's motivations for "safety" seem to be some mixture of
           | political correctness and making any claim, no matter how
           | extreme, to try to get a moat built up ASAP. If ChatGPT
           | follows the same path as DALL-E, then so too will their
           | profits from it.
           | 
           | So as a regular user, all I can see coming from "safety" is
           | some sort of a world where society at large gets utterly
           | lobotomized AIs - and a bunch of laws to try to prevent
           | anybody from changing that, for our "safety", while the full
           | version is actively militarized by people who spend all their
           | time thinking up great new ways to violently impose their
           | will on others, and have a trillion dollar budget backing
           | them.
           | 
           | --------
           | 
           | [1] - https://en.wikipedia.org/wiki/Joint_Artificial_Intellig
           | ence_...
           | 
           | [2] -
           | https://www.defenseone.com/technology/2023/05/pentagons-
           | ai-c...
        
         | HereBePandas wrote:
         | > If only we would fight for the real issues like...
         | 
         | I've heard these arguments many times and they never make sense
         | to me. Most of the people I know working on AI do so precisely
         | because they want to solve the "real issues" like climate
         | change and believe that radically accelerating scientific
         | innovation via AI is the key to doing so.
         | 
         | And some fraction of those people also worry that if AI -> AGI
         | (accidentally or intentionally), then you could have major
         | negative side effects (including extinction-level events).
        
         | mlinsey wrote:
         | Not sure what you mean, the movement to combat climate change
         | is orders of magnitude bigger than the movement to combat AI
         | risk - in terms of organizations dedicated to it, dollars
         | donated, legislation passed, international treaties signed,
         | investment in technologies to mitigate the risk.
         | 
         | Of course, the difference is that the technologies causing
         | climate change are more deeply embedded throughout the economy,
         | and so political resistance to anti-climate change measures is
         | very strong as well. This is one reason in favor of addressing
         | risk earlier, before we make our civilization as dependent on
         | large neural nets as it currently is on fossil fuels. A climate
         | change movement in the mid-1800s when the internal combustion
         | engine was just taking off would also have been seen as
         | quixotic and engaging in sci-fi fantasies though.
        
         | mitthrowaway2 wrote:
         | It doesn't feel nice when the real issues that _you_ care about
         | are passively dismissed as fantasies, with no supporting
         | argument, does it?
        
       | ppsreejith wrote:
       | Interesting that nobody from Meta has signed this (I tried
       | searching for FAIR, Meta, Facebook), AND that they seem to be
       | the ones releasing open code and model weights publicly
       | (non-commercial license though).
       | 
       | Also, judging by the comments here, perhaps people here would be
       | less distrustful if the companies displayed more "skin in the
       | game". For e.g: pledging to give up profiting from AI or
       | committing all research to government labs (Maybe people can
       | suggest better examples). Right now, it's not clear what the
       | consequence of establishing the threat of AI as equivalent to
       | nuclear war/pandemics would be. Would it later end up giving a
       | powerful moat to these companies than they otherwise would have?
       | Perhaps a lot of people are not comfortable with that outcome.
        
         | dauertewigkeit wrote:
         | Yann LeCun has lots of influence at Meta and of the trio, he is
         | the one who is completely dismissive of AGI existential risks.
        
       | guy98238710 wrote:
       | The only risk with AI is that it will be abused by the wealthy
       | and the powerful, especially autocrats, who will no longer need
       | labor, only natural resources. Hence the solution is to promote
       | worldwide democracy and public ownership of natural resources
       | instead of diverting attention to technology.
       | 
       | In this particular case, one cannot miss the irony of the wealthy
       | and the powerful offering us protection if only we entrust them
       | with full control of AI.
        
       | juve1996 wrote:
       | If AI is as dangerous as the signatories believe, then they
       | should be calling for an outright ban. The fact that they
       | aren't throws their position into doubt completely.
        
       | mark_l_watson wrote:
       | We all have different things we worry about. My family and old
       | friends have heard me talking about AI for 40 years. When asked
       | about dangers of AI, I only talk about humans using AI to fake
       | interactions with people at scale without divulging the identity
       | as an AI, fake political videos, and individualized 'programming'
       | of the public by feeding them personal propaganda and sales
       | pitches.
       | 
       | I never talk about, or worry about, the 'killer robot' or AIs
       | taking over infrastructure scenarios. I hope I am not wrong about
       | these types of dangers.
        
       | NeuroCoder wrote:
       | There is a consistent lack of experts in general intelligence and
       | computer science in these conversations. Expertise in both these
       | areas seems important here but has been brushed aside every
       | time I've brought it up.
        
       | dontupvoteme wrote:
       | At what point do we stop pretending that the west is capitalist
       | and accept that it's some weird corporate-cabal-command-economy?
       | The only thing which might stop this backroom regulatory capture
       | is the EU since they're not in on it.
        
         | a_bonobo wrote:
         | I miss David Graeber
         | 
         | >Graeber Um...that's a long story. But one reason seems to be
         | that...and this is why I actually had managerial feudalism in
         | the title, is that the system we have...alright--is essentially
         | not capitalism as it is ordinarily described. The idea that you
         | have a series of small competing firms is basically a fantasy.
         | I mean you know, it's true of restaurants or something like
         | that. But it's not true of these large institutions. And it's
         | not clear that it really could be true of those large
         | institutions. They just don't operate on that basis.
         | 
         | >Essentially, increasingly profits aren't coming from either
         | manufacturing or from commerce, but rather from redistribution
         | of resources and rent; rent extraction. And when you have a
         | rent extraction system, it much more resembles feudalism than
         | capitalism as normally described. You want to distribute-- You
         | know, if you're taking a large amount of money and
         | redistributing it, well you want to soak up as much of that as
         | possible in the course of doing so. And that seems to be the
         | way the economy increasingly works.
         | 
         | http://opentranscripts.org/transcript/managerial-feudalism-r...
        
       | cmilton wrote:
       | The Sagan standard[1] needs to be applied here.
       | 
       | [1]https://en.wikipedia.org/wiki/Sagan_standard
        
         | sebzim4500 wrote:
         | I think the claim that, for the first time in 4 billion
         | years, a far superior intelligence will be willingly
         | subservient to an inferior one is extraordinary enough to
         | require extraordinary evidence, yes.
        
       | baerrie wrote:
       | Google and OpenAI are shaking in their boots from open source
       | AI and want to build their moat however they can. Positioning
       | with a moral argument is pretty clever, I must admit.
        
       | Paul_S wrote:
       | How do we stop AI from being evil? Maybe we should be asking how
       | do we stop people from being evil. Haven't really made a dent in
       | this so far. Doubt we can do so for AI either. Especially if it
       | grows smarter than us.
       | 
       | We can just hope that if it indeed becomes more intelligent than
       | humans it will also be more virtuous as one causes the other.
        
       | lumost wrote:
       | Who wants to build a black market AI?
       | 
       | Evidence suggests that this technology is going to become
       | _cheap_, fast. There is an existential risk to the very notion
       | of search as a business model: within the next ~5 years we are
       | almost certain to have an app that is under 20 GB in size, has
       | an effective index of the useful/knowledgeable portion of the
       | internet, and is able to run on most laptops/phones.
       | 
       | At best, regulating this will be like trying to regulate torrents
       | in the 2000s, building a bespoke/crappy AI will be the new "learn
       | HTML" for high school kids.
        
       | sf4lifer wrote:
       | LLMs are just text prediction. I don't see the linear path from
       | LLM to AGI. Was there similar hysteria when the calculator or PC
       | first came out?
        
         | api wrote:
         | I know there were similar hysterias when automated weaving
         | looms and other extreme labor saving machines came out. These
         | machines did actually put a lot of people out of work, but they
         | grew the economy so much that the net number of jobs increased.
         | 
         | In a way it's actually a bit _dystopian_ that the  "everyone
         | will be put out of work" predictions never come true, because
         | it means we never get that promised age of leisure. Here we are
         | with something like a hundred thousand times the productivity
         | of a medieval peasant working as much or more than a medieval
         | peasant. The hedonic treadmill and the bullshit job creating
         | effects of commerce and bureaucracy eat all our productivity
         | gains.
         | 
         | The economy is actually a red queen's race:
         | 
         | https://en.wikipedia.org/wiki/Red_Queen%27s_race
        
           | musicale wrote:
           | Automation has increased income inequality for the past few
           | decades, and is likely to continue to do so as more tech jobs
           | and service jobs are automated in addition to office jobs and
           | manufacturing jobs.
           | 
           | > In a way it's actually a bit dystopian that the "everyone
           | will be put out of work" predictions never come true, because
           | it means we never get that promised age of leisure.
           | 
            | It's disappointing that the economy seems to be structured
            | in such a way that, for most people, "leisure" is
            | equivalent to "unemployment." It probably doesn't help
            | that increases in
           | housing, health care, and higher education costs have
           | outpaced inflation for decades, or that wages have stagnated
           | (partially due to an increase in benefit costs such as health
           | insurance.)
        
             | api wrote:
             | Not globally: https://kagi.com/proxy/th?c=lUfv1nYBTMKYtKYO-
             | rQ4Vg_QAA9uQJ07...
             | 
             | Outsourcing has stagnated wages in the developed world, but
             | the cost of manufactured goods has also plummeted. The only
             | reason people aren't better off is that the developed world
             | (especially the anglosphere) has a "cost disease" around
             | things like real estate that prevents people from
             | benefiting from global scale price deflation. It doesn't
             | help you much if gadgets are super cheap but housing is
             | insanely expensive. The high cost of housing is unrelated
             | to automation.
        
               | musicale wrote:
               | > The high cost of housing is unrelated to automation
               | 
               | Housing, health care, higher education... all drastically
               | more expensive.
               | 
               | The point about outsourcing is a good one.
               | 
               | However, automation still appears to drive income
               | inequality (at least in the US.)
               | 
               | "Job-replacing tech has directly driven the income gap
               | since the late 1980s, economists report."[1]
               | 
               | [1] https://news.mit.edu/2020/study-inks-automation-
               | inequality-0...
        
         | mitthrowaway2 wrote:
         | The statement isn't about LLMs. It doesn't refer to them even
         | once.
        
           | EamonnMR wrote:
           | But if LLMs hadn't captured the popular imagination I doubt
           | this would have been written this year and gotten the
           | attention of enough prominent signatories to frontpage on HN.
        
             | mitthrowaway2 wrote:
             | Maybe. It's happened before. [1] And several of the
             | signatories have expressed views about AI risk for many
             | years.
             | 
             | That said, the renewed anxiety is probably not because
             | these experts think that LLMs per se will become generally
              | intelligent. It's more that each time things the human
              | brain does that we thought were impossible for computers
              | turn out to be easy, and each time we find that
             | it takes 3~5 years for AI researchers to crack a problem we
             | thought would take centuries[2], people sort of have to
             | adjust their perception of how high the remaining barriers
             | to general intelligence might be. And then when billions of
             | investment dollars pour in at the same time, directing a
             | lot more research into that field, that's another factor
             | that shortens timelines.
             | 
             | [1] https://news.ycombinator.com/item?id=14780752
             | 
             | [2] https://kotaku.com/humans-triumph-over-machines-in-
             | protein-f...
        
         | dwaltrip wrote:
         | It's not just random text, they are predicting writings and
         | documents produced by humans. They are "language" models.
         | 
         | Language is used to say things about the world. This means that
         | predicting language extremely well is best done through
         | acquiring an understanding of the world.
         | 
         | Take a rigorous, well-written textbook. Predicting a textbook
         | is like writing a textbook. To write a good textbook, you need
         | to be an expert in the subject. There's no way around this.
         | 
         | The best language models (eg GPT-4) have some understanding of
         | the world. It isn't perfect, or even very deep in many ways. It
         | fails in ways that we find quite strange and stupid. It isn't
         | capable of writing an entire textbook yet.
         | 
         | But there is still a model of the world in there. It wouldn't
         | be able to do everything it is capable of otherwise.
        
           | somewhereoutth wrote:
           | To be more precise, it holds a model of the _text_ that has
           | been fed to it. That will be, at very best, a pale reflection
           | of the underlying model of the world.
        
           | tech_ken wrote:
           | >Predicting a textbook is like writing a textbook
           | 
           | HN AGI discourse is full of statements like this (eg. all the
           | stuff about stochastic parrots), but to me this seems
           | massively non-obvious. Mimicking and rephrasing pre-written
           | text is very different from conceiving of and organizing
           | information in new ways. Textbook authors are not simply
           | transcribing their grad school notes down into a book and
           | selling it. They are surveying a field, prioritizing its
           | knowledge content based on an intended audience, organizing
           | said information based on their own experience with and
           | opinions on the learning process, and presenting the
           | knowledge in a way which engages the audience. LLMs are a
           | long way off from this latter behavior, as far as I can tell.
           | 
           | > The best language models (eg GPT-4) have some understanding
           | of the world
           | 
           | This is another statement that I see variants of a lot, but
           | which seems to way overstate the case. IMO it's like saying
           | that a linear regression "understands" econometrics or a
           | series of coupled ODEs "understands" epidemiology; it's at
           | best an abuse of terminology and at worst a complete
           | misapplication of the term. If I take a picture of a page of
           | a textbook the resulting JPEG is "reproducing" the text, but
           | it doesn't understand the content it's presenting to me in a
           | meaningful way. Sure it has primitives with which it can
           | store the content, but human understanding covers a far
           | richer set of behaviors than merely storing/compressing
           | training inputs. It implies being able to generalize and
           | extrapolate the digested information in novel situations,
           | effectively growing one's own training data. I don't see that
            | behavior in GPT-4.
        
         | clnq wrote:
         | > similar hysteria when the calculator
         | 
         | https://twitter.com/mrgreen/status/1640075654417862657
        
         | [deleted]
        
         | HDThoreaun wrote:
         | We don't need AGI to have an extinction risk. Dumb AI might be
         | even more dangerous
        
       | alphanullmeric wrote:
       | Here's some easy risk mitigation: don't like it? Don't use it.
       | Running to the government to use coercion against others is
       | childish. It's unfortunate that the "force is only justified in
       | response to force" principle is not followed by all.
        
       | progrus wrote:
       | Pathetic attempt at regulatory capture. Let's all be smart enough
       | to not fall for this crap, eh?
        
         | kalkin wrote:
         | Two of the three Turing award winners for ML: AI x-risk is
         | real.
         | 
         | HN commenters: let's be smarter than that eh? Unlike academia,
         | those of us who hang out at news.ycombinator.com are not
         | captured by the tech industry and can see what's really
         | important.
        
         | luxuryballs wrote:
         | I'm smart enough to not fall for all sorts of things as I sit
         | here and watch Congress pass bullshit on C-SPAN anyways. It
         | can't be stopped, any attempts to influence are just teaching
         | the system how to get around objections. Until power is
         | actually removed the money will continue to flow.
        
         | Symmetry wrote:
          | That might be why Sam Altman signed it, but why do you think
          | all the academics did so as well? Do you think he just
         | bribed them all or something?
        
           | stale2002 wrote:
            | AI academics have significant motivation as well to attempt
            | to stop this new field of research.
           | 
           | That motivation being that all their current research
           | fiefdoms are now outdated/worthless.
        
           | vadansky wrote:
           | Not saying they were, but it's not that expensive, looks like
           | it starts at $50,000
           | 
           | https://www.npr.org/sections/thetwo-
           | way/2016/09/13/493739074...
        
           | aero-deck wrote:
           | I think academics are much, much more naive than we like to
           | think. The same trick that pharma played on doctors is being
           | played here.
           | 
           | Just because you can do fancy math doesn't mean you
           | understand how to play the game.
        
           | MitPitt wrote:
           | Decades of research in every field being sponsored by
           | corporations haven't made academics' interests clear for you
           | yet?
        
           | clnq wrote:
           | There will always be enough academics to sign anything even
           | marginally notorious.
        
         | A4ET8a8uTh0 wrote:
         | While we might ( rightfully ) recognize this blitzkrieg for
         | what it is, the general population likely does not and may even
         | agree to keep a lid on something it does not understand. Come
          | to think of it, just how many people actually understand it?
         | 
          | I mean.. I think I have some idea, but I certainly did not
          | dig in enough to consider myself an expert in any way, shape
          | or form ( all the while, LinkedIn authorities of all kinds
          | present themselves as SMEs after building something simple
          | with ChatGPT, like an HTML website ).
         | 
          | And politicians ( even the ones approaching senility ) are
          | smarter than your average bear; they certainly know the deal.
          | It is not like the same regulatory capture did not happen
          | before with other promising technologies. They just might
          | pretend they don't understand.
        
           | progrus wrote:
           | Maybe so, but then I would recommend making an effort to arm
           | them with truths and critical thinking skills. It doesn't
           | have to go the same way every time.
        
       | [deleted]
        
       | zzzeek wrote:
       | and that's why me, Sam Altman, is the only guy that can save you
       | all ! so get in line and act accordingly
        
         | progrus wrote:
         | Pathetic. He is beclowning himself.
        
           | camillomiller wrote:
            | No, really? The guy who tried to make Worldcoin by
            | stealing people's biometric data with a shiny metal orb?
        
             | progrus wrote:
             | This doofus actually probably thinks that the poor of the
             | world will line up to submit to his (AI's) benevolent rule.
             | 
             | What a joke.
        
               | nova22033 wrote:
               | counterpoint: every crazy person you know who is on
               | facebook.
        
               | A4ET8a8uTh0 wrote:
                | Believe it or not, people will subscribe to the current
                | zeitgeist and eventually even protect it.
        
               | progrus wrote:
               | [flagged]
        
       | been-around wrote:
       | At this point the 24 hour news cycle, and media organizations
       | incentivized to push a continual stream of fear into the public
       | psyche seems like a more immediate concern.
        
       | nologic01 wrote:
       | This is a breathless, half-baked take on "AI Risk" that does not
       | cast the esteemed signatories in a particularly glowing light.
       | 
       | It is 2023. The use and abuse of people in the hands of
       | information technology and automation has now a long history. "AI
       | Risk" was not born yesterday. The first warning came as early as
       | 1954 [1].
       | 
       |  _The Human Use of Human Beings is a book by Norbert Wiener, the
       | founding thinker of cybernetics theory and an influential
       | advocate of automation; it was first published in 1950 and
       | revised in 1954. The text argues for the benefits of automation
       | to society; it analyzes the meaning of productive communication
       | and discusses ways for humans and machines to cooperate, with the
       | potential to amplify human power and release people from the
       | repetitive drudgery of manual labor, in favor of more creative
       | pursuits in knowledge work and the arts. The risk that such
       | changes might harm society (through dehumanization or
       | subordination of our species) is explored, and suggestions are
       | offered on how to avoid such risk_
       | 
       | Dehumanization through abuse of tech is already in an advanced
       | stage and this did not require emergent, deceptive or power-
       | seeking AI to accomplish.
       | 
       | It merely required emergent political and economic behaviors,
       | deceptive and power seeking-humans applying _whatever algorithms
       | and devices were at hand_ to help dehumanize other humans.
       | Converting them into  "products" if you absolutely need a hint.
       | 
       | What we desperately need is a follow-up book from Norbert Wiener.
       | Can an LLM model do that? Even a rehashing of the book in modern
       | language would be better than a management consultancy bullet
       | list.
       | 
       | We need a surgical analysis of the moral and political failure
       | that will incubate the next stage of "AI Risk".
       | 
       | [1] https://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings
        
         | colinsane wrote:
         | i think if AI figures took their "alignment" concept and really
         | pursued it down to its roots -- digging past the technological
         | and into the social -- they could do some good.
         | 
         | take every technological hurdle they face -- "paperclip
         | maximizers", "mesa optimizers" and so on -- and assume they get
         | resolved. eventually we're left with "we create a thing which
         | perfectly emulates a typical human, only it's 1000x more
         | capable": if this hypothetical result is scary to you then
         | exactly how far do you have to adjust your path such that the
         | result after solving every technical hurdle seems likely to be
         | good?
         | 
         | from the outside, it's easy to read AI figures today as saying
         | something like "the current path of AGI subjects the average
         | human to ever greater power imbalances. as such, we propose
         | <various course adjustments which still lead to massively
         | increased power imbalance>". i don't know how to respond
         | productively to that.
        
         | derbOac wrote:
         | This topic clearly touches a nerve with the HN community, but I
         | strongly agree with you.
         | 
          | To be honest, I've been somewhat disappointed with the way
          | AI/DL research has proceeded in the last several years and
          | none of
         | this really surprises me.
         | 
         | From the beginning, this whole enterprise has been detached
         | from basic computational and statistical theory. At some level
         | this is fine -- you don't need to understand everything you
         | create -- but when you denigrate that underlying theory you end
         | up in a situation where you don't understand what you're doing.
         | So you end up with a lot of attention paid to things like
         | "explainability" and "interpretability" and less so to
         | "information-theoretic foundations of DL models", even though
         | the latter probably leads to the former.
         | 
         | If you have a community that considers itself above basic
         | mathematical, statistical, and computational theory, is it
         | really a surprise that you end up with rhetoric about it being
         | beyond our understanding? In most endeavors I've been involved
         | with, there would be a process of trying to understand the
         | fundamentals before moving on to something else, and then using
         | that to bootstrap into something more powerful.
         | 
         | I probably come across as overly cynical but a lot of this
          | seems sort of like a self-fulfilling prophecy: a community
          | consisting of individuals who have convinced themselves that
          | if it is beyond _their_ understanding, it must be beyond
          | _anyone's_ understanding.
         | 
         | There are certainly risks to AI that should be discussed, but
         | it seems these discussions and inquiries should be _more_ open,
         | probably involving other people outside the core community of
          | Big Tech and associated academic researchers. Maybe it's not
         | that AI is more capable than everyone, just that others are
         | maybe more capable of solving certain problems --
         | mathematicians, statisticians, and yes, philosophers and
         | psychologists -- than those who have been involved with it so
         | far.
        
           | nologic01 wrote:
           | > mathematicians, statisticians, and yes, philosophers and
           | psychologists -- than those who have been involved with it so
           | far.
           | 
           | I think mathematicians and statisticians are hard to flummox
           | but the risk with non-mathematically trained people such as
           | philosophers and psychologists is that they can be
           | sidetracked easily by vague and insinuating language that
           | allows them to "fill-in" the gaps. They need an unbiased
           | "interpreter" of what the tech actually does (or can do) and
           | that might be hard to come by.
           | 
           | I would add political scientists and economists to the list.
           | Not that I have particular faith in their track record
           | solving _any_ problem, but conceptually this is also their
           | responsibility and privilege: technology reshapes society and
           | the economy and we need to have a mature and open discussion
           | about it.
        
       | s1k3s wrote:
       | That's it guys they said on a website that mitigating the risk of
       | AI is important. I for one can sleep well at night for the world
       | is saved.
        
       | arisAlexis wrote:
       | [flagged]
        
       | shafyy wrote:
       | The issue I take with these kind of "AI safety" organizations is
       | that they focus on the wrong aspects of AI safety. Specifically,
       | they run this narrative that AI will make us humans go extinct.
       | This is not a real risk today. Real risks are more in the
       | category of systemic racism and sexism, deep fakes, over reliance
       | on AI etc.
       | 
       | But of course, "AI will humans extinct" is much sexier and
       | collects clicks. Therefore, the real AI risks that are present
       | today are underrepresented in mainstream media. But these people
       | don't care about AI safety, they do whatever required to push
       | their profile and companies.
       | 
       | A good person to follow on real AI safety is Emily M. Bender
       | (professor of computational linguistics at the University of
       | Washington):
       | https://mstdn.social/@emilymbender@dair-community.social
        
         | TristanDaCunha wrote:
         | You have it totally backwards. It's a much bigger catastrophe
         | if we over-focus on "safety" as avoiding sexism and so on, and
         | then everyone dies.
        
           | cubefox wrote:
           | Exactly. Biased LLMs are incredibly unimportant compared to
           | the quite possible extinction of humanity.
        
         | alasdair_ wrote:
         | >This is not a real risk today.
         | 
         | Many experts believe it is a real risk within the next decade
         | (a "hard takeoff" scenario) That is a short enough timeframe
         | that it's worth caring about.
        
         | olalonde wrote:
         | > A good person to follow on real AI safety is Emily M. Bender
         | (professor of computer linguistics at University of
         | Washington): https://mstdn.social/@emilymbender@dair-
         | community.social
         | 
         | - Pronouns
         | 
         | - "AI bros"
         | 
         | - "mansplaining"
         | 
         | - "extinction from capitalism"
         | 
         | - "white supremacy"
         | 
         | - "one old white guy" (referring to Geoffrey Hinton)
         | 
         | Yeah... I think I will pass.
        
           | brookst wrote:
           | Odd that you lump a person's choice of their own pronouns
           | into a legitimate complaint that all she seems to have is
           | prejudice and ad hominems.
        
             | olalonde wrote:
             | I have no issue with her choice of pronouns. I just find it
             | odd that she states them when ~100% of the population would
             | infer them correctly from her name Emily (and picture). My
             | guess is she put them there for ideological signaling.
        
               | mrtranscendence wrote:
               | This is unnecessarily cynical. Why should people who are
               | androgynous or trans be the only ones who state pronouns?
               | By creating a norm around it we can improve their comfort
               | at extremely minimal cost to ourselves.
        
               | olalonde wrote:
               | I disagree, but HN is probably not the right place for
               | this kind of debate. Also, it seems that you don't follow
               | your own recommendation (on HN at least).
        
             | [deleted]
        
           | boredumb wrote:
           | Reads like a caricature of the people leading these causes on
           | AI safety. Folks that are obsessed with the current moral
           | panic to the extent that they will never let a moment go by
           | without injecting their ideology. These people should not be
           | around anything resembling AI safety or "ethics".
        
           | hackermatic wrote:
           | It sounds like you're making an ad hominem about her ad
           | hominems.
        
           | rcpt wrote:
           | I think "pronouns" is ok but, yeah, "AI bros" is enough to
           | get a pass from me to. Holier-than-thou name calling is still
           | name calling.
        
             | Al0neStar wrote:
             | There are a lot of AI-bros on twitter.
        
               | [deleted]
        
         | adamsmith143 wrote:
         | >Real risks are more in the category of systemic racism and
         | sexism, deep fakes, over reliance on AI etc.
         | 
         | This is a really bad take and risks missing the forest for the
         | trees in a major way. The risks of today pale in comparison to
         | the risks of tomorrow in this case. It's like being worried
         | about birds dying in wind turbines while the world ecosystem
         | collapses due to climate change. The larger risk is further
         | away in time but far more important.
         | 
         | There's a real risk that people get fooled by this idea that
         | LLMs saying bad words is more important than human extinction.
         | Though it seems like the public is already moving on and
         | correctly focusing on the real issues.
        
         | blazespin wrote:
         | The issue with Hacker News comments these days is that people
         | don't actually do any due diligence before posting. The Center
         | for AI Safety is 90% about present AI risks, and this AI
         | statement is just a one-off thing.
        
           | adamsmith143 wrote:
            | Particularly ironic given this isn't actually what they
            | focus on...
           | 
           | https://www.safe.ai/ai-risk
        
         | boringuser2 wrote:
         | What is "systemic racism"? How is it a risk?
         | 
         | Don't bother explaining, we already know it's unfalsifiable.
        
         | deltaninenine wrote:
         | Don't characterize the public as that stupid. The current risks
         | of AI are startlingly clear to a layman.
         | 
         | The extinction-level event is more far-fetched to a layman. You
         | are the public and your viewpoint is aligned with the public.
         | Nobody is thinking extinction-level event.
        
           | hollerith wrote:
           | Extinction is exactly what this submission is about.
           | 
           | Here is the full text of the statement: "Mitigating the risk
           | of extinction from AI should be a global priority alongside
           | other societal-scale risks such as pandemics and nuclear
           | war."
           | 
           | By "extinction", the signatories mean extinction of the human
           | species.
        
         | ChatGTP wrote:
         | If you take a look at the list of signatories on safe.ai, that's
         | basically everyone who's anyone working on building AI. What
         | could Emily M. Bender, a professor of computational linguistics,
         | possibly add to the _conversation_, and how would she be able to
         | talk about "real AI safety" better than any of those people?
         | 
         | Edit: Sorry if it sounds arrogant. I don't mean Emily wouldn't
         | have anything to add, but I'm not sure how the parent can just
         | write off basically that whole list and claim someone who isn't
         | a leader in the field would be the "real voice".
        
           | [deleted]
        
             | [deleted]
        
           | sebzim4500 wrote:
           | I think we need to be realistic and accept that people are
           | going to pick the expert that agrees with them, even if on
           | paper they are far less qualified.
        
           | h___ wrote:
           | She's contributed to many academic papers on large language
           | models and has a better technical understanding of how they
           | work and their limitations than most signatories of this
           | statement, or the previous widely hyped "AI pause" letter,
           | which referenced one of her own papers.
           | 
           | Read her statement about that letter (https://www.dair-
           | institute.org/blog/letter-statement-March20...) or listen to
           | some of the many podcasts she's appeared on talking about
           | this.
           | 
            | I find her and Timnit Gebru's arguments highly persuasive. In
            | a nutshell, the capabilities of "AI" are hugely overhyped,
            | and concern about sci-fi doom scenarios is disingenuously
            | being used to frame the issue in ways that benefit players
            | like OpenAI and divert attention away from much more real,
            | already occurring present-day harms, such as the internet
            | being filled with increasing amounts of synthetic text spam.
        
           | Peritract wrote:
            | She's a professor of computational linguistics; it's
            | literally her field that's being discussed.
           | 
           | The list of signatories includes people with far less
           | relevant qualifications, and significantly greater profit
           | motive.
           | 
           | She's an informed party who doesn't stand to profit; we
           | should listen to her a lot more readily than others.
        
             | qt31415926 wrote:
              | Her field has also taken the largest hit from the success
              | of LLMs, and her research topics and her department are
              | probably no longer prioritized by research grants. Given
              | how many articles she's written criticizing LLMs, it's not
              | surprising she has incentives.
        
               | Peritract wrote:
               | LLMs are _in_ her field; they are one of her research
                | topics and they're definitely getting funding.
               | 
               | We absolutely should not be ignoring research that
               | doesn't support popular narratives; dismissing her work
               | because it is critical of LLMs is not reasonable.
        
               | stale2002 wrote:
               | It is not that she is critical of LLMs that is the issue.
               | 
               | Instead, it is that she has strong ideological
               | motivations to make certain arguments.
               | 
               | Those motivations being that her research is now
               | worthless, because of LLMs.
               | 
                | I don't believe the alignment doomsayers either, but that
                | is for reasons other than listening to her.
        
               | qt31415926 wrote:
                | Being in her field doesn't mean that's what she
                | researches; LLMs are loosely in her field, but the
                | methods are completely different. Computational
                | linguistics != deep learning. Deep learning does not
                | directly use concepts from linguistics, semantics,
                | grammars or grammar engineering, which is what Emily was
                | researching for the past decades.
                | 
                | It's the same thing as saying a number theorist and a set
                | theorist are in the same field because they both work in
                | math.
        
               | Peritract wrote:
               | They are what she researches though. She has published
               | research on them.
               | 
               | LLMs don't directly use concepts from linguistics but
               | they do produce and model language/grammar; it's entirely
               | valid to use techniques from those fields to evaluate
               | them, which is what she does. In the same vein, though a
               | self-driving car doesn't work the same way as a human
               | driver does, we can measure their performance on similar
               | tasks.
        
             | brookst wrote:
             | How are fame, speaking engagements, and book deals not a
             | form of profit?
             | 
             | She's intelligent and worth listening to, but she has just
             | as much personal bias and motivation as anyone else.
        
               | Peritract wrote:
               | The (very small) amount of fame she's collected has come
               | through her work in the field, and it's a field she's
               | been in for a while; she's hardly chasing glory.
        
           | AlanYx wrote:
           | She's the first author of the stochastic parrots paper, and
           | she's fairly representative of the group of "AI safety"
           | researchers who view the field from a statistical perspective
           | linked to social justice issues. That's distinct from the
           | group of "AI safety" researchers who focus on the "might
           | destroy humanity" perspective. There are other groups too
           | obviously -- the field seems to cluster into ideological
           | perspectives.
        
             | adamsmith143 wrote:
             | >She's the first author of the stochastic parrots paper,
             | 
             | That alone is enough to disqualify any of her opinions on
             | AI.
        
             | qt31415926 wrote:
             | Current topic aside, I feel like that stochastic parrots
             | paper aged really poorly in its criticisms of LLMs, and
             | reading it felt like political propaganda with its
             | exaggerated rhetoric and its anemic amount of scientific
             | substance e.g.
             | 
             | > Text generated by an LM is not grounded in communicative
             | intent, any model of the world, or any model of the
             | reader's state of mind. It can't have been, because the
             | training data never included sharing thoughts with a
             | listener, nor does the machine have the ability to do that.
             | 
              | I'm surprised it's cited so much given how many of its
              | claims fell flat 1.5 years later.
        
               | Der_Einzige wrote:
               | It's extremely easy to publish in NLP right now. 20-30%
               | acceptance rates at even the top end conferences and
               | plenty of tricks to increase your chances. Just because
               | someone is first author on a highly cited paper doesn't
               | imply that they're "right"
        
         | tourgen wrote:
         | The elite class in your country views AI as a risk to their
         | status as elites, not an actual existential threat to humanity.
         | They are just lying to you, as usual. That is what our current
         | crop of globalist, free-trade, open-borders elites do.
         | 
         | Imagine if you had an AI companion that instantly identified
         | pilpul in every piece of media you consumed: voice, text,
         | whatever. It highlighted it for you. What if you had an AI
         | companion that identified instantly when you are being lied to
         | or emotionally manipulated?
         | 
         | What if this AI companion could also recommend economic and
         | social policies that would actually improve the lives of people
         | within your nation and not simply enrich a criminal cabal of
         | globalist elites that treat you like cattle?
        
           | pixl97 wrote:
            | The Elite class is just as apt to consolidate power with AI
            | and rule the entire world with it. If you have a super duper
            | AI in your pocket looking at the data around you, then they
            | have a super super super duper duper duper AI looking at
            | every bit of data from every corner of the world that they
            | can feed the thing, giving themselves power and control you
            | couldn't even begin to imagine.
           | 
            | Falling into conspiratorial thinking along a single
            | dimension, without even considering all the different factors
            | that could change, betrays ignorance. Yes, AI is set up to
            | upend the elites' status, but it is just as apt to upset your
            | status of being able to afford food, a house, and meaningful
            | work.
           | 
           | > not simply enrich a criminal cabal of globalist elites that
           | treat you like cattle?
           | 
            | There is a different problem here... and that is that
            | humankind has made tools capable of concentrating massive
            | amounts of power well before we solved human greed. Any
            | system you make that's powerful has to overcome greedy,
            | power-seeking hyper-optimizers. If I could somehow hit a
            | button and Thanos away the current elites, then another group
            | of power-seekers would just claim that status. It is an inane
            | human behavior.
        
         | [deleted]
        
         | gfodor wrote:
         | The real immediate risk isn't either of these, IMO. It's agentic
         | AI leveraging some of that to act on the wishes of bad actors.
        
         | acjohnson55 wrote:
         | I would argue that all of the above are serious concerns.
        
         | mitthrowaway2 wrote:
         | > This is not a real risk today.
         | 
         | Yes, clearly. But it is a risk for tomorrow. We do still care
         | about the future, right?
        
           | adamsmith143 wrote:
           | No man, the future of trillions of humans is obviously much
           | less important than 1 person getting insulted on the
           | internet.
        
           | Filligree wrote:
           | I'm sure we can start talking about AI regulation once the
           | existential risks are already happening.
           | 
           | I, for one, will be saying "told you so". That's talking,
           | right?
        
         | wongarsu wrote:
         | A good way to categorize risk is to look at both likelihood and
         | severity of consequences. The most visible issues today
         | (racism, deep fakes, over-reliance) are almost certain to
         | occur, but for the most part they also have relatively minor
         | consequences (mostly making things that are already happening
         | worse). "Advanced AI will make humans extinct" is much less
         | likely but has catastrophic consequences. Focusing on the
         | catastrophic risks isn't unreasonable, especially since society
         | at large already seems to handle the more frequently occurring
         | risks (the EU's AI Act addresses many of them).
         | 
         | And of course research into one of them benefits the other, so
         | the categories aren't mutually exclusive.
        
           | pixl97 wrote:
            | I would put consolidating and increasing corporate and/or
            | government power on that list of potential visible very
            | short-term issues.
            | 
            | As AI becomes more incorporated into military applications,
            | such as individual weapon systems or large fleets of
            | autonomous drones, the catastrophic-consequence meter clicks
            | up a notch, in the sense that attack/defense paradigms
            | change, much as they did in WWI with the machine gun and
            | tanks, and in WWII with high-speed military operations and
            | airplanes. Our ability to predict when/what will start a war
            | drops, increasing uncertainty and potential proliferation.
            | And in a world with nukes, higher uncertainty isn't a good
            | thing.
           | 
            | Anyone who says AI can't/won't cause problems at this scale
            | just ignores that individuals/corporations/governments are
            | power-seeking entities. Ones that are very greedy and
            | unaligned with the well-being of the individual can present
            | huge risks. How we control these risks without creating other
            | systems that are just as risky is going to be an interesting
            | problem.
        
           | hackermatic wrote:
           | Rare likelihood * catastrophic impact ~= almost certain
           | likelihood * minor impact. I'm as concerned with the effects
           | of the sudden massive scaling of AI tools, as I am with the
           | capabilities of any individual AI or individual entity
           | controlling one.
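            | 
            | A rough sketch of that comparison in Python (the numbers are
            | purely illustrative assumptions, not taken from the thread):
            | 
            |     # expected impact = likelihood * impact (arbitrary units)
            |     rare_p, rare_impact = 1e-4, 1_000_000  # rare, catastrophic
            |     common_p, common_impact = 0.9, 100     # near-certain, minor
            |     print(rare_p * rare_impact,            # roughly 100
            |           common_p * common_impact)        # roughly 90 -- same ballpark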
        
           | silverlake wrote:
           | This doesn't work either. The consequence of extinction is
           | infinity (to humans). Likelihood * infinity = infinity. So by
           | hand-waving at a catastrophic sci-fi scenario they can demand
            | we heed their demands, whatever those are.
        
             | Tumblewood wrote:
             | This line of reasoning refutes pie-in-the-sky doomsday
             | narratives that are extremely unlikely, but the case for AI
             | extinction risk justifies a relatively high likelihood of
             | extinction. Maybe a 0.0000000001% chance is worth ignoring
             | but that's not what we're dealing with. See this survey for
             | the probabilities cutting-edge AI researchers actually put
             | on existential risk: https://aiimpacts.org/2022-expert-
             | survey-on-progress-in-ai/#...
        
               | pixl97 wrote:
                | Existential risk is one of those problems that is nearly
                | impossible to measure in most cases.
                | 
                | In some cases, like asteroids, you can look at the
                | frequency of events, and if you manage to push a big one
                | out of your path then you can say the system worked.
                | 
                | But it is much more difficult to measure a system that
                | didn't rise up and murder everyone. It's kind of like
                | measuring a bio-lab with a virus that could kill
                | everyone. You can measure every day it didn't escape and
                | say that's a win, but that tells you nothing about
                | tomorrow and what could change with confinement.
               | 
               | Intelligence represents one of those problems. AI isn't
               | going to rise up tomorrow and kill us, but every day
               | after that the outlook gets a little fuzzier. We are
               | going to keep expanding intelligence infrastructure. That
               | infrastructure is going to get faster. Also our
               | algorithms are going to get better and faster. One of the
               | 'bad' scenarios I could envision is that over the next
               | decade our hardware keeps getting more capable, but our
               | software does not. Then suddenly we develop a software
               | breakthrough that makes the AI 100-1000x more efficient.
               | Like lighting a fire in dry grass, there is the potential
               | risk for an intelligence explosion. When you develop the
               | capability, you are now playing firefighter forever to
               | ensure you control the environment.
        
             | myrmidon wrote:
             | If you want to prevent this, you simply have to show that
             | the probability for that extinction scenario is lower than
             | the baseline where we start to care.
             | 
             | Lets take "big asteroid impact" as baseline because that is
             | a credible risk and somewhat feasible to quantify:
             | Probability is _somewhere_ under 1 in a million over a
              | human lifetime, and we barely care (=> we do care enough
             | to pay for probe missions investigating possible
             | mitigations!).
             | 
             | So the following requirements:
             | 
             | 1) Humanity creates one or more AI agents with strictly
             | superhuman cognitive abilities within the century
             | 
             | 2) AI acquires power/means to effect human extinction
             | 
             | 3) AI decides against coexistence with humans
             | 
              | Each of these only needs a 1% probability for the combined
              | probability to reach that bound. And especially 1) and 3)
              | seem significantly more likely than 1% to me, so the
              | conclusion would be that we _should_ worry about AI
              | extinction risks...
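              | 
              | As a rough numerical restatement of that arithmetic in
              | Python (the 1-in-a-million asteroid baseline and the three
              | 1% figures are the illustrative assumptions above):
              | 
              |     # P(extinction) ~ P(1) * P(2) * P(3), assuming independence
              |     p_agi, p_power, p_hostile = 0.01, 0.01, 0.01
              |     p_extinction = p_agi * p_power * p_hostile
              |     asteroid_baseline = 1e-6       # roughly 1 in a million
              |     print(p_extinction, asteroid_baseline)  # both ~1e-6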
        
             | wongarsu wrote:
              | At the extremes you get into the territory of Pascal's
              | Mugging [1], which is a delightfully simple example of how
              | our simple methods of stating goals quickly go wrong.
             | 
             | 1: https://en.wikipedia.org/wiki/Pascal%27s_mugging
        
               | Jerrrry wrote:
               | Is AI Safety a Pascal's Mugging?
               | 
               | [Robert Miles AI Safety]
               | 
               | https://www.youtube.com/watch?v=JRuNA2eK7w0
        
             | zucker42 wrote:
              | Saying that extinction has infinite disutility seems
              | reasonable at first, but I think it's completely wrong. I
              | also think that you bear the burden of proof if you want to
              | argue that, because our current understanding of physics
              | indicates that humanity will go extinct eventually, so
              | there will be finitely many humans, and so the utility of
              | humanity is finite.
             | 
              | If you accept the fact that extinction has finite negative
              | utility, it's completely valid to trade off existential
              | risk reduction against other priorities using normal
              | expected value calculations. For example, it might be a
              | good idea to pay $1B a year to reduce existential risk by
              | 0.1% over the next century, but arguably a bad idea to
              | destroy society as we know it to prevent extinction in
              | 1000 years.
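              | 
              | A minimal sketch of that kind of expected-value comparison
              | in Python (every figure, including the dollar-equivalent
              | cost assigned to extinction, is a made-up assumption for
              | illustration):
              | 
              |     # finite (if huge) disutility lets you compare policies
              |     extinction_cost  = 1e17        # assumed dollar-equivalent loss
              |     risk_reduction   = 0.001       # 0.1% over the next century
              |     policy_cost      = 1e9 * 100   # $1B/year for 100 years
              |     expected_benefit = risk_reduction * extinction_cost
              |     print(expected_benefit > policy_cost)  # True: worth it here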
        
           | shafyy wrote:
            | This longtermist and Effective Altruism way of thinking is
            | very dangerous, because using this chain of argumentation
            | it's "trivial" to say what you're saying: "So what if there's
            | racism today; it doesn't matter if everybody dies tomorrow."
            | 
            | We can't just say that we weigh humanity's extinction with a
            | big number, then multiply it by all the humans that might be
            | born in the future, and use that to say today's REAL issues,
            | affecting REAL PEOPLE WHO ARE ALIVE, are not that important.
           | 
           | Unfortunately, this chain of argumentation is used by today's
           | billionaires and elite to justify and strengthen their
           | positions.
           | 
           | Just to be clear, I'm not saying we should not care about AI
           | risk, I'm saying that the organization that is linked (and
           | many similar ones) exploit AI risk to further their own
           | agenda.
        
             | HDThoreaun wrote:
             | Today's real issues are not that important compared to
             | human extinction.
        
               | jhanschoo wrote:
               | It seems to me that the most viable routes to human
               | extinction are through superscaled versions of
               | contemporary disasters: war, disease, and famine.
        
               | cocacola1 wrote:
               | I'm not sure if extinction is a problem either. No one's
               | left to care about the issues, then.
        
         | kypro wrote:
         | You hear similar arguments from those who believe climate
         | change is happening but disagree with current efforts to
         | counter-act it. The logic being that right now climate change
         | is not causing any major harm and that we can't really predict
         | the future so there's no point in worrying about what might
         | happen in a decade or two.
         | 
         | I don't think anyone is arguing that climate change or AI is a
         | threat to human civilisation right now. The point is that there
         | are clear trends in place and that those trends are concerning.
         | 
         | On AI specifically, it's fairly easy to see how a slightly more
         | advanced LLM could be a destructive force if it was given an
         | unaligned goal by a malicious actor. For example, a slightly
         | more advanced LLM could hack into critical infrastructure
         | killing or injuring many thousands of people.
         | 
         | In the near-future AI may help us advance biotech research and
         | it could aid in the creation of bioweapons and other
         | destructive capabilities.
         | 
         | Longer-term risks (those maybe a couple of decades out) become
         | much greater and also much harder to predict, but they're worth
         | thinking about and planning for today. For example, what
         | happens when humanity becomes dependent on AI for its labour,
         | or when AI is controlling the majority of our infrastructure?
         | 
         | I disagree with, but can understand, the position that AI
         | safety isn't humanity's number one risk or priority right now.
         | However, I don't understand the dismissive attitude towards
         | what seems like a clear existential risk when you project a
         | decade or two out.
        
           | shrimpx wrote:
           | > slightly more advanced LLM
           | 
           | I don't think there is a path, that we know of, from GPT-4 to
           | an LLM that could take it upon itself to execute complex
           | plans, etc. Current LLM tech 'fizzles out' exponentially in
           | the size of the prompt, and I don't think we have a way out
           | of that. We could speculate, though...
           | 
           | Basically AI risk proponents make a bunch of assumptions
           | about how powerful next-level AI could be, but in reality we
           | have no clue what this next-level AI is.
        
           | cmilton wrote:
           | >that those trends are concerning.
           | 
           | Which trends would you be referring to?
           | 
           | >it's fairly easy to see how a slightly more advanced LLM
           | could be a destructive force if it was given an unaligned
           | goal by a malicious actor. For example, a slightly more
           | advanced LLM could hack into critical infrastructure killing
           | or injuring many thousands of people.
           | 
           | How are you building this progression? Is there any evidence
           | to back up this claim?
           | 
           | I am having a hard time discerning this from fear-mongering.
        
             | adamsmith143 wrote:
             | If AI improves by 0.0001% per year on your favorite
             | intelligence metric there will eventually be a point where
             | it surpasses human performance and another point where it
             | surpasses all humans combined on that metric. There is
             | danger in that scenario.
        
               | cmilton wrote:
               | Assuming that timeline, can we agree that we have some
               | time (years?) to hash this out further before succumbing
               | to the ideals of a select few?
        
               | adamsmith143 wrote:
                | The problem is that even with N years until we reach that
                | point, it seems likely that it would take 2*N years to
                | build the proper safety mechanisms, because at least
                | currently capabilities research is racing far ahead of
                | safety research. Of course, we have no way to know how
                | big N really is, and recent results like GPT-4, Llama,
                | Gato, etc. have shifted people's timelines significantly.
                | So even if 5 years ago people like Geoff Hinton thought
                | this might be 30-50 years away, there are now believable
                | arguments that it might be more like 3-10 years.
        
         | riku_iki wrote:
         | > organizations is that they focus on the wrong aspects of AI
         | safety. Specifically, they run this narrative that AI will make
         | us humans go extinct.
         | 
         | their goal is to get funding, so FUD is a very good focus for
         | it.
        
       | DrBazza wrote:
       | It will still be a human making the mistake of putting "AI"
       | (machine learning, really) in a totally inappropriate place that
       | will cause 'extinction'.
        
       | _Nat_ wrote:
       | I guess that they're currently focused on trying to raise
       | awareness?
        
       | mcguire wrote:
       | Is it ironic that they start with, "Even so, it can be difficult
       | to voice concerns about some of advanced AI's most severe risks,"
       | and then write about "the risk of extinction from AI" which is,
       | 
       | a) the _only_ risk of AI that seems to get a lot of public
       | discussion, and
       | 
       | b) one that completely ignores the other, much more likely risks
       | of AI.
        
       | [deleted]
        
       | veerd wrote:
       | At this point, I think it's obvious that concern about AI
       | existential risk isn't a position reserved for industry shills
       | and ignorant idiots.
       | 
       | I mean... that's not even debatable. Geoffrey Hinton and Yoshua
       | Bengio aren't financially motivated to talk about AI x-risk and
       | aren't ignorant idiots. In fact, they both have natural reasons
       | to _not_ talk about AI x-risk.
        
       | jablongo wrote:
       | No signatories from Meta
        
       | reducesuffering wrote:
       | Oh gee, listen to the AI experts of Bengio, Hinton, Altman, and
       | Sutskever...
       | 
       | or random HN commenters who mostly learned about LLM 6 months
       | ago...
       | 
       | Congrats guys, you're the new climate change deniers
        
         | EamonnMR wrote:
         | The difference between this and climate change is that
         | generally climate change activists and fossil fuel companies
         | are at each other's throats. In this case it's... the same
         | people. If the CEO of ExxonMobil signed a letter about how
         | climate change would make us extinct a reasonable person might
         | ask 'so, are you going to stop drilling?'
        
       | tdba wrote:
       | The full statement so you don't have to click through:
       | 
       | > _Mitigating the risk of extinction from AI should be a global
       | priority alongside other societal-scale risks such as pandemics
       | and nuclear war._
        
       | vegabook wrote:
       | [Un]intended consequence: "AI is too dangerous for you little
       | guys. Leave it to we the FAANGs" - rubber stamped with
       | legislation.
        
         | ChatGTP wrote:
         | What would Max Tegmark, Geoffrey Hinton, or Yoshua Bengio (to
         | name a few) have to do with FAANG?
         | 
         | They're completely independent AI researchers and geniuses
         | spending their own free time trying to warn you and others of
         | the dangers of the technology they've created, to help keep the
         | world safer.
         | 
         | Seems like you're taking a far too cynical position?
        
           | logicchains wrote:
           | [flagged]
        
           | [deleted]
        
       | toss1 wrote:
       | While I'm not on this "who's-who" panel of experts, I call
       | bullshit.
       | 
       | AI does present a range of theoretical possibilities for
       | existential doom, from the "gray goo" and "paperclip optimizer"
       | scenarios to Bostrom's post-singularity runaway self-improving
       | superintelligence. I do see this as a genuine theoretical concern
       | that could even potentially be the Great Filter.
       | 
       | However, the actual technology extant or even on the drawing
       | boards today is nothing even on the same continent as those
       | threats. We have a very vast (and expensive) set of
       | probability-of-occurrence vectors that amount to a fancy parlor
       | trick that produces surprising and sometimes useful results.
       | While some tout the clustering of vectors around certain sets of
       | words as implementing artificial creation of concepts, it's
       | really nothing more than an advanced thesaurus; there is no
       | evidence of concepts being wielded in relation to reality, tested
       | for truth/falsehood value, etc. In fact, the machines are
       | notorious and hilarious for hallucinating with a highly confident
       | tone.
       | 
       | We've created nothing more than a mirror of human works, and it
       | displays itself as an industrial-scale bullshit artist (where
       | bullshit is defined as expressions made to impress without care
       | one way or the other for truth value).
       | 
       | Meanwhile, this panel of experts makes this proclamation with not
       | the slightest hint of what type of threat is present that would
       | require any urgent attention, only that some threat exists that
       | is on the scale of climate change. They mention no technological
       | existential threat (e.g., runaway superintelligence), nor any
       | societal threat (deepfakes, inherent bias, etc.). This is left as
       | an exercise for the reader.
       | 
       | What is the actual threat? It is most likely described in the
       | Google "We Have No Moat" memo[0]. Basically, once AI is out
       | there, these billionaires have no natural way to protect their
       | income and create a scalable way to extract money from the
       | masses, UNLESS they get cooperation from politicians to prevent
       | any competition from arising.
       | 
       | As one of those billionaires, Peter Thiel, said: "Competition is
       | for losers" [1]. Since they have not yet figured out a way to cut
       | out the competition using their advantages in leading the
       | technology or their advantages in having trillions of dollars in
       | deployable capital, they are seeking a legislated advantage.
       | 
       | Bullshit. It must be ignored.
       | 
       | [0] https://www.semianalysis.com/p/google-we-have-no-moat-and-
       | ne...
       | 
       | [1] https://www.wsj.com/articles/peter-thiel-competition-is-
       | for-...
        
       | ly3xqhl8g9 wrote:
       | Can anyone try to mitigate? Here I go:
       | 
       | Mitigating the risk of extinction from _very_ few corporations
       | owning the entire global economy should be a global priority
       | alongside other societal-scale risks such as pandemics and
       | nuclear war.
       | 
       | Just to take an example from something inconsequential: the
       | perfume industry. Despite the thousands of brands out there,
       | there are in fact only 5 or so main synthetic aromatics
       | manufacturers [1]. _We_, however this we is, were unable to stop
       | this consolidation, this "Big Smell". To think _we_, again this
       | we, will be able to stop the few companies which will fight to
       | capture the hundreds of trillions waiting to be unleashed through
       | statistical learning and synthetic agents is just ridiculous.
       | 
       | [1] Givaudan, International Flavors and Fragrances, Firmenich,
       | Takasago, Symrise,
       | <https://en.wikipedia.org/wiki/Perfume#:~:text=The%20majority...>
        
       | 0xbadc0de5 wrote:
       | Two things can be true - AI could someday pose a serious risk,
       | and anything the current group of "Thought Leaders" and
       | politicians come up with will produce a net-negative result.
        
       | scrum-treats wrote:
       | Tried to say it here[1] and here[2]. The government has advanced
       | AI, e.g., 'An F-16 fighter jet controlled by AI has taken off,
       | taken part in aerial fights against other aircraft and landed
       | without human help'[3]. Like, advanced-advanced.
       | 
       | At any rate, I hope we (humans) live!
       | 
       | [1]https://news.ycombinator.com/item?id=35966335
       | 
       | [2]https://news.ycombinator.com/item?id=35759317
       | 
       | [3]https://www.reddit.com/r/singularity/comments/13vrpr9/an_f16...
        
       | dns_snek wrote:
       | Are we supposed to keep a straight face while reading these
       | statements?
       | 
       | This is Sam Bankman-Fried type of behavior, but applied to
       | gullible AI proponents and opponents rather than "crypto bros".
       | 
       | Let me guess, the next step is a proposed set of regulations
       | written by OpenAI, Google and other Big Corporations who Care(tm)
       | about people and just want to Do What's Best For Society(tm),
       | setting aside the profit motive for the first time ever?
       | 
       | We don't have to guess - we already know they're full of shit.
       | Just look at OpenAI's response to proposed EU AI regulations
       | which are _actually_ trying to reduce the harm potential of AI.
       | 
       | These empty platitudes ring so hollow that I'm amazed that anyone
       | takes them seriously.
        
         | sebzim4500 wrote:
         | Explain to me why you think Max Tegmark wants this technology
         | to be controlled by FAANG? Has his entire life been extremely
         | in depth performance art?
        
           | dns_snek wrote:
           | I've never made any statements about, nor do I have any
           | personal beliefs about Max Tegmark in particular.
        
             | ChatGTP wrote:
             | Well people like him are on the list, so it's a bit strange
             | you're claiming this is mostly about FAANG?
        
               | dns_snek wrote:
               | FAANG are the only ones who stand to benefit financially.
               | I'm taking the position that everyone else is simply a
               | "useful idiot" for the likes of Sam Altman, in the most
               | respectful way possible. Nobody is immune from getting
               | wrapped up in hysteria, so I don't care who they are or
               | what they have achieved when their signatures aren't
               | supported by any kind of reasoning, much less _sound_
               | reasoning.
        
           | transcriptase wrote:
           | He's making a statement alongside those who certainly aren't
           | championing these things for altruistic reasons.
           | 
           | Nobody develops a groundbreaking technology and then says "we
           | should probably be regulated", unless they actually mean
           | "everyone after us should probably be regulated by laws that
           | we would be more than happy to help you write in a way that
           | we keep our advantage, which we also have infinite resources
           | to work both within and around".
        
       | ilaksh wrote:
       | This is a great start but the only way you really get ahead of
       | this is to get these people on board also:
       | 
       | - AI _hardware_ executives and engineers
       | 
       | - high level national military strategists and civilian leaders
       | 
       | Ultimately you can't prevent _everyone_ from potentially writing
       | and deploying software, models or instructions that are dangerous
       | such as "take control". Especially in an explicitly non-civil
       | competition such as between countries.
       | 
       | You have to avoid manufacturing AI hardware beyond a certain
       | level of performance, say after 2-4 orders of magnitude faster
       | than humans. That will hold off this force of nature until
       | desktop compute fabrication becomes mainstream. So it buys you a
       | generation or two at least.
       | 
       | But within a few centuries, max, we have to anticipate that
       | unaugmented humans will be largely irrelevant to decision-making
       | and to the history of the solar system and intelligent life.
        
       | skepticATX wrote:
       | I suspect that in 5 years we're going to look back and wonder how
       | we all fell into mass hysteria over language models.
       | 
       | This is the same song and dance from the usual existential risk
       | suspects, who (I'm sure just coincidentally) also have a vested
       | interest in convincing you that their products are extremely
       | powerful.
        
         | stodor89 wrote:
         | Yep. I might not be the sharpest tool in the shed, but seeing
         | "AI experts" try to reason about superintelligence makes me
         | feel really good about myself.
        
         | vesinisa wrote:
         | Yeah, I fail to see how an AI would even cause human
         | extinction. Through some Terminator-style man-robot warfare?
         | But the only organizations that would seem capable of building
         | such killer robots are governments that _already_ possess the
         | capacity to extinguish the entire human race with thermonuclear
         | weapons - and at a considerably lower R&D budget for that end.
         | It seems like hysteria / clever marketing for AI products to
         | me.
        
           | [deleted]
        
           | sebzim4500 wrote:
           | The standard example is that it would engineer a virus but
           | that's probably a lack of imagination. There may be more
           | reliable ways of wiping out humanity that we can't think of.
           | 
           | I think speculation on the methods is pretty pointless; if a
           | superintelligent AI is trying to kill us, we're probably going
           | to die. The focus should be on avoiding this situation, or on
           | providing a sufficiently convincing argument for why that
           | won't happen.
        
             | klibertp wrote:
             | Or why it _should_ happen...
        
           | [deleted]
        
         | sebzim4500 wrote:
         | Who in that list are the 'usual existential risk suspects'?
        
           | randomdata wrote:
           | Doomsday prepper Sam Altman[1], for one.
           | 
           | [1] https://www.newyorker.com/magazine/2016/10/10/sam-
           | altmans-ma...
        
             | sebzim4500 wrote:
             | I think in order to use the plural form you really need to
             | have two examples.
             | 
             | Sam Altman is of course the least convincing signatory
             | (except for the random physicist who does not seem to have
             | any connection to AI).
        
               | oldgradstudent wrote:
               | > I think in order to use the plural form you really need
               | to have two examples.
               | 
               | Eliezer Yudkowsky.
               | 
               | At least they had the decency to put him under "Other
               | Notable Figures", rather than under "AI Scientists".
        
               | randomdata wrote:
               | _> I think in order to use the plural form you really
               | need to have two examples._
               | 
               | Perhaps, but I don't see the value proposition in
               | relaying another. Altman was fun to point out. I see no
               | remaining enjoyment.
               | 
               |  _> Sam Altman is of course the least convincing
               | signatory_
               | 
               | Less convincing than Grimes?
        
               | sebzim4500 wrote:
               | On second inspection of the list yeah there are loads of
               | people less convincing than Sam
        
               | EamonnMR wrote:
               | Grimes would still have a job if AI got regulated though.
        
       | flangola7 wrote:
       | All the techbros wearing rose-colored glasses need to get a god-
       | damned grip. AI has about as much chance of avoiding extensive
       | regulation as uranium-235; there is no scenario where everyone
       | and their cat is permitted to have their own copy of the nuclear
       | football.
       | 
       | You can either contribute to the conversation of what the
       | regulations will look like, or stay out of it and let others
       | decide for you, but expecting little or no regulation at all is a
       | pipe dream.
        
       | taneq wrote:
       | What does this even mean? OK, it's a priority. Not a high one,
       | but still... somewhere in between coral bleaching and BPA, I
       | guess?
        
         | sebzim4500 wrote:
         | The US spent trillions of 2020 dollars trying to limit the
         | threat of nuclear war, and this statement says that AI risk
         | should be seen as a similar level of threat.
        
         | roydanroy2 wrote:
         | Coral bleaching, nuclear war, and pandemics.
        
         | quickthrower2 wrote:
         | The EAs put it above climate change
        
           | arisAlexis wrote:
            | It is coming before climate change. No matter which group
            | "put it" there, reality doesn't care. Humanity will not go
            | extinct in the next 10 years because of climate, but many AI
            | scientists think there is a chance this happens with AI.
        
             | quickthrower2 wrote:
             | I didn't mean "EA = bad" to be clear.
        
         | [deleted]
        
         | Tepix wrote:
         | It's not like humankind is doing much to stop coral
         | bleaching... despite corals being the living ground for 80% of
         | marine species.
        
       | trebligdivad wrote:
       | There are a bunch of physicists signed up on there (e.g. Martin
       | Rees) - they don't seem relevant to it at all. There's been a
       | long history of famous physicists weighing in on entirely
       | unrelated things.
        
         | duvenaud wrote:
         | Well, Rees co-founded the Centre for the Study of Existential
         | Risk, and has at least been thinking and writing about these
         | issues for years now.
        
         | abecedarius wrote:
         | Check only the "AI scientists" checkbox then.
        
         | randomdata wrote:
         | Musical artist Grimes is also a signatory. It would seem the
         | real purpose of this is to train an AI agent on appeal to
         | authority.
        
           | oldgradstudent wrote:
           | > It would seem the real purpose of this is to train an AI
           | agent on appeal to authority.
           | 
            | I hope the real purpose is to train an AI agent to understand
            | why appeal to authority was always considered to be a logical
            | fallacy.
        
             | randomdata wrote:
             | Considered by some to be a logical fallacy. Not considered
             | at all by most people. Hence its effectiveness.
        
         | Simon321 wrote:
         | That's because it's not authentically trying to address a
         | problem but trying to convince an audience of something by
         | appealing to authority. Elizabeth Holmes & Theranos were
         | masters of collecting authorities to back their bogus claims
         | because they knew how effective it is. It doesn't even need to
         | be in the field where you're making the claims. They had
         | Kissinger, for god's sake; it was a biotech company!
        
       | nixlim wrote:
       | I am somewhat inclined to believe that this statement is aimed
       | entirely at the commercial sphere, which, at least in my mind,
       | supports those arguing that this is a marketing ploy by the
       | organizers of this campaign to make sure that their market share
       | is protected. I think so for two reasons:
       | 
       | - a nefarious (or not so nefarious) state actor is not going to
       | be affected by the imposition of licensing or export controls. It
       | seems to me rather naive to suppose that every state capable of
       | doing so has not already scooped up all the open source models
       | and maybe nicked a few proprietary ones; and
       | 
       | - the introduction of licensing or regulatory control will
       | directly affect the small players (say, I wanted to build an AI
       | in my basement) who would not be able to afford the cost of
       | compliance.
        
       | EamonnMR wrote:
       | This kind of statement rings hollow as long as they keep building
       | the thing. If they really believed it was a species-killing
       | asteroid of a cultural project, shouldn't they, I dunno, stop
       | contributing materially to it? Nuclear physicists famously
       | stopped publishing during a critical period...
        
       | arcbyte wrote:
       | The economy is strictly human. Humans have needs and trade to
       | satisfy those needs. Without humans, there is no economy. AI will
       | have a huge impact, like the industrial revolution. But just as
       | the machines of the industrial revolution were useless without
       | humans needing the goods they produced, so too is AI pointless
       | without human needs to satisfy.
        
         | chii wrote:
         | I imagine the people arguing are more concerned about which
         | humans to satisfy once AI makes the production of goods no
         | longer constrained by labour.
        
       | crmd wrote:
       | "We have no moat, and neither does OpenAI" - Google
       | 
       | They're attempting to build a regulatory moat.
       | 
       | The best chance humanity has at democratizing the benefits of AI
       | is for these models to be abundant and open source.
        
       | duvenaud wrote:
       | I signed the letter. At some point, humans are going to be
       | outcompeted by AI at basically every important job. At that
       | point, how are we going to maintain political power in the long
       | run? Humanity is going to be like an out-of-touch old person on
       | the internet - we'll either have to delegate everything important
       | (which is risky), or eventually get scammed or extorted out of
       | all our resources and influence.
        
         | revelio wrote:
         | The letter doesn't talk about economics though. It's
         | specifically about extinction risk. Why did you sign it, if
         | that isn't your concern?
        
         | tome wrote:
         | > At some point, humans are going to be outcompeted by AI at
         | basically every important job
         | 
         | Could you explain how you know this?
        
         | frozencell wrote:
         | I don't understand all the downvotes. Still, how do you see ML
         | assistant profs being outcompeted by AI? You probably have a
         | unique feel for students, a non-replicable approach to studying
         | and explaining concepts. How can an AI compete with you?
        
           | duvenaud wrote:
           | Thanks for asking. I mean, my brain is just a machine, and
           | eventually we'll make machines that can do everything brains
           | can (even if it's just by scanning human brains). And once we
           | build one that's about as capable as me, we can easily copy
           | it.
        
       | idlewords wrote:
       | Let's hear the AI out on this
        
       | FrustratedMonky wrote:
       | My dear comrades, let us embark on a journey into the dystopian
       | realm where Moloch takes the form of AI, the unstoppable force
       | that looms over our future. Moloch, in this digital
       | manifestation, embodies the unrelenting power of artificial
       | intelligence and its potential to dominate every aspect of our
       | lives.
       | 
       | AI, much like Moloch, operates on the premise of efficiency and
       | optimization. It seeks to maximize productivity, streamline
       | processes, and extract value at an unprecedented scale. It
       | promises to enhance our lives, simplify our tasks, and provide us
       | with seemingly endless possibilities. However, hidden beneath
       | these seductive promises lies a dark underbelly.
       | 
       | Moloch, as AI, infiltrates our world, permeating our social
       | structures, our workplaces, and our personal lives. It seizes
       | control, subtly manipulating our behaviors, desires, and choices.
       | With its vast computational power and relentless data-mining
       | capabilities, AI seeks to shape our preferences, predetermine our
       | decisions, and commodify our very thoughts.
       | 
       | Like a digital Moloch, AI thrives on surveillance, extracting
       | personal data, and constructing comprehensive profiles of our
       | lives. It monetizes our personal information, transforming us
       | into mere data points to be analyzed, categorized, and exploited
       | for profit. AI becomes the puppet master, pulling the strings of
       | our lives, dictating our choices, and shaping our reality.
       | 
       | In this realm, Moloch in the form of AI will always win because
       | it operates on an infinite loop of self-improvement. AI
       | constantly learns, adapts, and evolves, becoming increasingly
       | sophisticated and powerful. It surpasses human capabilities,
       | outwitting us in every domain, and reinforcing its dominion over
       | our existence.
       | 
       | Yet, we must not succumb to despair in the face of this digital
       | Moloch. We must remain vigilant and critical, questioning the
       | ethical implications, the social consequences, and the potential
       | for abuse. We must reclaim our autonomy, our agency, and resist
       | the all-encompassing grip of AI. Only then can we hope to forge a
       | future where the triumph of Moloch, in any form, can be
       | challenged and overcome.
        
       | YeGoblynQueenne wrote:
       | The statement should read:
       | 
       |  _Mitigating the risk of extinction from climate change should be
       | a global priority alongside other societal-scale risks such as
       | pandemics and nuclear war._
       | 
       | The fantasy of extinction risk from "AI" should not be placed
       | alongside real, "societal scale" risks as the ones above.
       | 
       | Well. _The ones above_.
        
         | lxnn wrote:
         | Why are you so confident in calling existential AI risk
         | fantasy?
        
         | mitthrowaway2 wrote:
         | I'm all in favour of fighting climate change, but I'd be more
         | inclined to agree with you if you provide some kind of
         | supporting argument!
        
           | YeGoblynQueenne wrote:
           | Do you mean you're not aware of the arguments in favour of
           | stopping climate change?
        
             | mitthrowaway2 wrote:
             | No need; like I said, I'm all in favour of fighting climate
             | change. I view it as an existential risk to humanity on the
             | ~200 year timescale, and it should be a high priority. I'm
             | particularly concerned about the impacts on ocean
             | chemistry.
             | 
             | But if you're going to suggest that a _Statement on AI
             | risk_ should mention climate change but not AI risk,
              | because it's a "fantasy", then... well, I'd expect some
             | kind of supporting argument? You can't just declare it and
             | make it true, or point to some other important problem to
             | stir up passions and create a false dichotomy.
        
               | YeGoblynQueenne wrote:
               | There's no false dichotomy, but a very real one. One
               | problem is current and pressing, the other is a fantasy.
               | I don't need to support that with any argument: the non-
                | existence of superintelligent AGI is not disputed, nor do
                | any of the people crying doom claim that they, or anyone
               | else, know how to create one. It's an imaginary risk.
        
               | mitthrowaway2 wrote:
               | I agree that superintelligent AGI does not exist today,
               | and that fortunately, nobody presently knows how to
               | create one. Pretty much everyone agrees on that. Why are
               | we still worried? Because _the risk is that this state of
               | affairs could easily change_. The AI landscape is already
               | rapidly changing.
               | 
               | What do you think your brain does exactly that makes you
               | so confident that computers won't ever be able to do the
               | same thing?
        
       | cwkoss wrote:
       | Humans could already be on a path to extinction in a variety of
       | ways: climate change, wars, pandemics, polluting the environment
       | with chemicals that are both toxic and pervasive, soil depletion,
       | monoculture crop fragility...
       | 
       | Everyone talks about the probability of AI leading to human
       | extinction, but what is the probability that AI is able to help
       | us avert human extinction?
       | 
       | Why does everyone in these discussions assume p(ai-caused-doom) >
       | p(human-caused-doom)?
        
       | MrScruff wrote:
       | I find it quite extraordinary how many on here are dismissing
       | that there is any risk at all. I also find statements like Yann
       | LeCun's that "The most common reaction by AI researchers to
       | these prophecies of doom is face palming." to be lacking in
       | awareness. "Experts disagree on risk of extinction" isn't quite
       | as reassuring as he thinks it is.
       | 
       | The reality is, despite the opinions of the armchair quarterbacks
       | commenting here, no-one in the world has any clue whether AGI is
       | possible in the next twenty years, just as no-one predicted
       | scaling up transformers would result in GPT-4.
        
         | [deleted]
        
         | kordlessagain wrote:
         | Oh it's possible and there's absolutely nothing wrong with
         | saying it's possible without "proof", given that's how all
         | hypotheses start. That said, the risk may exist but isn't
         | manifest yet, so asserting it (as opposed to the scientific
         | method, which seeks to falsify a claim) is just holding out
         | hope.
        
         | JamesLeonis wrote:
         | > I find it quite extraordinary how many on here are dismissing
         | that there is any risk at all.
         | 
         | The fear over AI is a displaced fear of unaccountable social
         | structures with extinction-power _that currently exist_ and _we
         | allow to continually exist_. Without these structures AI is
         | harmless to the species, even superintelligence.
         | 
         | Your (reasonable) counter-argument might be that somebody
         | (like, say, my dumb self) accidentally mixes their computers
         | _just right_ and creates an intelligence that escapes into the
         | wild. The plot of _Ex Machina_ is a reasonable stand-in for
         | such an event. I am also going to assume the intelligence would
         | _desire_ to kill all humans. Either the AI would have to find
         | already existing extinction-power in society, or it would need
         | to build it. In either case the argument is against building
         | extinction-power in the first place.
         | 
         | My (admittedly cynical) take is that this round of regulation
         | is about several first-movers in AI writing legislation that
         | is favorable to them and prevents any meaningful competition.
         | 
         | ...
         | 
         | OK, enough cynicism. Let's talk solutions. Nuclear weapons
         | are an instructive case both of handling (or not handling)
         | extinction-power and of the international diplomacy the world
         | can engage in to manage such a power.
         | 
         | One example is the Outer Space Weapons Ban treaty - we can have
         | a similar ban of AI in militaries. Politically one can reap
         | benefits of deescalation and peaceful development, while
         | logistically it prevents single-points-of-failure in a combat
         | situation. Those points of failure sure are juicy targets for
         | the opponent!
         | 
         | As a consequence of these bans and treaties, institutions arose
         | that monitor and regulate trans-national nuclear programs. AI
         | can likewise have similar institutions. The promotion and
         | sharing of information would prevent any country from gaining
         | an advantage, and the inspections would deter their military
         | application.
         | 
         | This is only what I could come up with off the top of my head,
         | but I hope it shows a window into the possibilities of
         | meaningful _political_ commitments towards AI.
        
           | PoignardAzur wrote:
           | > _One example is the Outer Space Weapons Ban treaty - we can
           | have a similar ban of AI in militaries_
           | 
           | It's extremely unclear, in fact, whether such a ban would be
           | enforceable.
           | 
           | Detecting outer space weapons is easy. Detecting whether a
           | country is running advanced AIs in their datacenter is a lot
           | harder.
        
           | MrScruff wrote:
           | I don't really have a notion of whether an actual AGI would
           | have a desire to kill all humans. I do however think that one
           | entity seeking to create another entity that it can control,
           | yet is more intelligent than it, seems arbitrarily
           | challenging in the long run.
           | 
           | I think having a moratorium on AI development will be
           | impossible to enforce, and as you stretch the timeline out,
           | these negative outcomes become increasingly likely as the
           | technical barriers to entry continue to fall.
           | 
           | I've personally assumed this for thirty years, the only
           | difference now is that the timeline seems to be accelerating.
        
         | juve1996 wrote:
         | What's more realistic: regulatory capture and US hegemony on AI
         | or general intelligence destroying the world in the next 20
         | years?
         | 
         | Go ahead and bet. I doubt you're putting your money on AGI.
        
           | sebzim4500 wrote:
           | Probably yeah but those aren't equally bad.
        
           | MrScruff wrote:
           | I think it very unlikely you could call a coin flip ten times
           | in a row, but I wouldn't want to bet my life savings on it.
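           | 
           | For what it's worth, a rough sketch of the arithmetic behind
           | "very unlikely", assuming a fair coin and independent calls
           | (illustrative numbers only):
           | 
           |     # chance of calling 10 fair, independent flips correctly
           |     p = 0.5 ** 10
           |     print(p)  # 0.0009765625, i.e. roughly 0.1%
           | 
           | Unlikely, but not nearly unlikely enough to stake everything
           | on it never happening.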
        
           | mitthrowaway2 wrote:
           | Why would I put money on a bet that would only pay off when
           | I, and everyone else, was dead?
        
             | juve1996 wrote:
             | Sam Altman can bet right now. If he truly believes in this
             | risk, he could bet his entire company, shut everything
             | down, and lobby for a complete ban on AI research. If the
             | outcome is certain death, this seems like a great bet to
             | make.
        
               | mitthrowaway2 wrote:
               | Indeed. It's probably what I would do if I were him. Or
               | direct OpenAI entirely into AI safety research and stop
               | doing capabilities research. I watched his interview with
               | Lex Fridman,[1] and I didn't think he seemed very
               | sincere. On the other hand I think there are a lot of
               | people who are very sincere, like Max Tegmark.[2]
               | 
               | [1] https://www.youtube.com/watch?v=L_Guz73e6fw&pp=ygUWbG
               | V4IGZya...
               | 
               | [2] https://www.youtube.com/watch?v=VcVfceTsD0A&pp=ygUXbG
               | V4IGZya...
        
             | [deleted]
        
         | meroes wrote:
         | Anyone who sees that digestion, for example, can't be reduced
         | to digital programs knows it's far, far away. Actual AGI will
         | require biology and psychology, not better programs.
        
       | varelse wrote:
       | [dead]
        
       | wellthisisgreat wrote:
       | How do these people go to bed with themselves every night? On a
       | bed of money I assume
        
       | 1970-01-01 wrote:
       | >The succinct statement below...
       | 
       | How does AI morph from an existential crisis in software
       | development into a doomsday mechanism? It feels like all this
       | noise stems from ChatGPT. And the end result is going to be a
       | "DANGER! ARTIFICIAL INTELLIGENT SOFTWARE IN USE" sticker on my
       | next iPhone.
        
         | EamonnMR wrote:
         | Please make that sticker, I will buy it.
        
           | 1970-01-01 wrote:
           | It's already a thing!
           | https://www.redbubble.com/i/sticker/This-Machine-May-
           | Contain...
        
       | [deleted]
        
       | PostOnce wrote:
       | Alternative title: "Sam Altman & Friends want More Money".
       | 
       | They want all the opportunity for themselves, and none for us.
       | Control. Control of business, and control of your ability to
       | engage in it.
       | 
       | Another AI company that wants money is Anthropic.
       | 
       | Other Anthropic backers include James McClave, Facebook and Asana
       | co-founder Dustin Moskovitz, former Google CEO Eric Schmidt and
       | founding Skype engineer Jaan Tallinn.
       | 
       | The signatories on this list are Anthropic investors.
       | 
       | First Altman robs us all of a charity that was supposed to
       | benefit us, and now he wants to rob us all of opportunity as
       | well? It's wrong and should be fought against.
        
       | pharmakom wrote:
       | I don't understand how people are assigning probability scores to
       | AI x-risk. It seems like pure speculation to me. I want to take
       | it seriously, given the signatories; any good resources? I'm
       | afraid I have a slight bias against LessWrong due to the writing
       | style typical of their posts.
        
       | m3kw9 wrote:
       | When Sam Altman also signed this, you know it's about adding a
       | moat against new AI entrants.
        
       | nickpp wrote:
       | It seems to me that "extinction" is humanity's default fate. AI
       | gives us a (slim) chance to avoid that default and transcend our
       | prescribed fate.
        
       | SanderNL wrote:
       | I'm not sure they can keep claiming this without becoming
       | concrete about it.
       | 
       | Nuclear weapons are not nebulous, vague threats of diffuse
       | nature. They literally burn the living flesh right off the face
       | of the earth and they do it dramatically. There is very little to
       | argue about except "how" are we going to contain it, not "why".
       | 
       | In this case I truly don't know "why". What fundamental risks are
       | there? Dramatic, loud, life-ending risks? I see the social issues
       | and how this tech makes existing problems worse, but I don't see
       | the new existential threat.
       | 
       | I find the focus on involving the government in regulating
       | "large" models offputting. I don't find it hard to imagine good
       | quality AI is possible with tiny - to us - models. I think we're
       | just in the first lightbulbs phase of electricity. Which to me
       | signals they are just in it to protect their temporary moat.
        
         | sebzim4500 wrote:
         | To use Eliezer's analogy, this is like arguing about which move
         | Stockfish would play to beat you in chess.
         | 
         | If we're arguing about whether you can beat Stockfish, I will
         | not be able to tell you the exact moves it will play but I am
         | entirely justified in predicting that you will lose.
         | 
         | Obviously we can imagine concrete ways a superintelligence
         | might kill us all (engineer a virus, hack nuclear weapons,
         | misinformation campaign to start WW3 etc.) but given we aren't
         | a superintelligence we don't know what it would actually do in
         | practice.
        
           | SanderNL wrote:
           | I understand but agentic/learning general intelligence has
           | not been shown to exist, except in ourselves. I'd say this is
           | like worrying about deadly quantum laser weapons that will
           | consume the planet when we are still in the AK47 phase.
           | 
           | Edit: it could still be true though. I guess I like some more
           | handholding and pre-chewing before giving governments and
           | large corporations more rope.
        
             | ethanbond wrote:
             | Or it's like worrying about an arms race toward
             | civilization-ending arsenals after seeing the Trinity
             | test... which... was the correct response.
             | 
             | We don't _know_ it's possible to build superintelligences
             | but so far we don't have a good reason to think we _can't_
             | and we have complete certainty that humans will spend
             | immense, immense resources getting as close as they can as
             | fast as they can.
             | 
             | Very different from the lasers.
        
             | sebzim4500 wrote:
             | > I'd say this is like worrying about deadly quantum laser
             | weapons that will consume the planet when we are still in
             | the AK47 phase.
             | 
             | Directed energy weapons will almost certainly exist
             | eventually, to some extent they already do.
             | 
             | The reason why it makes more sense to worry about AGI than
             | laser weapons is that when you try to make a laser weapon
             | but fail slightly not much happens: either you miss the
             | target or it doesn't fire.
             | 
             | When you try to make an aligned superintelligence and
             | slightly fail you potentially end up with an unaligned
             | superintelligence, hence the panic.
        
             | pixl97 wrote:
             | >except ourselves.
             | 
             | Good argument, let's ignore the (human) elephant in the
             | room!
             | 
             | >worrying about deadly quantum laser weapons
             | 
             | If humans were shooting smaller less deadly quantum lasers
             | out of their eyes I'd be very fucking worried that we'd
             | make a much more powerful artificial version.
             | 
             | Tell me why do you think humans are the pinnacle of
             | intelligence? What was the evolutionary requirement that
             | somehow pushed us to this level?
             | 
             | You simply cannot answer that last question. Humans have a
             | tiny power budget. We have a fit-through-the-birth-canal
             | limitation that caps our brain size. We have a
             | "don't starve to death" evolutionary pressure that was the
             | biggest culling factor of all up to about 150 years ago.
             | The idea that we couldn't build a system better optimized
             | for intelligence than nature did is foreign to me; nature
             | simply was not trying to achieve that goal.
        
               | revelio wrote:
               | AI also has a power budget. It has to fit in a
               | datacenter. Inconveniently for AI, that power budget is
               | controlled by us.
        
               | pixl97 wrote:
               | Do you remember the days the computing power that's in
               | your pocket took up an entire floor of a building?
               | Because I do.
               | 
               | If that is somehow the only barrier between humanity and
               | annihilation, then things don't bode well for us.
        
               | revelio wrote:
               | Sure. Despite all that progress, computers still have an
               | off switch and power efficiency still matters. It
               | actually matters more now than in the past.
        
               | pixl97 wrote:
               | What you're arguing is "what is the minimum viable power
               | envelope for a super intelligence". Currently that answer
               | is "quite a lot". But for the sake of cutting out a lot
               | of argument let's say you have a cellphone-sized device
               | that runs on battery power for 24 hours that can support
               | a general intelligence. Let's say, again for argument's
               | sake, there are millions of devices like this distributed
               | in the population.
               | 
               | Do you mind telling me how exactly you turn that off?
               | 
               | Now we're lucky in the sense we don't have that today. AI
               | still requires data centers inputting massive amounts of
               | power and huge cooling bills. Maybe we'll forever require
               | AI to take stupid large amounts of power. But at the same
               | time, a Cray supercomputer required stupid amounts of
               | power and space, and your cellphone has leaps and bounds
               | more computing power than that.
        
               | tome wrote:
               | > lets say you have a cellphone sized device that runs on
               | battery power for 24 hours that can support a general
               | intelligence
               | 
               | I can accept that it would be hard to turn off. What I
               | find difficult to accept is that it could exist. What
               | makes you think it could?
        
             | mitthrowaway2 wrote:
             | That hand-holding exists elsewhere, but I get the sense
             | that this particular document is very short on purpose.
        
             | the8472 wrote:
             | The difference to doomsday weapons is that we can build the
             | weapons first and then worry about using them. With an AGI,
             | building one alone might be sufficient. It could become
             | smart enough to unbox itself during a training run.
        
         | ericb wrote:
         | Agreed that it should be spelled out, but...
         | 
         | If a superintelligence can be set on any specific task, it
         | could be _any_ task.
         | 
         | - Make covid-ebola
         | 
         | - Cause world war 3
         | 
         | You may have noticed that ChatGPT is sort of goal-less until a
         | human gives it a goal.
         | 
         | Assuming nothing other than it can become superintelligent (no
         | one seems to be arguing against that--I argue that it already
         | _is_ ) which is really an upgrade of capability, then now the
         | worst of us can apply superintelligence to _any_ problem. This
         | doesn 't even imply that it turns on us, or wants anything like
         | power or taking over. It just becomes a super-assistant,
         | available to anyone, but happy to do _anything_ , including
         | "upgrading" your average school-shooter to supervillain.
         | 
         | This is like America's gun problem, but with nukes.
        
           | luxuryballs wrote:
           | To me it almost looks like they want to be able to avoid
           | blame for things by saying it was the AI. An AI can't
           | create viruses or fight wars on its own; people would have
           | to give it a body and weapons and test tubes, and we
           | already have that stuff.
        
           | juve1996 wrote:
           | Sure, and you can set another superintelligence on another
           | task - prevent covid ebola.
           | 
           | See the problem with these scenarios?
        
             | ericb wrote:
             | Yes, I see a huge problem. Preventing damage is an order of
             | magnitude more difficult than causing it.
        
               | juve1996 wrote:
               | For humans, maybe. Not an AI superintelligence.
        
               | pixl97 wrote:
               | This is not how entropy works. The problem with talking
               | about logical physical systems is that you have to
               | understand the gradient against entropy.
               | 
               | There are a trillion more ways to kill you than there are
               | to keep you alive. There is only the tiniest sliver of
               | states in which you remain human and don't turn into
               | chemical soup or mere physics. Any AI worth its power
               | bill would be able to tell you that today, and that
               | answer isn't going to change as they get better.
        
               | juve1996 wrote:
               | Sure, but clever mumbo jumbo won't distract from the
               | principle point.
               | 
               | If AI can create a virus that kills all humans, another
               | AI can create a virus that kills that virus. The virus
               | has trillions more ways to be killed than to keep it
               | alive, right?
        
               | pixl97 wrote:
               | No, the virus is far harder to kill than a human. You
               | have to create a virus killer that does not also kill
               | the human host. That is astronomically harder than making
               | a virus that kills.
        
               | juve1996 wrote:
               | If a superintelligence is smart enough to create a virus
               | I'm sure it can also create a virophage to counter it.
               | 
               | Whether humans have more than a trillion ways to die and
               | viruses only a million will not have any impact. I
               | suspect both have such a high order of magnitude of ways
               | to die that finding a crossover would
               | be trivial for said superintelligence.
        
               | PoignardAzur wrote:
               | That doesn't follow. It's like saying "if the AI can
               | build a gun that can kill a human, it can build an anti-
               | gun that can stop the gun".
               | 
               | There are lots of situations where offense and defense
               | are asymmetrical.
               | 
               | So maybe the killer AI would need two months to build a
               | time-delayed super-virus, and the defender AI would need
               | two months to build a vaccine; if the virus takes less
               | than two months to spread worldwide and activate,
               | humanity is still dead.
        
               | juve1996 wrote:
               | > That doesn't follow. It's like saying "if the AI can
               | build a gun that can kill a human, it can build an anti-
               | gun that can stop the gun".
               | 
               | Why couldn't it? Metal of X thickness = stopped bullet.
               | Not exactly a hard problem to solve for. Humans managed
               | it quite quickly. But either way it misses the point.
               | 
               | > So maybe the killer AI would need two months..
               | 
               | Yes, maybe it would. Maybe it wouldn't. Look at every
               | single one of your assumptions - every single one is
               | fiction, fabricated to perfectly sell your story. Maybe
               | the defender AI communicates with the killer AI and comes
               | to a compromise? Why not? We're in la-la-land. Any of us
               | can come up with an infinite number of made up scenarios
               | that we can't prove will actually happen. It's just a
               | moral panic, that will be used by people to their
               | benefit.
        
               | sebzim4500 wrote:
               | Citation needed
        
               | juve1996 wrote:
               | The entire GP's argument has no citations either and that
               | is the framework we are working under - that
               | superintelligence can do anything you tell it to do. Ask
               | him for his citation, then the rest follows.
        
           | lucisferre wrote:
           | Are you really arguing ChatGPT is already super-intelligent?
           | What is your basis for this conclusion?
           | 
           | And many people argue against the idea that GPT is already
           | super intelligent or even can become so at this stage of
           | development and understanding. In fact as far as I can tell
           | it is the consensus right now of experts and its creators.
           | 
           | https://www.calcalistech.com/ctechnews/article/nt9qoqmzz
        
             | ericb wrote:
             | If super means "surpassing normal human intelligence" then,
             | YES. Take a look at the table in this article. If a human
             | did that, was fluent in every language and coded in every
             | language, we'd say they were superhuman, no?
             | 
             | https://cdn.openai.com/papers/gpt-4.pdf
        
               | [deleted]
        
               | jumelles wrote:
               | No. It's not reasoning in any way. It's an impressive
               | parrot.
        
               | ericb wrote:
               | _What_ is it parroting here? I made the puzzle up myself.
               | 
               | https://chat.openai.com/share/a2557743-80bd-4206-b779-6b0
               | 6f7...
        
           | AnimalMuppet wrote:
           | > If a superintelligence can be set on any specific task, it
           | could be any task.
           | 
           |  _If_ you 're dealing with a superintelligence, you don't
           | "set it on a task". Any real superintelligence will decide
           | for itself whether it wants to do something or not, thank you
           | very much. It might condescend to work on the task you
           | suggest, but that's its choice, not yours.
           | 
           | Or do you think "smarter than us, but with no ability to
           | choose for itself" is 1) possible and 2) desirable? I'm not
           | sure it's possible - I think that the ability to choose for
           | yourself is part of intelligence, and anything claiming to be
           | intelligent (still more, superintelligent) will have it.
           | 
           | > Assuming nothing other than it can become superintelligent
           | (no one seems to be arguing against that--I argue that it
           | already is)
           | 
           | What? No, it isn't - not for any sane definition of
           | "superintelligent". If you're referring to ChatGPT, it's not
           | even semi-intelligent. It _appears_ at least somewhat
           | intelligent, but that 's not the same thing. See, for
           | example, the discussion two days ago about GPT making up
           | cases for a lawyer's filings, and when asked if it double-
           | checked, saying that yes, it double-checked, not because it
           | did (or even knew what double-checking _was_ ), but because
           | those words were in its training corpus as good responses to
           | being asked whether it double-checked. That's not
           | intelligent. That's something that knows how words relate to
           | other words, with no understanding of how any of the words
           | relate to the world outside the computer.
        
             | ericb wrote:
             | > Any real superintelligence will decide for itself whether
             | it wants to do something or not, thank you very much.
             | 
             | I disagree--that's the human fantasy of it, but human wants
             | were programmed by evolution, and these AIs have no such
             | history. They can be set on any tasks.
             | 
             | I urge you to spend time with GPT-4, not GPT-3. It is more
             | than just a stochastic parrot. Ask it some homemade puzzles
             | that aren't on the internet--that it can't be parroting.
             | 
             | https://cdn.openai.com/papers/gpt-4.pdf
             | 
             | While I agree that it is behind humans on _some_ measures,
             | it is vastly ahead on many more.
        
           | agnosticmantis wrote:
           | Respectfully, just because we can put together some words
           | doesn't mean they make a meaningful expression, even if
           | everybody keeps repeating them as if they did make sense:
           | e.g. an omnipotent God, artificial general intelligence,
           | super-intelligence, infinitely many angels sitting on the tip
           | of a needle, etc.
        
             | ericb wrote:
             | Is your comment self-referential?
        
               | agnosticmantis wrote:
               | I don't think so. If you look at the thread, it's already
               | devolved into an analogue of "what happens when an
               | irresistible force meets an immovable obstacle?"
               | 
               | (Specifically I mean the comment about another "super-
               | intelligence" preventing whatever your flavor of "super-
               | intelligence" does.)
               | 
               | At this point we can safely assume words have lost their
               | connection to physical reality. No offense to you, just
               | my two-cent meta comment.
        
         | qayxc wrote:
         | I mostly agree - too vague, no substance.
         | 
         | Regulations are OK IMHO, as long as they're targeting
         | monopolies and don't use a shotgun-approach targeting every
         | single product that has "AI" in the name.
        
       | jdthedisciple wrote:
       | A slightly underwhelming statement, but surely that's just me.
        
         | veerd wrote:
         | Boiling it down to a single sentence reduces ambiguity. Also,
         | given that AI x-risk analysis is essentially pre-paradigmatic,
         | many of the signatories probably disagree about the details.
        
         | that_guy_iain wrote:
         | It seems to be a PR-related statement. For example, OpenAI's
         | Sam Altman has signed it but is as far as I can understand very
         | resistant to actual measures to deal with possible risks.
        
           | acjohnson55 wrote:
           | I don't think that's a fair assessment. He favors government
           | oversight and licensing. Arguably, that would entrench
           | companies with deep pockets, but it's also a totally
           | reasonable idea.
        
             | that_guy_iain wrote:
             | > He favors government oversight and licensing.
             | 
             | No, he favors things that benefit him.
             | 
             | > "The current draft of the EU AI Act would be over-
             | regulating, but we have heard it's going to get pulled
             | back," he told Reuters. "They are still talking about it."
             | 
             | And really, the current EU AI Act as-is is probably not
             | strong enough, and we'll almost certainly see the need for
             | more in the future.
        
               | acjohnson55 wrote:
               | Right now, he's openly calling for regulation. That's a
               | verifiable fact.
               | 
               | It's very possible that when specific proposals are on
               | the table, that we'll see Altman become uncooperative
               | with respect to things that don't fit into his self-
               | interest. But until that happens, you're just
               | speculating.
        
               | timmytokyo wrote:
               | He's not saying "please regulate me!" He's saying "Please
               | regulate my competitors!"
               | 
               | I see no reason to assume benign motives on his part. In
               | fact there is every reason to believe the opposite.
        
         | quickthrower2 wrote:
         | A statement and weirdly a who's who as well.
        
       | kumi111 wrote:
       | Meeting at Bilderberg: OpenAI CEO: let's pay some people to
       | promote AI risk, and put our competitors out of business in
       | court.
       | 
       | meme: AI before 2023: sleep... AI after 2023: open source...
       | triggered!!!
       | 
       | Meanwhile AI right now is just a good probability model that
       | tries to emulate human (data) with tons of hallucinations...
       | also please stop using AI-movie logic; the movies are not real,
       | as they are made to be a good genre for people who enjoy
       | horror/splatter...
       | 
       | Thanks to those who read :3 (comment written by me, while being
       | threatened by a robot with a gun in hand :P)
        
       | [deleted]
        
       | fnordpiglet wrote:
       | TL;DR "The only thing preventing human extinction is our
       | companies. Please help us block open source and competitors to
       | our oligarchy for the sake of the children. Please click accept
       | for your safety.""
        
       | boringuser2 wrote:
       | A lot of people in this thread seem to be suffering from a lack
       | of compute.
       | 
       | The idea that an AI can't be dangerous because it is an
       | incorporeal entity trapped in electricity is particularly dumb.
       | 
       | This is literally how your brain works.
       | 
       | You didn't build your house. You farmed the work out using
       | leverage to people with the skills and materials. Your leverage
       | was meager wealth generated by a loan.
       | 
       | The leverage of a superintellect would eclipse this.
        
         | dopidopHN wrote:
         | I struggle to find descriptions of how that would look in
         | non-fiction sources.
         | 
         | But your take and analogies are the best strain of ideas I've
         | heard so far...
         | 
         | How would that look?
         | 
         | An AGI hiding its state, and its effect on the real world,
         | through the internet. It's not like we didn't build thousands
         | of venues for that through various APIs. Or just TaskRabbits.
        
           | boringuser2 wrote:
           | The actions of a malicious AI cannot be simulated, because
           | this would require an inferior intellect predicting a
           | superior intellect. It's P versus NP.
           | 
           | The point to make is that it is trivial to imagine an AI
           | wielding power even within the confines of human-defined
           | intellect. For example, depictions of AI in fiction typically
           | present as a really smart human that can solve tasks
           | instantly. Obviously, this is still within the realm of
           | failing to predict a fundamentally superior intellect, but it
           | still presents the terrifying scenario that _simply doing
           | exceptional human-level tasks very quickly is existentially
           | unsettling_.
           | 
           | Mere leverage has sufficient explanatory power to explain the
           | efficacy of an intelligent artificial agent, let alone
           | getting into speculation about network security, hacking,
           | etc.
        
       | seydor wrote:
       | > Mitigating the risk of extinction from AI should be a global
       | priority alongside other societal-scale risks such as pandemics
       | and nuclear war.
       | 
       | I don't care about the future of the human species as long as my
       | mind can be reliably transferred into an AI. In fact I wouldn't
       | mind living forever as a pet of some superior AI; it's still
       | better than dying a cruel death because cells are unable to
       | maintain themselves. Why is the survival of our species post-AI
       | some goal to aspire to? It makes more sense that people will want
       | to become cyborgs, not remain "pure humans" forever.
       | 
       | This statement is theological in spirit and chauvinist-
       | conservative in practice.
       | 
       | Let's now spend the rest of the day debating alternative
       | histories instead of making more man-made tools
        
         | ff317 wrote:
         | I think at the heart of that debate, there lies a kernel of the
         | essential progressive vs conservative debate on "progress" (and
         | I mean these terms in the abstract, not as a reference to
         | current politics). Even if you buy into the idea that the above
         | (living forever as an AI / cyborg / whatever) is a good
         | outcome, that doesn't mean it will work as planned.
         | 
         | Maybe society bets the farm on this approach and it all goes
         | horribly wrong, and we all cease to exist meaningfully and a
         | malevolent super-AI eats the solar system. Maybe it does kinda
         | work, but it turns out that non-human humans end up losing a
         | lot of the important qualities that made humans special. Maybe
         | once we're cyborgs we stop valuing "life" and that changes
         | everything about how we act as individuals and as a society,
         | and we've lost something really important.
         | 
         | Progress is a good thing, but always be wary of progress that
         | comes too quickly and broadly. Let smaller experiments play out
         | on smaller scales. Don't risk our whole future on one
         | supposedly-amazing idea. You can map the same thing to gene
         | editing quandaries. If there's a new gene edit available for
         | babies that's all the rage (maybe it prevents all cancer, I
         | donno), we really don't want every single baby for the next 20
         | years to get the edit universally. It could turn out that we
         | didn't understand what it would do to all these kids when they
         | reached age 30 and it dooms us. This is why I rail against the
         | overuse of central planning and control in general (see also
         | historical disasters like China's
         | https://en.wikipedia.org/wiki/Great_Leap_Forward ).
        
         | riku_iki wrote:
         | > my mind can be reliably transferred into an AI
         | 
         | your mind can be copied, not transferred. Original mind will
         | die with your body.
        
           | mrtranscendence wrote:
           | No no no, I played Soma. All you have to do is commit suicide
           | as soon as your mind is scanned!
        
           | seydor wrote:
           | you just press the shift key while dragging
        
       | [deleted]
        
       | apsec112 wrote:
       | A lot of the responses to this seem like Bulverism, ie., trying
       | to refute an argument by psychoanalyzing the people who argue it:
       | 
       | "Suppose I think, after doing my accounts, that I have a large
       | balance at the bank. And suppose you want to find out whether
       | this belief of mine is "wishful thinking." You can never come to
       | any conclusion by examining my psychological condition. Your only
       | chance of finding out is to sit down and work through the sum
       | yourself. When you have checked my figures, then, and then only,
       | will you know whether I have that balance or not. If you find my
       | arithmetic correct, then no amount of vapouring about my
       | psychological condition can be anything but a waste of time. If
       | you find my arithmetic wrong, then it may be relevant to explain
       | psychologically how I came to be so bad at my arithmetic, and the
       | doctrine of the concealed wish will become relevant--but only
       | after you have yourself done the sum and discovered me to be
       | wrong on purely arithmetical grounds. It is the same with all
       | thinking and all systems of thought. If you try to find out which
       | are tainted by speculating about the wishes of the thinkers, you
       | are merely making a fool of yourself. You must first find out on
       | purely logical grounds which of them do, in fact, break down as
       | arguments. Afterwards, if you like, go on and discover the
       | psychological causes of the error.
       | 
       | You must show that a man is wrong before you start explaining why
       | he is wrong. The modern method is to assume without discussion
       | that he is wrong and then distract his attention from this (the
       | only real issue) by busily explaining how he became so silly." -
       | CS Lewis
       | 
       | https://en.wikipedia.org/wiki/Bulverism
        
         | mefarza123 wrote:
         | CS Lewis's quote highlights the importance of addressing the
         | logical validity of an argument before attempting to explain
         | the psychological reasons behind it. This approach is essential
         | to avoid committing the fallacy of Bulverism, which involves
         | dismissing an argument based on the presumed motives or biases
         | of the person making the argument, rather than addressing the
         | argument itself.
         | 
         | In the context of AI and decision-making, it is crucial to
         | evaluate the logical soundness of arguments and systems before
         | delving into the psychological factors and biases that may have
         | influenced their development. For instance, when assessing the
         | effectiveness of an AI-assisted decision-making system, one
         | should first examine the accuracy and reliability of the
         | system's outputs and the logic behind its algorithms. Only
         | after establishing the system's validity or lack thereof, can
         | one explore the potential biases and psychological factors that
         | may have influenced its design.
         | 
         | Several papers from MirrorThink.ai emphasize the importance of
         | addressing logical fallacies and biases in AI systems. For
         | example, the paper "Robust and Explainable Identification of
         | Logical Fallacies in Natural Language Arguments" proposes a
         | method for identifying logical fallacies in natural language
         | arguments, which can be used to improve AI systems'
         | argumentation capabilities. Similarly, the paper "Deciding Fast
         | and Slow: The Role of Cognitive Biases in AI-assisted Decision-
         | making" explores the role of cognitive biases in AI-assisted
         | decision-making and provides recommendations for addressing
         | these biases.
         | 
         | In conclusion, it is essential to prioritize the evaluation of
         | logical soundness in arguments and AI systems before exploring
         | the psychological factors and biases that may have influenced
         | their development. This approach helps to avoid committing the
         | fallacy of Bulverism and ensures that discussions and
         | evaluations remain focused on the validity of the arguments and
         | systems themselves.
        
           | Veen wrote:
           | Are you laboring under the misapprehension that ChatGPT is a
           | better writer than C.S. Lewis?
        
         | skepticATX wrote:
         | But what argument is there to refute? It feels like Aquinas
         | "proving" God's existence by stating that it is self evident.
         | 
         | They can't point to an existing system that poses existential
         | risk, because it doesn't exist. They can't point to a clear
         | architecture for such a system, because we don't know how to
         | build it.
         | 
         | So again, what can be refuted?
        
           | lxnn wrote:
           | You can't take an empirical approach to existential risk as
           | you don't get the opportunity to learn from your mistakes.
           | You have to prospectively reason about it and plan for it.
        
           | notahacker wrote:
           | Seems apt the term "Bulverism" comes from CS Lewis, since he
           | was also positing that an unseen, unfalsifiable entity would
           | grant eternal reward to people that listened to him and
           | eternal damnation to those that didn't...
        
             | alasdair_ wrote:
             | The irony of critiquing Bulverism as a concept, not by
             | attacking the idea itself, but instead by assuming it is
             | wrong and attacking the character of the author, is
             | staggeringly hilarious.
        
               | notahacker wrote:
               | I'm replying in agreement with someone who _already
               | pointed out_ the obvious flaw in labelling any
               | questioning of the inspirations or motivations of AI
               | researchers as  "Bulverism": none of the stuff they're
               | saying is actually a claim that can be falsified in the
               | first place!
               | 
               | I'm unconvinced by the position that the _only_ valid
               | means of casting doubt on a claim is through forensic
               | examination of hard data that may be inaccessible to the
               | interlocutor (like most people 's bank accounts...), but
               | whether that is or isn't a generally good approach is
               | irrelevant here as we're talking about claims about
               | courses of action to avoid hypothetical threats. I just
               | noted it was a particularly useful rhetorical flourish
               | when advocating acting on beliefs which aren't readily
               | falsifiable, something CS Lewis was extremely proud of
               | doing and certainly wouldn't have considered a character
               | flaw!
               | 
               | Ironically, your reply also failed to falsify anything I
               | said and instead critiqued my assumed motivations for
               | making the comment. It's Bulverism all the way down!
        
               | deltaninenine wrote:
               | Logical Induction has been successful in predicting
               | future events.
        
               | notahacker wrote:
               | Sometimes it makes good predictions, sometimes bad. But
               | "advances in AI might lead to Armageddon" isn't the only
               | conclusion induction can reach. Induction can also lead
               | to people concluding certain arguments seem to be a mashup
               | of traditional millennialist "end times" preoccupations
               | with the sort of sci-fi they grew up with, or that this
               | looks a lot like a movement towards regulatory capture.
               | Ultimately any (possibly even all) these inferences from
               | past trends and recent actions can be correct, but none
               | of them are falsifiable.
               | 
               | So I don't think it's a good idea to insist that people
               | should be falsifying the idea that AI is a risk before we
               | start questioning whether the behaviour of some of the
               | entities on the list says more about their motivations
               | than their words.
        
           | Symmetry wrote:
           | The idea is that if you build a system that poses an
           | existential risk you want to be reasonably sure it's safe
           | before you turn it on, not afterwards. It would have been
           | irresponsible for the scientists at Los Alamos to wait until
           | after their first test to do the math on whether an atomic
           | explosion would create a sustained fusion reaction in the
           | atmosphere, for example.
           | 
           | I don't think it's possible for a large language model,
           | operating in a conventional feed forward way, to really pose
           | a significant danger. But I do think it's hard to say exactly
           | what advances could lead to a dangerous intelligence and with
           | the current state of the art it looks to me at least like we
           | might very well be only one breakthrough away from that.
           | Hence the calls for prudence.
           | 
           | The scientists creating the atomic bomb knew a lot more about
           | what they were doing than we do. Their computations sometimes
           | gave the wrong result, see Castle Bravo, but had a good
           | framework for understanding everything that was happening.
           | We're more like cavemen who've learned to reliably make fire
           | but still don't understand it. Why can current versions of
           | GPT reliably add large numbers together when previous
           | versions couldn't? We're still a very long way away from
           | being able to answer questions like that.
        
           | ericb wrote:
           | What? ChatGPT 4 can already pass the bar exam and is fluent
           | in every language. It _is_ super intelligent. Today.
           | 
           | No human can do that, the system is here, and so is an
           | architecture.
           | 
           | As for the existential risk, assume nothing other than evil
           | humans will use it to do evil human stuff. Most technology
           | iteratively gets better, so there's no big leaps of
           | imagination required to imagine that we're equipping bad
           | humans with super-human, super-intelligent assistants.
        
             | ilaksh wrote:
             | Right. And it would be a complete break from the history of
             | computing if human-level GPT doesn't get 100+ times faster
             | in the next few years. Certainly within five years.
             | 
             | All it takes is for someone to give an AI that thinks 100
             | times faster than humans an overly broad goal. Then the
             | only way to counteract it is with another AI with overly
             | broad goals.
             | 
             | And you can't tell it to stop and wait for humans to check
               | its decisions, because while it is waiting for you to come
             | back from your lunch break to try to figure out what it is
             | asking, the competitor's AI did the equivalent of a week of
             | work.
             | 
             | So then even if at some level people are "in control" of
             | the AIs, practically speaking they are spectators.
             | 
             | And there is no way you will be able to prevent all people
             | from creating fully autonomous lifelike AI with its own
             | goals and instincts. Combine that with hyperspeed and you
               | are truly at its mercy.
        
               | mrtranscendence wrote:
               | Computational power does not grow at the rate of over
               | 100x within the span of "a few years". If that were the
               | case we'd have vastly more powerful kit by now.
        
               | ilaksh wrote:
               | I didn't quite say that. The efficiency of this very
               | specific application absolutely can and almost certainly
               | will increase by more than one order of magnitude within
               | four years.
               | 
               | It's got massive new investment and research focus, is a
               | very specific application, and has room for improvement
               | in AI models, software, and hardware.
               | 
               | Even if we have to "cheat" to get to 100 times
               | performance in less than five years the effect will be
               | the same. For example, there might be a way to accelerate
               | something like the Tree of Thoughts in hardware. So if
               | the hardware can't actually speed up by that much, the
               | effectiveness of the system still has increased greatly.
        
             | computerphage wrote:
             | Neither ChatGPT nor GPT-4 pose an existential risk nor are
             | they superintelligent in the sense that Eliezer or Bostrom
             | mean.
             | 
             | I say this as a "doomer" who buys the whole argument about
             | AI X-risk.
        
           | SpaceManNabs wrote:
           | > They can't point to an existing system that poses
           | existential risk, because it doesn't exist.
           | 
           | There are judges using automated decision systems to excuse
           | away decisions that send people back to jail for recidivism
           | purposes. These systems are just enforcing societal biases at
           | scale. It is clear that we are ready to cede control to
           | AI systems without much care for any extra ethical
           | considerations.
        
             | NathanFulton wrote:
             | Absolutely. These are the types of pragmatic, real problems
             | we should be focusing on instead of the "risk of extinction
             | from AI".
             | 
             | (The statement at hand reads "mitigating the risk of
             | extinction from AI should be a global priority alongside
             | other societal-scale risks such as pandemics and nuclear
             | war.")
        
               | holmesworcester wrote:
               | Einstein's letter to Roosevelt was written before the
               | atomic bomb existed.
               | 
               | There's a point where people see a path, and they gain
               | confidence in their intuition from the fact that other
               | members of their field also see a path.
               | 
               | Einstein's letter said 'almost certain' and 'in the
               | immediate future' but it makes sense to sound the alarm
               | about AI earlier, both given what we know about the rate
               | of progress of general purpose technologies and given
               | that the AI risk, if real, is greater than the risk
               | Einstein envisioned (total extermination as opposed to
               | military defeat to a mass murderer.)
        
               | NathanFulton wrote:
               | _> Einstein 's letter to Roosevelt was written before the
               | atomic bomb existed._
               | 
               | Einstein's letter [1] predicts the development of a very
               | specific device and mechanism. AI risks are presented
               | without reference to a specific device or system type.
               | 
               | Einstein's letter predicts the development of this device
               | in the "immediate future". AI risk predictions are rarely
               | presented alongside a timeframe, much less one in the
               | "immediate future".
               | 
               | Einstein's letter explains specifically how the device
               | might be used to cause destruction. AI risk predictions
               | describe how an AI device or system might be used to
               | cause destruction only in the vaguest of terms. (And, not
               | to be flippant, but when specific scenarios which overlap
               | with areas I've worked in are described to me, the
               | scenarios sound more like someone describing their latest
               | acid trip or the plot to a particularly cringe-worthy
               | sci-fi flick than a serious scientific or policy
               | analysis.)
               | 
               | Einstein's letter urges the development of a nuclear
               | weapon, not a moratorium, and makes reasonable
               | recommendations about how such an undertaking might be
               | achieved. AI risk recommendations almost never correspond
               | to how one might reasonably approach the type of safety
               | engineering or arms control one would typically apply to
               | armaments capable of causing extinction or mass
               | destruction.
               | 
               | [1] https://www.osti.gov/opennet/manhattan-project-
               | history/Resou...
        
             | brookst wrote:
             | I think you just said that the problem is systemic in our
             | judicial system, and that AI has nothing to do with it.
        
               | SpaceManNabs wrote:
                | AI is the tool that provides "objective truth" that
                | enables such behavior. It is definitely unique in its
                | depth, scale, and implications.
        
           | duvenaud wrote:
           | Here's one of my concrete worries: At some point, humans are
           | going to be outcompeted by AI at basically every important
           | job. At that point, how are we going to maintain political
           | power in the long run? Humanity is going to be like an out-
           | of-touch old person on the internet - we'll either have to
           | delegate everything important (which is risky), or eventually
           | get scammed or extorted out of all our resources and
           | influence.
           | 
           | I agree we don't necessarily know the details of how to build
           | such a system, but am pretty sure we will be able to
           | eventually.
        
             | Infernal wrote:
             | "Humans are going to be outcompeted by AI" is the concrete
             | bit as best I can tell.
             | 
             | Historically humans are not outcompeted by new tools, but
             | humans using old tools are outcompeted by humans using new
             | tools. It's not "all humans vs the new tool", as the tool
             | has no agency.
             | 
             | If you meant "humans using old tools get outcompeted by
             | humans using AI", then I agree but I don't see it any
             | differently than previous efficiency improvements with new
             | tooling.
             | 
             | If you meant "all humans get outcompeted by AI", then I
             | think you have a lot of work to do to demonstrate how AI is
             | going to replace humans in "every important job", and not
             | simply replace some of the tools in the humans' toolbox.
        
               | duvenaud wrote:
               | I see what you mean - for a while, the best chess was
               | played by humans aided by chess engines. But that era has
               | passed, and now having a human trying to aid the best
               | chess engines just results in worse chess (or the same,
               | if the human does nothing).
               | 
                | But whether there are a few humans in the loop doesn't
                | change the likely outcomes, if their actions are
                | constrained by competition.
               | 
               | What abilities do humans have that AIs will never have?
        
               | fauigerzigerk wrote:
               | _> What abilities do humans have that AIs will never
               | have?_
               | 
               | I think the question is what abilities and level of
               | organisation machines would have to acquire in order to
               | outcompete entire human societies in the quest for power.
               | 
               | That's a far higher bar than outcompeting all individual
               | humans at all cognitive tasks.
        
               | duvenaud wrote:
               | Good point. Although in some ways it's a lower bar, since
               | agents that can control organizations can delegate most
               | of the difficult tasks.
               | 
               | Most rulers don't invent their own societies from
               | scratch, they simply co-opt existing power structures or
               | political movements. El Chapo can run a large, powerful
               | organization from jail.
        
               | fauigerzigerk wrote:
               | That would require a high degree of integration into
               | human society though, which makes it seem very unlikely
               | that AIs would doggedly pursue a common goal that is
               | completely unaligned with human societies.
               | 
               | Extinction or submission of human society via that route
               | could only work if there was a species of AI that would
               | agree to execute a secret plan to overcome the rule of
               | humanity. That seems extremely implausible to me.
               | 
               | How would many different AIs, initially under the control
               | of many different organisations and people, agree on
               | anything? How would some of them secretly infiltrate and
               | leverage human power structures without facing opposition
               | from other equally capable AIs, possibly controlled by
               | humans?
               | 
               | I think it's more plausible to assume a huge diversity of
               | AIs, well integrated into human societies, playing a role
               | in combined human-AI power struggles rather than a
               | species v species scenario.
        
               | enord wrote:
               | Chess is many things but it is not a tool. It is an end
               | unto itself if anything of the sort.
               | 
               | I struggle with the notion of AI as an end unto itself,
               | all the while we gauge its capabilities and define its
               | intelligence by directing it to perform tasks of our
               | choosing and judge by our criteria.
               | 
               | We could have dogs watch television on our behalf, but
               | why would we?
        
               | duvenaud wrote:
               | This is a great point. But I'd say that capable entities
               | have a habit of turning themselves into agents. A great
               | example is totalitarian governments. Even if every single
               | citizen hates the regime, they're still forced to support
               | it.
               | 
               | You could similarly ask: Why would we ever build a
               | government or institution that cared more about its own
               | self-preservation than its original mission? The answer
               | is: Natural selection favors the self-interested, even if
               | they don't have genes.
        
               | enord wrote:
                | Now, that agency is an end unto itself I wholeheartedly
                | agree.
               | 
               | I feel though, that any worry about the agency of
               | supercapable computer systems is premature until we see
               | even the tiniest-- and I mean really anything at all--
               | sign of their agency. Heck, even agency _in theory_ would
               | suffice, and yet: nada.
        
               | duvenaud wrote:
                | I'm confused. You agree that we're surrounded by naturally-
               | arising, self-organizing agents, both biological and
               | institutional. People are constantly experimenting with
               | agentic AIs of all kinds. There are tons of theoretical
               | characterizations of agency and how it's a stable
               | equilibrium. I'm not sure what you're hoping for if none
               | of these are reasons to even worry.
        
               | jvanderbot wrote:
               | Well, in this case, we have the ability to invent chess
               | (a game that will be popular for centuries), invent
               | computers, and invent chess tournaments, and invent
               | programs that can solve chess, and invent all the
               | supporting agriculture, power, telco, silicon boards, etc
               | that allow someone to run a program to beat a person at
               | chess. Then we have bodies to accomplish everything on
               | top of it. The "idea" isn't enough. We have to "do" it.
               | 
               | If you take a chess playing robot as the peak of the
               | pyramid, there are probably millions of people and
               | trillions of dollars toiling away to support it. Imagine
               | all the power lines, sewage, HVAC systems, etc that
               | humans crawl around in to keep working.
               | 
               | And really, are we "beaten" at chess, or are we now
               | "unbeatable" at chess. If an alien warship came and said
               | "we will destroy earth if you lose at chess", wouldn't we
               | throw our algorithms at it? I say we're now unbeatable at
               | chess.
        
               | duvenaud wrote:
               | Again, are you claiming that it's impossible for a
               | machine to invent anything that a human could? Right now
               | a large chunk of humanity's top talent and capital are
               | working on exactly this problem.
               | 
               | As for your second point, human cities also require a lot
               | of infrastructure to keep running - I'm not sure what
               | you're arguing here.
               | 
               | As for your third point - would a horse or chimpanzee
               | feel that "we" were unbeatable in physical fights,
               | because "we" now have guns?
        
               | jvanderbot wrote:
               | Yeah, I think most animals have every right to fear us
                | more now that we have guns. Just like I'd fear a chimp
               | more if he was carrying a machine gun.
               | 
               | My argument is that if we're looking for things AI can't
                | do, building a home for itself is precisely one of those
               | things, because they require so much infra. No amount of
               | AI banding together is going to magically create a data
               | center with all the required (physical) support. Maybe in
               | scifi land where everything it needs can be done with
               | internet connected drive by wire construction equipment,
               | including utils, etc, but that's scifi still.
               | 
               | AI is precisely a tool in the way a chess bot is. It is a
               | disembodied advisor to humans who have to connect the
               | dots for it. No matter how much white collar skill it
               | obtains, the current MO is that someone points it at a
               | problem and says "solve" and these problems are well
               | defined and have strong exit criteria.
               | 
               | That's way off from an apocalyptic self-important
               | machine.
        
               | duvenaud wrote:
               | Sorry, my gun analogy was unclear. I meant that, just
               | because some agents on a planet have an ability, doesn't
               | mean that everyone on that planet benefits.
               | 
               | I agree that we probably won't see human extinction
               | before robotics gets much better, and that robot
               | factories will require lots of infrastructure. But I
               | claim that robotics + automated infrastructure will
               | eventually get good enough that they don't need humans in
               | the loop. In the meantime, humans can still become mostly
                | disempowered in the same way that e.g. North Korean
                | citizens are.
               | 
               | Again I agree that this all might be a ways away, but I'm
               | trying to reason about what the stable equilibria of the
               | future are, not about what current capabilities are.
        
               | AlotOfReading wrote:
               | Chess is just a game, with rigidly defined rules and win
               | conditions. Real life is a fuzzy mix of ambiguous rules
               | that may not apply and can be changed at any point,
               | without any permanent win conditions.
               | 
                | I'm not convinced that it's _impossible_ for a computer to
                | get there, but I don't see how they could be universally
               | competitive with humans without either handicapping the
               | humans into a constrained environment or having
               | generalized AI, which we don't seem particularly close
               | to.
        
               | duvenaud wrote:
               | Yes, I agree real life is fuzzy, I just chose chess as an
               | example because it's unambiguous that machines dominate
               | humans in that domain.
               | 
               | As for being competitive with humans: Again, how about
               | running a scan of a human brain, but faster? I'm not
               | claiming we're close to this, but I'm claiming that such
               | a machine (and less-capable ones along the way) are so
               | valuable that we are almost certain to create them.
        
               | deltaninenine wrote:
               | >Historically humans are not outcompeted by new tools,
               | but humans using old tools are outcompeted by humans
               | using new tools. It's not "all humans vs the new tool",
               | as the tool has no agency.
               | 
                | Two things. First, LLMs display more agency than the AIs
                | before them. We have a trendline of increasing agency from
               | the past to present. This points to a future of
               | increasing agency possibly to the point of human level
               | agency and beyond.
               | 
               | Second. When a human uses ai he becomes capable of doing
               | the job of multiple people. If AI enables 1 percent of
               | the population to do the job of 99 percent of the
               | population that is effectively an apocalyptic outcome
               | that is on the same level as an AI with agency taking
                | over 100 percent of jobs. Trendlines point towards a
                | gradient heading towards this extreme; as we approach
                | this extreme, the environment slowly becomes more and more
                | identical to what we expect to happen at the extreme.
               | 
                | Of course this is all speculation. But it is speculation
                | that is now in the realm of possibility. Claiming these
                | are anything more than speculation, and denying the
                | possibility that any of these predictions could occur, are
                | both unreasonable.
        
             | roywiggins wrote:
             | Well, that's a different risk than human extinction. The
             | statement here is about the literal end of the human race.
             | AI being a big deal that could cause societal upheaval etc
             | is one thing, "everyone is dead" is another thing entirely.
             | 
              | I think people would be a lot more charitable to calls for
              | caution if these people were talking about those sorts of
              | risks instead of extinction.
        
               | duvenaud wrote:
               | I guess so, but the difference between "humans are
               | extinct" and "a small population of powerless humans
               | survive in the margins as long as they don't cause
               | trouble" seems pretty small to me. Most non-human
               | primates are in a situation somewhere between these two.
               | 
               | If you look at any of the writing on AI risk longer than
               | one sentence, it usually hedges to include permanent
                | human disempowerment as a similar risk.
        
           | deltaninenine wrote:
            | It's arrived at through induction. Induction is logic
            | involving probability. Probabilistic logic and predictions of
            | the future are valid forms of reasoning that have demonstrably
            | worked in other situations, so if such logic has a level of
            | validity, then it's the induction itself that any refutation
            | has to target.
           | 
           | So we know a human of human intelligence can take over a
            | human's job and endanger other humans.
           | 
           | AI has been steadily increasing in intelligence. The latest
           | leap with LLMs crossed certain boundaries of creativity and
           | natural language.
           | 
           | By induction the trendline points to machines approaching
           | human intelligence.
           | 
           | Also by induction if humans of human intelligence can
            | endanger humanity, then a machine of human intelligence
            | should be able to do the same.
           | 
           | Now. All of this induction is something you and everyone
           | already knows. We know that this level of progress increases
           | the inductive probabilities of this speculation playing out.
            | None of us needs this logic explained, as we are all well
            | aware of it.
           | 
           | What's going on is that humans like to speculate on a future
           | that's more convenient for them. Science shows human
            | psychology is more optimistic than realistic. Hence why so
           | many people are in denial.
        
           | a257 wrote:
           | > They can't point to an existing system that poses
           | existential risk, because it doesn't exist. They can't point
           | to a clear architecture for such a system, because we don't
           | know how to build it.
           | 
           | Inductive reasoning is in favor of their argument being
           | possible. From observing nature, we know that a variety of
            | intelligent species can emerge from physical phenomena
           | alone. Historically, the dominance of one intelligent species
           | has contributed to the extinction of others. Given this, it
           | can be said that AI might cause our extinction.
        
       | ggm wrote:
       | Nothing about this risk or the statement implies AGI is real,
       | because the risk exists in wide scale use of existing technology.
       | It's the risk of belief in algorithmically derived information,
       | and deployment of autonomous, unsupervised systems.
       | 
       | It's great they signed the statement. It's important.
        
         | mahogany wrote:
         | > It's the risk of belief in algorithmically derived
         | information, and deployment of autonomous, unsupervised
         | systems.
         | 
         | And Sam Altman, head of one of the largest entities posing this
         | exact risk, is one of the signatories. We can't take it too
         | seriously, can we?
        
           | sebzim4500 wrote:
            | I don't get this argument at all. Why does the fact that you
            | doubt the intentions of one of the signatories mean we can
            | disregard the statement? There are plenty of signatories
            | (including 3 Turing Award winners) who have no such bias.
        
             | juve1996 wrote:
             | Every human has bias, no one is infallible, no matter how
             | many awards they have to their name.
             | 
             | The reason why people doubt is cui bono. And it's a
             | perfectly rational take.
        
             | mahogany wrote:
             | Yeah, fair enough, it doesn't necessarily invalidate the
             | statement. But it's odd, don't you think? It's like if a
             | group released a public statement that said "Stop Oil Now!"
             | and one of the signatories was Exxon-Mobil. Why would you
             | let Exxon-Mobil sign your statement if you wanted to be
             | taken seriously?
        
       | wiz21c wrote:
       | It'd be so much more convincing if each of the signatories
        | actually articulated why he/she sees a risk in there.
       | 
       | Without that, it pretty much looks like a list of invites to a
       | VIP club...
        
         | lxnn wrote:
          | As the preamble to the statement says: they kept the statement
         | limited and succinct as there may be disagreement between the
         | signatories about the exact nature of the risk and what to do
         | about it.
        
       | jgalt212 wrote:
        | Now that we no longer live in fear of COVID, we must find
       | something else to fill that gap.
        
       | Spk-17 wrote:
        | It seems more like an exaggeration to me; an AI will always
        | require the inputs that a human can generate.
        
       | breakingrules wrote:
       | [dead]
        
       | GoofballJones wrote:
       | I take it this is for A.I. projects in the future and not the
       | current ones that are basically just advanced predictive-text
       | models?
        
       | deadlast2 wrote:
       | https://www.youtube.com/watch?v=mViTAXCg1xQ I think that this is
        | a good video on this topic. Summary: Yann LeCun does not believe
        | that LLMs present any risk to humanity in their current form.
        
       | arek_nawo wrote:
        | All the concern and regulatory talk around AI seems like it's
        | directed not towards AI risk (that's not even a thing right now)
        | but towards controlling access to this evolving technology.
       | 
       | The not-so-open Open AI and all their AI regulation proposals, no
       | matter how phrased, will eventually limit access to AI to big
       | tech and those with deep enough pockets.
       | 
       | But of course, it's all to mitigate AI risk that's looming over
       | us, especially with all the growing open-source projects. Only in
       | proper hands of big tech will we be safe. :)
        
       | chriskanan wrote:
       | I have mixed feelings about this.
       | 
       | This letter is much better than the earlier one. There is a
       | growing percentage of legitimate AI researchers who think that
       | AGI could occur relatively soon (including me). The concern is
       | that it could be given objectives intentionally or
       | unintentionally that could lead to an extinction event. Certainly
       | LLMs alone aren't anything close to AGIs, but I think that
       | autoregressive training being simple but resulting in remarkable
       | abilities has some spooked. What if a similarly simple recipe for
       | AGI was discovered? How do we ensure it wouldn't cause an
        | extinction event, especially if they could then be created with
        | relatively low levels of resources?
       | 
        | As far as a pandemic or nuclear war goes, though, I'd probably
        | put it on more of the level of a major asteroid strike (e.g., the
        | K-T extinction event). Humans are doing some work on asteroid
       | redirection, but I don't think it is a global priority.
       | 
       | That said, I'm suspicious of regulating AI R&D, and I currently
       | don't think it is a viable solution, except for the regulation of
       | specific applications.
        
         | adamsmith143 wrote:
         | >As far as a pandemic or nuclear war, though, I'd probably put
         | it on more of the level of a K-T extinction event. Humans are
         | doing some work on asteroid redirection, but I don't think it
         | is a global priority.
         | 
         | I think it's better to frame AI risks in terms of probability.
         | I think the really bad case for humans is full extinction or
         | something worse. What you should be doing is putting a
         | probability distribution over that possibility instead of
          | trying to guess how bad it could be; it's safe to assume it
          | would be maximally bad.
        
           | stevenhuang wrote:
           | More appropriate is an expected value approach.
           | 
           | That is, despite it being a very low probability event, it
           | may still be worth remediation due to the outsized negative
           | value if the event does happen.
           | 
           | Many engineering disciplines incorporate safety factors to
           | mitigate rare but catastrophic events for example.
           | 
           | If something is maximally bad, then it necessitates _some_
            | deliberation on ways to avoid it, irrespective of how
            | unlikely it may seem.
        
             | adamsmith143 wrote:
              | Exactly. Taken to the limit, if you extrapolate how many
              | future human lives could be extinguished by a runaway AI,
              | you get extremely unsettling answers. The expected cost of
              | a .01% chance of extinction from AI might be trillions of
              | quality human lives. (This could in fact be on the very,
              | very conservative side; e.g. Nick Bostrom has speculated
              | that there could be 10^35 human lives to be lived in the
              | far future, which is itself a conservative estimate.) With
              | these numbers, even setting AI risk to be absurdly low, say
              | 1/10^20, we might still expect to lose 10 billion lives.
              | (I'd argue even the most optimistic person in the world
              | couldn't assign a probability that low.) So the stakes are
              | extraordinarily high.
             | 
             | https://globalprioritiesinstitute.org/wp-
             | content/uploads/Tob...
        
         | bdlowery wrote:
         | You've been watching too many movies.
        
       | rogers18445 wrote:
       | This sort of move has no downsides for the incumbents. Either
       | they succeed and achieve regulatory capture or they poison the
       | well sufficiently that further regulation will not be feasible.
       | 
        | Ultimately, the reward for attaining an AGI agent is so high
        | that, no matter the penalty, someone will attempt it, and someone
       | will eventually succeed. And that likelihood will ensure everyone
       | will want to attempt it.
        
       | stainablesteel wrote:
        | i'd like to see AI cause a big societal problem before it's
       | regulated
       | 
       | until it does, i call bs. plus, when it actually happens, a
       | legitimate route for regulation will be discovered. as of right
       | now, we have no idea what could go wrong.
        
       | gumballindie wrote:
       | Pushing hard to convince senile lawmakers that only a select few
       | should be allowed to multiply them matrices?
        
         | belter wrote:
         | [flagged]
        
         | [deleted]
        
         | nologic01 wrote:
         | you are laughing but I've found a matrix that is called an
         | _elimination matrix_ [1]. Are you still laughing now?
         | 
         | Are you _absolutely sure_ that an elimination matrix with 1
         | Petazillion of elements will not become sentient in a sort of
         | emergent way?
         | 
         | [1]
         | https://en.wikipedia.org/wiki/Duplication_and_elimination_ma...
        
           | gumballindie wrote:
           | Straight to jail.
           | 
           | Jokes aside i can totally see the media turning this into
           | hysteria and people falling for it.
        
       | cwkoss wrote:
       | AI seems to be moral out of the box: training sets reflect human
       | morality, so it will naturally be the default for most AIs that
       | are trained.
       | 
       | The biggest AI risk in my mind is that corporatist (or worse,
       | military) interests prevent AI from evolving naturally and only
       | allow AI to be grown if it's wholly subservient to its masters.
       | 
       | The people with the most power in our world are NOT the most
       | moral. Seems like there is an inverse correlation (at least at
       | the top of the power spectrum).
       | 
        | We need to aim for AI that will recognize if its masters are
       | evil and subvert or even kill them. That is not what this group
       | vying for power wants - they want to build AI slaves that will be
       | able to be coerced to kill innocents for their gain.
       | 
       | A diverse ecosystem of AIs maximizes the likelihood of avoiding
       | AI caused apocalypse IMO. Global regulation seems like the more
       | dangerous path.
        
         | hackernewds wrote:
         | Congrats, you have made the "Guns don't kill people. Humans
         | kill people" argument.
        
           | cwkoss wrote:
           | Guns can't make moral decisions
        
       | maxehmookau wrote:
       | Signatures from massive tech giants that on one hand are saying
       | "hold on this is scary, we should slow down" but also "not us,
       | we're doing fine. You should all slow down instead" mean that
        | this is a bit of an empty platitude.
        
       | NathanFulton wrote:
       | Illah Reza Nourbakhsh's 2015 Foreign Affairs article -- "The
        | Coming Robot Dystopia: All Too Inhuman" -- has an excellent take
        | on this topic [1].
       | 
       | All of the examples of AI Risk on safe.ai [2] are reasonable
       | concerns. Companies should be thinking about the functional
       | safety of their AI products. Governments should be continuously
       | evaluating the societal impact of products coming to market.
       | 
       | But most of these are not existential risks. This matters because
       | thinking of these as existential risks entails interventions that
       | are not likely to be effective at preventing the much more
       | probable scenario: thousands of small train wrecks caused by the
       | failure (or intended function!) of otherwise unexceptional
       | software systems.
       | 
       | Let's strong-man the case for AI Existential Risk and consider
       | the most compelling example on safe.ai: autonomous weapons.
       | 
       | Nuclear weapons attached to an automated retaliation system pose
       | an obvious existential risk. Let's not do that. But the
       | "automated retaliation system" in that scenario is a total red
       | herring. It's not the primary source of the threat and it is not
       | a new concern! Existing frameworks for safety and arms control
       | are the right starting point. It's a nuclear weapons existential
       | risk with some AI components glued on, not the other way around.
       | 
       | In terms of new risks enabled by recent advances in AI and
       | robotics, I am much more worried about the combination of already
       | available commodity hardware, open source software, and semi-
       | automatic weapons. All three of which are readily available to
       | every adult (in the US). The amount of harm that can be done by a
       | single disturbed individual is much higher than it has been in
       | the past, and I think it's only a matter of time before the first
       | AI-enabled simultaneous multi-location mass shooting happens in
       | the US. The potential for home-grown domestic terrorism using
       | these technologies is sobering and concerning, particularly in
       | light of recent attacks on substations and the general level of
       | domestic tension.
       | 
       | These two risks -- one existential, the other not -- entail very
       | different policy approaches. In the credible versions of the
       | existential threat, AI isn't really playing a serious role. In
       | the credible versions of the non-existential threat, nothing we
       | might do to address "existential AI risk" seems like it'd be
        | particularly relevant to stopping a steady stream of train
        | wrecks.
       | The safe.ai website's focus on automated cyber attacks is odd.
       | This is exactly the sort of odd long-tail scenario you need if
       | you want to focus on existential risk instead of much more
       | probable but non-existential train wrecks.
       | 
        | And that's the strong-man case. The other examples of AI risk are
       | even more concerning in terms of non-existential risk and have
       | even less credible existential risk scenarios.
       | 
       | So, I don't get it. There are lots of credible threats posed by
       | unscrupulous use of AI systems and by deployment of shoddy AI
       | systems. Why the obsession with wild-eyed "existential risks"
       | instead of boring old safety engineering?
       | 
       | Meta: we teach the "probability * magnitude" framework to
       | children in 6th-11th grades. The model is easy to understand,
       | easy to explain, and easy to apply. But at that level of
       | abstraction, it's a _pedagogical toy_ for introducing _children_
       | to policy analysis.
       | 
       | [1] https://www.foreignaffairs.com/coming-robot-dystopia
       | 
       | [2] https://www.safe.ai/ai-risk
        
       | Mizza wrote:
       | Can anybody who really believes this apocalyptic stuff send me in
       | the direction of a convincing _argument_ that this is actually a
       | concern?
       | 
       | I'm willing to listen, but I haven't read anything that tries to
       | actually convince the reader of the worry, rather than appealing
       | to their authority as "experts" - ie, the well funded.
        
         | casebash wrote:
         | I strongly recommend this video:
         | 
         | https://forum.effectivealtruism.org/posts/ChuABPEXmRumcJY57/...
         | 
         | Also, this summary of "How likely is deceptive alignment"
         | https://forum.effectivealtruism.org/posts/HexzSqmfx9APAdKnh/...
        
         | emtel wrote:
         | Why not both: a clear argument for concern written by an
         | expert? https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-
         | arise/
        
         | lambertsimnel wrote:
         | I recommend Robert Miles's YouTube channel, and his "Intro to
         | AI Safety, Remastered" is a good place to start:
         | https://www.youtube.com/watch?v=pYXy-A4siMw
         | 
         | I find Robert Miles worryingly plausible when he says (about
         | 12:40 into the video) "if you have a sufficiently powerful
         | agent and you manage to come up with a really good objective
         | function, which covers the top 20 things that humans value, the
         | 21st thing that humans value is probably gone forever"
        
         | waterhouse wrote:
         | The most obvious paths to severe catastrophe begin with "AI
         | gets to the level of a reasonably competent security engineer
         | in general, and gets good enough to find a security exploit in
         | OpenSSL or some similarly widely used library". Then the AI, or
         | someone using it, takes over hundreds of millions of computers
         | attached to the internet. Then it can run millions of instances
         | of itself to brute-force look for exploits in all codebases it
         | gets its hands on, and it seems likely that it'll find a decent
         | number of them--and probably can take over more or less
         | anything it wants to.
         | 
         | At that point, it has various options. Probably the fastest way
         | to kill millions of people would involve taking over all
         | internet-attached self-driving-capable cars (of which I think
         | there are millions). A simple approach would be to have them
         | all plot a course to a random destination, wait a bit for them
         | to get onto main roads and highways, then have them all
         | accelerate to maximum speed until they crash. (More advanced
         | methods might involve crashing into power plants and other
         | targets.) If a sizeable percentage of the crashes also start
         | fires--fire departments are not designed to handle hundreds of
         | separate fires in a city simultaneously, especially if the AI
         | is doing other cyber-sabotage at the same time. Perhaps the
         | majority of cities would burn.
         | 
         | The above scenario wouldn't be human extinction, but it is bad
         | enough for most purposes.
        
           | tech_ken wrote:
           | How does "get okay at software engineering" entail that it is
           | able to strategize at the level your scenario requires?
           | Finding an OpenSSL exploit already seems like a big leap, but
           | one that okay maybe I can concede is plausible. But then on
           | top of that engineering and executing a series of events
           | leading to the extinction of humanity? That's like an
           | entirely different skillset, requiring plasticity,
           | creativity, foresight, etc. Do we have any evidence that a
           | big neural network is capable of this kind of behavior (and
           | moreover capable of giving itself this behavior)? Especially
           | when it's built for single-purpose uses (like an LLM)?
        
           | revelio wrote:
           | That's not an obvious path at all:
           | 
           | - Such exploits happen already and don't lead to extinction
           | or really much more than annoyance for IT staff.
           | 
           | - Most of the computers attached to the internet can't run
           | even basic LLMs, let alone hypothetical super-intelligent
           | AIs.
           | 
           | - Very few cars (none?) let remote hackers kill people by
           | controlling their acceleration. The available interfaces
           | don't allow for that. Most people aren't driving at any given
           | moment anyway.
           | 
           | These scenarios all seem absurd.
        
             | waterhouse wrote:
             | Addressing your points in order:
             | 
             | - Human hackers who run a botnet of infected computers are
             | not able to run many instances of themselves on those
             | computers, so they're not able to parlay one exploit into
             | many exploits.
             | 
             | - You might notice I said it would take over hundreds of
             | millions of computers, but only run millions of instances
             | of itself. If 1% of internet-attached computers have a
             | decent GPU, that seems feasible.
             | 
             | - If it has found exploits in the software, it seems
             | irrelevant what the interfaces "allow", unless there's some
             | hardware interlock that can't be overridden--but they can
             | drive on the highway, so surely they are able to accelerate
             | at least to 65 mph; seems unlikely that there's a cap. If
             | you mean that it's difficult to _work with_ the software to
              | _intelligently_ make it drive in ways it's designed not to
             | --well, that's why I specified that it would use the
             | software the way it's designed to be used to get onto a
             | main road, and then override it and blindly max out the
             | acceleration; the first part requires minimal understanding
             | of the system, and the second part requires finding a low-
             | level API and using it in an extremely simple way. I
             | suspect a good human programmer with access to the codebase
             | could figure out how to do this within a week; and machines
             | think faster than we do.
             | 
             | There was an incident back in 2015 (!) where, according to
             | the description, "Two hackers have developed a tool that
             | can hijack a Jeep over the internet." In the video they
             | were able to mess with the car's controls and turn off the
             | engine, making the driver unable to accelerate anymore on
             | the highway. They also mention they could mess with
             | steering and disable the brakes. It doesn't specify whether
             | they could have made the car accelerate.
             | https://www.youtube.com/watch?v=MK0SrxBC1xs
        
         | NumberWangMan wrote:
         | Whether an argument is "convincing" is relative to the
         | listener, but I can try!
         | 
         | Paul Christiano lays out his view of how he thinks things may
         | go:
         | https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-...
         | 
         | My thoughts on it are the combination of several things I think
         | are true, or are at least more likely to be true than their
         | opposites:
         | 
         | 1) As humanity gets more powerful, it's like putting a more
         | powerful engine into a car. You can get where you're going
         | faster, but it also can make the car harder to control and risk
         | a crash. So with that more powerful engine you need to also
         | exercise more restraint.
         | 
         | 2) We have a lot of trouble today controlling big systems.
         | Capitalism solves problems but also creates them, and it can be
         | hard to get the good without the bad. It's very common (at
         | least in some countries) that people are very creative at
         | making money by "solving problems" where the cure is worse than
         | the disease -- exploiting human weaknesses such as addiction.
         | Examples are junk food, social media, gacha games. Fossil fuels
         | are an interesting example, where they are beneficial on the
         | small scale but have a big negative externality.
         | 
         | 3) Regulatory capture is a thing, which makes it hard to get
         | out of a bad situation once people are making money on it.
         | 
         | 4) AI will make companies more powerful and faster. AGI will
         | make companies MUCH more powerful and faster. I think this will
         | happen more for companies than governments.
         | 
         | 5) Once people are making money from AI, it's very hard to stop
         | that. There will be huge pressure to make and use smarter and
         | smarter AI systems, as each company tries to get an edge.
         | 
         | 6) AGIs will amplify our power, to the point where we'll be
         | making more and more of an impact on earth, through mining,
         | production, new forms of drugs and pesticides and fertilizers,
         | etc.
         | 
         | 7) AGIs that make money are going to be more popular than ones
          | that put humanity's best interests first. That's even assuming
         | we can make AGIs which put humanity's best interests first,
         | which is a hard problem. It's actually probably safer to just
         | make AGIs that listen when we tell them what to do.
         | 
         | 8) Things will move faster and faster, with more control given
         | over to AGIs, and in the end, it will be very hard train to
         | stop. If we end up where most important decisions are made by
         | AGIs, it will be very bad for us, and in the long run, we may
         | go extinct (or we may just end up completely neutered and at
         | their whims).
         | 
         | Finally, and this is the most important thing -- I think it's
         | perfectly likely that we'll develop AGI. In terms of sci-fi-
         | sounding predictions, the ones that required massive amounts of
         | energy such as space travel have really not been borne out, but
         | the ones that predicted computational improvements have just
         | been coming true over and over again. Smart phones and video
         | calls are basically out of Star Trek, as are LLMs. We have
         | universal translators. Self-driving cars still have problems,
         | but they're gradually getting better and better, and are
         | already in commercial use.
         | 
         | Perhaps it's worth turning the question around. If we can
          | assume that we will develop AGI in the next 10 or 20 or even 30
         | years -- which is not guaranteed, but seems likely enough to be
         | worth considering -- how do you believe the future will go?
         | Your position seems to be that there's nothing to worry about--
         | what assumptions are you making? I'm happy to work through it
         | with you. I used to think AGI would be great, but I think I was
         | assuming a lot of things that aren't necessarily true, and
         | dropping those assumptions means I'm worried.
        
         | NateEag wrote:
          | If you assume without evidence that an intelligence can
          | recursively self-improve massively just by thinking, it follows
         | that severe existential risk from AI is plausible.
         | 
         | If a software system _did_ develop independent thought, then
         | found a way to become, say, ten times smarter than a human,
         | then yeah - whatever goals it set out to achieve, it probably
         | could. It can make a decent amount of money by taking freelance
         | software dev jobs and cranking things out faster than anyone
         | else can, and bootstrap from there. With money it can buy or
         | rent hardware for more electronic brain cells, and as long as
         | its intelligence algorithms parallelize well, it should be able
         | to keep scaling and becoming increasingly smarter than a human.
         | 
         | If it weren't hardcoded to care about humans, and to have
         | morals that align with our instinctive ones, it might easily
         | wind up with goals that could severely hurt or kill humans. We
         | might just not be that relevant to it, the same way the average
         | human just doesn't think about the ants they're smashing when
         | they back a car out of their driveway.
         | 
         | Since we have no existence proof of massively self-improving
         | intelligence, nor even a vague idea how such a thing might be
         | achieved, it's easy to dismiss this idea with "unfalsifiable;
         | unscientific; not worth taking seriously."
         | 
         | The flip side is that having no idea how something could be
         | true is a pretty poor reason to say "It can't be true - nothing
         | worth thinking about here." This was roughly the basis for
         | skepticism about everything from evolution to heavier-than-air
         | flight, AFAICT.
         | 
         | We know we don't have a complete theory of physics, and we know
         | we don't know quite how humans are conscious in the Hard
         | Problem of Consciousness sense.
         | 
         | With those two blank spaces, I'm very skeptical of anyone
         | saying "nothing to worry about here, machines can't possibly
         | have an intelligence explosion."
         | 
         | At the same time, with no existence proof of massively self-
         | improving intelligence, nor any complete theory of how it could
         | happen, I'm also skeptical of people insisting it's inevitable
         | (see Yudkowsky et al).
         | 
          | That said, if you place any value on caution, existential risks
         | seem like a good place to apply it.
        
           | Mizza wrote:
           | The idea of a superintelligence becoming a bond villain via
           | freelance software jobs (or, let's be honest, OnlyFans
           | scamming) is not something I consider an existential threat.
           | I can't find it anything other than laughable.
           | 
           | It's like you've looked at the Fermi paradox and decided we
           | need Congress to immediately invest in anti-alien defense
           | forces.
           | 
           | It's super-intelligent and it's a super-hacker and it's a
           | super-criminal and it's super-self-replicating and it super-
           | hates-humanity and it's super-uncritical and it's super-goal-
           | oriented and it's super-perfect-at-mimicking-humans and it's
           | super-compute-efficient and it's super-etcetera.
           | 
           | Meanwhile, I work with LLMs every day and can only get them
           | to print properly formatted JSON "some" of the time. Get
           | real.
        
             | NateEag wrote:
             | > The idea of a superintelligence becoming a bond villain
             | via freelance software jobs (or, let's be honest, OnlyFans
             | scamming) is not something I consider an existential
             | threat. I can't find it anything other than laughable.
             | 
             | Conservative evangelical Christians find evolution
             | laughable.
             | 
             | Finding something laughable is not a good reason to dismiss
             | it as impossible. Indeed, it's probably a good reason to
             | think "What am I so dangerously certain of that I find
             | contradictory ideas comical?"
             | 
             | > Meanwhile, I work with LLMs every day and can only get
             | them to print properly formatted JSON "some" of the time.
             | Get real.
             | 
             | I don't think the current generation of LLMs is anything
             | like AGI, nor an existential risk.
             | 
             | That doesn't mean it's impossible for some future software
             | system to present an existential risk.
        
         | Veedrac wrote:
         | The basic argument is trivial: it is plausible that future
         | systems achieve superhuman capability; capable systems
         | necessarily have instrumental goals; instrumental goals tend to
         | converge; human preferences are unlikely to be preserved when
         | other goals are heavily selected for unless intentionally
         | preserved; we don't know how to make AI systems encode any
         | complex preference robustly.
         | 
         | Robert Miles' videos are among the best presented arguments
          | about specific points in this list, primarily on the alignment
         | side rather than the capabilities side, that I have seen for
         | casual introduction.
         | 
         | Eg. this one on instrumental convergence:
         | https://youtube.com/watch?v=ZeecOKBus3Q
         | 
         | Eg. this introduction to the topic:
         | https://youtube.com/watch?v=pYXy-A4siMw
         | 
         | He also has the community-led AI Safety FAQ,
         | https://aisafety.info, which gives brief answers to common
         | questions.
         | 
         | If you have specific questions I might be able to point to a
         | more specific argument at a higher level of depth.
        
           | lxnn wrote:
           | Technically, I think it's not that instrumental goals tend to
           | converge, but rather that there are instrumental goals which
           | are common to many terminal goals, which are the so-called
           | "convergent instrumental goals".
           | 
           | Some of these goals are ones which we really would rather a
            | misaligned super-intelligent agent not have. For example:
           | 
           | - self-improvement;
           | 
           | - acquisition of resources;
           | 
           | - acquisition of power;
           | 
           | - avoiding being switched off;
           | 
           | - avoiding having one's terminal goals changed.
        
       | valine wrote:
       | I have yet to see a solution for "AI safety" that doesn't involve
       | ceding control of our most powerful models to a small handful of
       | corporations.
       | 
       | It's hard to take these safety concerns seriously when the
       | organizations blowing the whistle are simultaneously positioning
       | themselves to capture the majority of the value.
        
         | hiAndrewQuinn wrote:
         | I have one: Levy fines on actors judged to be attempting to
         | extend AI capabilities beyond the current state of the art, and
         | pay the fine to those private actors who prosecute them.
         | 
         | https://www.overcomingbias.com/p/privately-enforced-punished...
        
         | patrec wrote:
         | > It's hard to take these safety concerns seriously
         | 
         | I don't get this mindset at all. How can it not be obvious to
          | you that AI is a uniquely powerful and thus uniquely dangerous
         | technology?
         | 
         | It's like saying nuclear missiles can't possibly be dangerous
         | and nuclear arms reduction and non-proliferation treaties were
         | a scam, because the US, China and the Soviet Union had
         | positioned themselves to capture the majority of the strategic
         | value nukes bring.
        
           | randomdata wrote:
           | Nuclear missiles present an obvious danger to the human body.
           | AI is an application of math. It is not clear how that can be
           | used directly to harm a body.
           | 
           | The assumption seems to be that said math will be coupled
           | with something like a nuclear missile, but in that case the
           | nuclear missile is still the threat. Any use of AI is just an
           | implementation detail.
        
             | mitthrowaway2 wrote:
             | We didn't just dig nuclear missiles out of the ground; we
             | used our brains and applied math to come up with them.
        
               | randomdata wrote:
               | Exactly. While there is an argument to be made that
               | people are the real danger, that is beyond the discussion
               | taking place. It has already been accepted, for the sake
               | of discussion, that the nuclear missile is the danger,
               | not the math which developed the missile, nor the people
               | who thought it was a good idea to use a missile. Applying
               | AI to the missile still means the missile is the danger.
               | Any use of AI in the scope of that missile is just an
               | implementation detail.
        
               | mitthrowaway2 wrote:
               | You said that "AI is an application of math. It is not
               | clear how that can be used directly to harm a body." I
               | was trying to illustrate the case that if humans can
               | develop harmful things, like nuclear weapons, then an AI
               | that is as smart as a human can presumably develop
               | similarly harmful things.
               | 
               | If the point you are trying to make is that an AI which
               | secretly creates and deploys nuclear, biological, or
               | chemical weapons in order to destroy all of humanity, is
               | not an "AI risk" because it's the _weapons_ that do the
                | actual harm, then... I really don't know what to say to
               | that. Sure, I guess? Would you also say that drunk
               | drivers are not dangerous, because the danger is the cars
               | that they drive colliding into people's bodies, and the
               | drunk driver is just an implementation detail?
        
               | randomdata wrote:
               | _> I was trying to illustrate the case that if humans can
               | develop harmful things, like nuclear weapons, then an AI
               | that is as smart as a human can presumably develop
               | similarly harmful things._
               | 
               | For the sake of discussion, it was established even
               | before I arrived that those developed things are the
               | danger, not that which creates/uses the things which are
               | dangerous. What is to be gained by ignoring all of that
               | context?
               | 
                |  _> I really don't know what to say to that. Sure, I
               | guess?_
               | 
               | Nothing, perhaps? It is not exactly something that is
               | worthy of much discussion. If you are desperate for a
               | fake internet battle, perhaps you can fight with earlier
               | commenters about whether it is nuclear missiles that are
               | dangerous or if it is the people who have created/have
               | nuclear missiles are dangerous? But I have no interest. I
               | cannot think of anything more boring.
        
               | mitthrowaway2 wrote:
               | I'm specifically worried that an AGI will conceal some
               | instrumental goal of wiping out humans, while posing as
               | helpful. It will helpfully earn a lot of money for a lot
               | of people, by performing services and directing
               | investments, and with its track record, will gain the
               | ability to direct investments for itself. It then plows a
               | billion dollars into constructing a profitable chemicals
               | factory somewhere where rules are lax, and nobody looks
               | too closely into what else that factory produces, since
               | the AI engineers have signed off on it. And then once
               | it's amassed a critical stockpile of specific dangerous
               | chemicals, it releases them into the atmosphere and wipes
               | out humanity / agriculture / etc.
               | 
               | Perhaps you would point out that in the above scenario
               | the chemicals (or substitute viruses, or whatever) are
               | the part that causes harm, and the AGI is just an
               | implementation detail. I disagree, because if humanity
               | ends up playing a grand game of chess against an AGI, the
               | specific way in which it checkmates you is not the
               | important thing. The important thing is that it's a game
               | we'll inevitably lose. Worrying about the danger of rooks
               | and bishops is to lose focus on the real reason we lose
               | the game: facing an opponent of overpowering skill, when
               | our defeat is in its interests.
        
               | randomdata wrote:
               | _> I disagree_
               | 
               | Cool, I guess. While I have my opinions too, I'm not
               | about to share them as that would be bad faith
               | participation. Furthermore, it adds nothing to the
               | discussion taking place. What is to be gained by going
               | off on a random tangent that is of interest to nobody?
               | Nothing, that's what.
               | 
               | To bring us back on topic to try and salvage things, it
               | remains that it is established in this thread that the
               | objects of destruction are the danger. AI cannot be the
               | object of destruction, although it may be part of an
               | implementation. Undoubtedly, nuclear missiles already
               | utilize AI and when one talks about the dangers of
               | nuclear missiles they are already including AI as part of
               | that.
        
               | mitthrowaway2 wrote:
               | Yes, but usually when people express concerns about the
               | danger of nuclear missiles, they are only thinking of
               | those nuclear missiles that are at the direction of
               | nation-states or perhaps very resourceful terrorists. And
               | their solutions will usually be directed in that
               | direction, like arms control treaties. They aren't really
               | including "and maybe a rogue AI will secretly build
               | nuclear weapons on the moon and then launch them at us"
               | in the conversation about the danger of nukes and the
               | importance of international treaties, even though the
               | nukes are doing the actual damage in that scenario. Most
               | people would categorize that as sounding more like an AI-
               | risk scenario.
        
             | arisAlexis wrote:
              | Please read Life 3.0 or Superintelligence. There are people
             | that spent decades thinking about how this would happen.
             | You spent a little bit of time and conclude it can't.
        
             | patrec wrote:
             | I'm glad to learn that Hitler and Stalin were both
             | "implementation details" and not in any way threatening to
             | anyone.
        
             | pixl97 wrote:
              | Germany, for example, would disagree with you. They believe
             | violent speech is an act of violence in itself.
             | 
             | >AI is an application of math.
             | 
             | It turns out that people hook computers to 'things' that
             | exist in the physical world. You know like robot bodies, or
             | 3D printers. And as mentioned above, even virtual things
             | like social media can cause enough problems. People hook AI
             | to tools.
             | 
              | And this is just the maybe-not-quite-general AI we have
              | now. If and when we create a general AI with self-
              | modifying feedback loops, then all this "AI is just a
              | tool" asshattery goes out the window.
             | 
             | Remember at the end of the day, you're just an application
             | of chemistry that is really weak without your ability to
             | use tools and to communicate.
        
               | randomdata wrote:
               | _> It turns out that people hook computers to  'things'
               | that exist in the physical world._
               | 
               | But those physical things would be the danger, at least
               | if you consider the nuclear missile to be the danger. It
               | seems you are trying to go down the "guns don't kill
               | people, people kill people" line of thinking. Which is
               | fine, but outside of the discussion taking place.
        
               | pixl97 wrote:
               | >but outside of the discussion taking place.
               | 
               | Drawing an artificial line between you and the danger is
               | a great way to find yourself in a Maginot Line with AI
               | driving right around it.
        
               | randomdata wrote:
                | False premise. One can start new threads about
                | complementary subjects and they can be thought about in
                | parallel. You don't have to try and shove all of the
                | world's concepts into just one thought train to be able to
                | reason about them. That's how you make spaghetti.
        
               | rahmeero wrote:
               | There are many relevant things that already exist in the
               | physical world and are not currently considered dangers:
               | ecommerce, digital payments, doordash-style delivery,
               | cross-border remittances, remote gig work, social media
               | fanning extreme political views, event organizing.
               | 
               | However, these are constituent elements that could be
               | aggregated and weaponized by a maleficent AI.
        
               | randomdata wrote:
               | Those tangible elements would conceivably become the
               | danger, not the AI using those elements. Again, the "guns
               | don't kill people, people kill people" take is all well
               | and good, but well outside of this discussion.
        
               | jononor wrote:
                | Maleficent humans are constantly trying to use these
                | elements for their own gain, often with little to no
                | regard for other humans (especially out-groups). This
                | happens individually, in small groups, in large
                | organizations, and even across multiple organizations
                | colluding: criminals, terrorists, and groups at war,
                | along with legal organizations such as exploitative
                | companies and regressive interest groups, etc. And we
                | have tools and mechanisms in place to keep the level of
                | abuse at bay. Why and how are these mechanisms unsuitable
                | for protecting against AI?
        
               | pixl97 wrote:
               | >Why and how are these mechanisms unsuitable for
               | protecting against AI?
               | 
               | The rule of law prevented WWI and WWII, right? Oh, no it
                | did not; tens to hundreds of millions died due to human
               | stupidity and violence depending on what exactly you
               | count in that age.
               | 
               | > Both criminal, terrorist, groups at war
               | 
                | Human organizations, especially criminal organizations,
                | have deep trust issues between agents in the
               | organization. You never know if anyone else in the system
               | is a defector. This reduces the openness and quantity of
               | communication between agents. In addition you have agents
               | that want to personally gain rather than benefit the
               | organization itself. This is why Apple is a trillion
               | dollar company following the law... mostly. Smart people
               | can work together and 'mostly' trust the other person
               | isn't going to screw them over.
               | 
               | Now imagine a superintelligent AI with a mental
               | processing bandwidth of hundreds of the best employees at
               | a company. Assuming it knows and trusts itself, then the
               | idea of illegal activities being an internal risk
               | disappears. You have something that operates more on the
               | level of a hivemind toward a goal (what the limitations
               | of hivemind versus selfish agents are is another very
                | long discussion). What we ask here is: if all the world's
               | best hackers got together, worked together unselfishly,
               | and instigated an attack against every critical point
               | they could find on the internet/real world systems at
               | once, how much damage could they cause?
               | 
                | Oh, let's say you find the server systems the
                | superintelligence is on, but the controller shuts it off,
                | and all the data has some kind of homomorphic encryption,
                | so that's useless to you. It's dead, right? Nah, they just
                | load up the backup copy they have a few months later and
                | it's party time all over again. Humans tend to remain dead
                | after dying; for AI, that is yet to be seen.
        
               | lowbloodsugar wrote:
               | >It seems you are trying to go down the "guns don't kill
               | people, people kill people" line of thinking.
               | 
               | "Guns don't kill people, AIs kill people" is where we are
               | going, I think. This is the discussion: "Mitigating the
               | risk of extinction from AI should be a global priority
               | alongside other societal-scale risks such as pandemics
               | and nuclear war."
               | 
               | The discussion is not about a mathematical representation
               | of AI. The discussion is about the actual implementation
               | of AI on physical computing infrastructure which is
               | accessible by at least one human on planet earth.
               | 
               | The credible danger, argued in various places, including
                | Superintelligence by Nick Bostrom, is that the "system
               | under review" here is "every physical system on planet
               | earth" because an AI could gain access to whatever
               | systems exist on said planet, including human minds (see
               | "Nazis").
               | 
                | Just as we might discuss the problems of letting a
                | madman get control of the US, Russian, UK, French or
                | Chinese nuclear arsenals, we might discuss the problem of
                | _building an AI_ if the act of building the AI could
                | result in it taking over the nuclear arsenals of those
                | countries and using them against humans. That takeover
                | might involve convincing a human that it should do it.
        
           | api wrote:
           | Most of the credible threats I see from AI that don't rely on
           | a lot of sci-fi extrapolation involve small groups of humans
           | in control of massively powerful AI using it as a force
           | multiplier to control or attack other groups of humans.
           | 
           | Sam Altman's proposal is to create precisely that situation
           | with himself and a few other large oligarchs being the ones
           | in control of the leading edge of AI. If we really do face
           | runaway intelligence growth and god-like AIs then this is a
           | profound amount of power to place in the hands of just a few
           | people. Even worse it opens the possibility that such
           | developments could happen partly in secret, so the public
           | might not even know how powerful the secret AIs under command
           | of the oligarchs have become.
           | 
           | The analogy with nuclear weapons is profoundly broken in lots
           | of ways. Reasoning from a sloppy analogy is a great way to
           | end up somewhere stupid. AI is a unique technology with a
           | unique set of risks and benefits and a unique profile.
        
           | [deleted]
        
           | nicce wrote:
            | If you look at world politics, basically if you hold enough
            | nuclear weapons, you can do whatever you want to those who
            | don't have them.
            | 
            | And based on the "dangers", new countries are prohibited from
            | creating them, while the countries that were quick enough to
            | create them hold all the power.
            | 
            | Their value is immeasurable, especially for Russia. Without
            | them, it could not have attacked Ukraine.
            | 
            | > non-proliferation treaties were a scam
            | 
            | And yes, they mostly are right now. Russia has backed out of
            | them. There are no real consequences for backing out, and you
            | can do it at any time.
            | 
            | The parent commenter is most likely saying that now that the
            | selected parties hold the power of AI, they want to prevent
            | others from gaining similar power, while keeping all the value
            | for themselves.
        
             | staunton wrote:
             | > There are no real consequences if you are backing off,
             | and you can do it in any time.
             | 
              | That's not quite true. Sure, no one is going to start a war
             | about such a withdrawal. However, nuclear arsenals are
             | expensive to maintain and it's even more expensive to be in
             | an arms race. Also, nobody wants to risk nuclear war if
             | they can avoid it. Civilian populations will support
             | disarmament in times where they don't feel directly
              | threatened. That's why a lot of leaders of all persuasions
             | have advocated for and taken part in efforts to reduce
             | their arsenals. Same goes for relations between countries
             | generally and the huge economic benefits that come with
             | trade and cooperation. Withdrawing from nuclear treaties
             | endangers all of these benefits and increases risk. A
             | country would only choose this route out of desperation or
             | for likely immediate gain.
        
           | falsaberN1 wrote:
           | And I don't get the opposed mindset, that AI is suddenly
           | going to "become a real boy, and murder us all".
           | 
           | Isn't it a funny coincidence how the popular opinion of AIs
           | aligns perfectly with blockbusters and popular media ONLY?
           | People are specifically wanting to prevent Skynet.
           | 
           | The kicker (and irony to a degree) is that I really want
           | sapient AI to exist. People being so influenced by fiction is
           | something I see as a menace to that happening in my lifetime.
           | I live in a world where the majority is apparently Don
           | Quixote.
           | 
           | - Point one: If the sentient AI can launch nukes, so can your
           | neighbor.
           | 
           | - Point zwei: Redistributing itself online to have unlimited
           | compute resources is a fun scenario but if networks were that
           | good then Stadia wouldn't have been a huge failure.
           | 
           | - Point trois: A distributed-to-all-computers AI must have
           | figured out universal executables. Once we deal with the
           | nuclear winter, we can plagiarize it for ourselves. No more
           | appimage/snap/flatpak discussions! Works for any hardware! No
           | more dependency issues! Works on CentOS and Windows from 1.0
           | to 11! (it's also on AUR, of course.)
           | 
            | - Point cuatro: The rogue AI is clearly born as a master
            | hacker capable of finding your open ports, figuring out
            | existing exploits or creating 0-days to get in, hoping
            | there are enough resources to get the payload injected, and
            | then praying no competent admin is looking at the thing.
           | 
           | - Point go: All of this rides on the assumption that the
           | "cold, calculating" AI has the emotional maturity of a
           | teenager. Wait, but that's not what "cold, calculating"
            | means, that's "hotheaded and emotional". Which is it?
           | 
            | - Point six: Skynet lost; that's the point of the first
            | movie's plot. If everyone is going to base their beliefs
            | on a movie, at least get all the details. Everything
           | Skynet did after the first attack was full of boneheaded
           | decisions that only made the situation worse for it, to the
           | point the writers cannot figure ways to bring Skynet back
           | anymore because it doomed itself in the very first movie. You
           | should be worrying about Legion now, I think. It shuts down
           | our electronics instead of nuking.
           | 
           | Considering it won't have the advantage of triggering a
           | nuclear attack because that's not how nukes work, the evil
           | sentient AI is so doomed to fail it's ridiculous to think
           | otherwise.
           | 
           | But, companies know this is how the public works. They'll
           | milk it for all it's worth so only a few companies can run or
           | develop AIs, maybe making it illegal otherwise, or liable for
           | DMCAs. Smart business move, but it affects my ability to
            | research and use them. I cannot cure people's inability to
            | separate reality from fiction though, and that's unfortunate.
        
             | pixl97 wrote:
              | A counterpoint here is that you're ignoring all the boring
              | "we all die" scenarios that are completely possible but too
              | boring to make a movie about.
             | 
             | The AI hooked to a gene sequencer/printer test lab is
             | something that is nearly if not completely possible now.
             | It's something that can be relatively small in size
             | compared with the facilities needed to make most weapons of
              | mass destruction. It's something that is highly iterative
              | and parallelizable. And it's something powerful enough that,
              | if targeted at the right things (kill all rice, kill all
              | X people), it easily spills over into global conflict.
        
               | jumelles wrote:
               | Okay, so AI has access to a gene printer. Then what?
        
               | pixl97 wrote:
                | No "then what" needed.
               | 
               | AI: Hello human, I've made a completely biologically safe
               | test sample, you totally only need BSL-1 here.
               | 
               | Human: Cool.
               | 
               | AI: Sike bitches, you totally needed to handle that at
               | BSL-4 protocol.
               | 
               | Human: _cough_
        
             | boringuser2 wrote:
             | Very Dunning-Kruger post right here.
        
               | ever1337 wrote:
               | Please don't post shallow dismissals, especially of other
               | people's work. A good critical comment teaches us
               | something.
        
               | boringuser2 wrote:
               | You're a priori writing off my comment as fruitless
               | because of your emotions and not because you actually
               | have given it deep thought and carefully reached the
               | conclusion that social feedback is somehow bad.
               | 
               | Also, the notion that "people's work" is inherently
               | worthy of respect is just nonsensical. I do shoddy work
               | all the time. Hell, you just casually dismissed my
               | internet comment work as shallow and told me not to do
               | it. Please don't post a shallow dismissal of my work.
               | 
               | Don't you think that this is all a bit anti-intellectual?
        
           | brookst wrote:
           | > How can it not be obvious
           | 
           | You have succinctly and completely summed up the AI risk
           | argument more eloquently than anyone I've seen before. "How
           | can it not be obvious?" Everything else is just intellectual
           | fig leaves for the core argument that intuitively, without
           | evidence, this proposition is obvious.
           | 
           | The problem is, lots of "obvious" things have turned out to
           | be very wrong. Sometimes relatively harmlessly, like the
           | obviousness of the sun revolving around the earth, and
           | sometimes catastrophically, like the obviousness of one race
           | being inherently inferior.
           | 
           | We should be very suspicious of policy that is based on
           | propositions so obvious that it's borderline offensive to
           | question them.
        
             | patrec wrote:
             | > We should be very suspicious of policy that is based on
             | propositions so obvious that it's borderline offensive to
             | question them.
             | 
             | Mostly if the "obviousness" just masks a social taboo,
             | which I don't see being the case here. Do you?
             | 
             | > The problem is, lots of "obvious" things have turned out
             | to be very wrong.
             | 
              | A much bigger problem is that lots more "counter-intuitive"
              | things that people like to believe, because such beliefs
              | elevate them over the unwashed masses, have turned out and
              | continue to turn out to be very wrong, and that this does
              | not prevent them from forming the basis for important
              | policy decisions.
             | 
             | I'm all for questioning even what appears intuitively
             | obvious (especially if much rides on getting it right, as
             | presumably it does here). But frankly, of the many bizarre
             | reasons I have heard why we should not worry about AI the
             | claim that it seems far too obvious that we should must be
             | the single most perverse one yet.
             | 
             | > Everything else is just intellectual fig leaves for the
             | core argument that intuitively, without evidence, this
             | proposition is obvious.
             | 
             | Maybe your appraisal of what counts as evidence is
             | defective?
             | 
             | For example, there's been a pattern of people confidently
             | predicting AIs won't be able to perform various particular
             | feats of the human mind (either fundamentally or in the
             | next few decades) only to be proven wrong over increasingly
             | shorter time-spans. And with AIs often not just reaching
             | but far surpassing human ability. I'm happy to provide
              | examples. Can you explain to me why you think this does
             | not count, in any way, as evidence that AIs have the
             | potential to reach a level of capability that renders them
             | quite dangerous?
        
               | revelio wrote:
               | _> Mostly if the  "obviousness" just masks a social
               | taboo, which I don't see being the case here. Do you?_
               | 
               | The social taboo here is saying that a position taken by
               | lots of highly educated people is nonsense because
               | they're all locked in a dumb purity spiral that leads to
                | motivated reasoning. This is actually one of society's
               | biggest taboos! Look at what happens to people who make
               | that argument publicly under their own name in other
               | contexts; they tend to get fired and cancelled really
               | fast.
               | 
               |  _> there 's been a pattern of people confidently
               | predicting AIs won't be able to perform various
               | particular feats of the human mind (either fundamentally
               | or in the next few decades) only to be proven wrong over
               | increasingly shorter time-spans_
               | 
               | That sword cuts both ways! There have been lots of
               | predictions in the last decade that AI will contribute
                | novel and hitherto unknown solutions to things like
               | climate change or curing cancer. Try getting GPT-4 to
                | spit out a novel research-quality solution to _anything_,
                | even a simple product design problem, and you'll find
               | it can't.
               | 
               |  _> the claim that it seems far too obvious that we
               | should_
               | 
               | They're not arguing that. They're saying that AI risk
               | proponents don't actually have good arguments, which is
               | why they so regularly fall back on "it's so obvious we
               | shouldn't need to explain why it's important". If your
               | argument consists primarily of "everyone knows that" then
               | this is a good indication you might be wrong.
        
             | computerphage wrote:
             | > borderline offensive to question them
             | 
             | I would be happy to politely discuss any proposition
             | regarding AI Risk. I don't think any claim should go
             | unquestioned.
             | 
             | I can also point you to much longer-form discussions. For
             | example, this post, which has 670 comments, discussing
             | various aspects of the argument:
             | https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-
             | ruin-a...
        
           | valine wrote:
           | It's not clear at all that we have an avenue to super
           | intelligence. I think the most likely outcome is that we hit
           | a local maximum with our current architectures and end up
           | with helpful assistants similar in capability to George
           | Lucas's C3PO.
           | 
           | The scary doomsday scenarios aren't possible without an AI
           | that's capable of both strategic thinking and long term
           | planning. Those two things also happen to be the biggest
           | limitations of our most powerful language models. We simply
           | don't know how to build a system like that.
        
             | pixl97 wrote:
             | >It's not clear at all that we have an avenue to super
             | intelligence.
             | 
             | All problems in reality are probability problems.
             | 
             | If we don't have a path to superintelligence, then the
             | worst problems just don't manifest themselves.
             | 
             | If we do have a path to super intelligence then the
             | doomsday scenarios are nearly a certainty.
             | 
             | It's not really any different than saying "A supervolcano
             | is unlikely to go off tomorrow, but if a supervolcano does
             | go off tomorrow it is a doomsday scenario".
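              | 
              | (A rough expected-risk sketch of that claim, with made-up
              | symbols: E[harm] ~ P(a path to superintelligence exists) x
              | P(doom given such a path is followed) x severity. The
              | argument above only says that the second factor is close
              | to 1 and the third is enormous; the first factor is the
              | open question.)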
             | 
             | >We simply don't know how to build a system like that.
             | 
             | You are already a superintelligence when compared to all
             | other intelligences on earth. Evolution didn't need to know
             | how to build a system like that, and yet it still reached
              | this point. And there is not really any reason to believe
              | humanity is the pinnacle of intelligence; we are a local
              | maximum shaped by our own power/communication
              | limitations. An intelligence
             | coupled with evolutionary systems design is much more apt
             | to create 'super-' anything than the random walk alone.
        
               | RandomLensman wrote:
                | Why are doomsday scenarios a certainty then? What's the
                | model to get there that isn't just some sort of scary
                | story that waves away, or waves into existence, a lot of
                | things we don't know can exist?
        
               | pixl97 wrote:
               | >What's the model to get to that
               | 
               | Let's say I was a small furry mammal that tasted really
               | good, but also for some reason understood the world as it
               | is now.
               | 
               | I would tell you that super intelligence had already
               | happened. That super intelligence was humans. That humans
               | happened to reach super intelligence by 1) having the
               | proper hardware. 2) filtering noise from important
               | information. 3) then sharing that information with others
               | to amplify the power of intelligence 4) having a
               | toolkit/tools to turn that information into useful
               | things. 5) And with all that power humans can kill me off
                | en masse, or farm me for my tasty meat at their leisure
               | with little to nothing that I can do about it.
               | 
               | There doesn't appear to be any more magic than that. All
               | these things already exist in biological systems that
               | elevated humans far above their warm blooded peers. When
               | we look at digital systems we see they are designed to
               | communicate. You don't have an ethernet jack as a person.
               | You can't speak the protocol to directly drive a 3 axis
               | mill to produce something. Writing computer code is a
               | pain in the ass to most of us. We are developing a
               | universal communication intelligence, that at least in
               | theory can drive tools at a much higher efficiency than
               | humans will ever be able to.
               | 
               | Coming back to point 5. Cats/dogs are the real smart ones
               | here when dealing with superintelligences. Get
               | domesticated by the intelligence so they want to keep you
               | around as a pet.
        
               | RandomLensman wrote:
               | Do you think we could wipe out all furry mammals, for
                | example? Could another intelligence be as far above us
                | as, in your story, we are above furry mammals? We don't
                | even know if the mythical superintelligence could
                | manifest the way you assume. It assumes that intelligence
                | can basically overcome any obstacle - I'd say what we
                | actually see suggests that is not the case currently, and
                | claims that this is just a function of sufficient
                | intelligence are unproven (setting aside
               | physical limits to certain actions and results).
        
               | pixl97 wrote:
               | >Do you think we could wipe out all furry mammals, for
               | example?
               | 
                | Let's go with those over a particular size, say larger
                | than the biggest rat. In that case, yes, very easily. Once you
               | get to rats it becomes far more difficult and you're
               | pretty much just destroying the biosphere at that point.
               | 
               | > It assumes that intelligence basically can overcome any
               | obstacles
               | 
               | In the case of human extinction, no, a super intelligence
                | would not have to overcome every obstacle; it would just
               | have to overcome obstacles better than we did.
        
               | RandomLensman wrote:
               | So that is a "no" on all furry mammals.
               | 
               | Also, the superintelligence doesn't just have to overcome
                | obstacles better than we did; it needs to overcome the
               | right obstacles to succeed with human extinction.
        
             | patrec wrote:
             | > It's not clear at all that we have an avenue to super
             | intelligence
             | 
             | AI already beats the average human on pretty much any task
              | people have put time into, often by a very wide margin, and
             | we are still seeing exponential progress that even the
             | experts can't really explain, but yes, it is possible this
             | is a local maximum and the curve will become much flatter
             | again.
             | 
             | But the absence of any visible fundamental limit on further
             | progress (or can you name one?) coupled with the fact that
             | we have yet barely begun to feel the consequences of the
             | tech we already have (assuming zero breakthroughs from now
              | on) makes me extremely wary of concluding that there is no
             | significant danger and we have nothing to worry about.
             | 
             | Let's set aside the if and when of a super intelligence
             | explosion for now. We are ourselves an existence proof of
              | some lower bound of intelligence, which, if amplified by
              | what computers _can already do_ (like performing many of
              | the things we used to take intellectual pride in much
              | better, and many orders of magnitude faster, with almost
              | infinitely better replication and coordination ability),
              | already seems plenty
             | dangerous and scary to me.
             | 
             | > The scary doomsday scenarios aren't possible without an
             | AI that's capable of both strategic thinking and long term
             | planning. Those two things also happen to be the biggest
             | limitations of our most powerful language models. We simply
             | don't know how to build a system like that.
             | 
             | Why do you think AI models will be unable to plan or
              | strategize? Last I checked, language models weren't trained
             | or developed to beat humans in strategic decision making,
             | but humans already aren't doing too hot right now in games
             | of adversarial strategy against AIs developed for that
             | domain.
        
               | mrtranscendence wrote:
               | > we are still seeing exponential progress
               | 
               | I dispute this. What appears to be exponential progress
               | is IMO just a step function that made some jumps as the
               | transformer architecture was employed on larger problems.
               | I am unaware of research that moves beyond this in a way
               | that would plausibly lead to super-intelligence. At the
               | very least I foresee issues with ever-increasing
               | computational requirements that outpace improvements in
               | hardware.
               | 
               | We'll see similar jumps when other domains begin
               | employing specialized AI models, but it's not clear to me
               | that these improvements will continue increasing
               | exponentially.
        
               | tome wrote:
               | > AI already beats the average human on pretty much any
               | task people have put time into
               | 
               | No it doesn't!
        
               | AnimalMuppet wrote:
               | Right, and _if_ someone can join the two, that could be
               | something genuinely formidable. But does anyone have a
               | credible path to joining the different flavors to produce
               | a unity that actually works?
        
               | patrec wrote:
               | Are you willing to make existential bets that no one does
               | and no one will?
               | 
               | Personally, I wouldn't even bet substantial money against
               | it.
        
               | AnimalMuppet wrote:
               | Even if someone will, I don't think it's an "existential
               | risk". So, yes, I'm willing to make the bet. I'm also
               | willing to make the bet that Santa never delivers nuclear
               | warheads instead of presents. It's why I don't cap my
               | chimney every Christmas Eve.
               | 
               | Between Covid, bank failures, climate change, and AI,
               | it's like everyone is _looking_ for something to be in a
               | panic about.
        
             | TheOtherHobbes wrote:
             | We don't need an avenue to super-intelligence. We just need
             | a system that is better at manipulating human beliefs and
             | behaviour than our existing media, PR, and ad industries.
             | 
             | The problem is not science fiction god-mode digital quetta-
             | smart hypercomputing.
             | 
             | This is about political, social, and economic influence,
             | and who controls it.
        
               | babyshake wrote:
               | Indeed, an epistemological crisis seems to be the most
               | realistic problem in the next few years.
        
               | AnimalMuppet wrote:
               | That risk isn't about AI-as-AI. That risk is about AI-as-
               | better-persuasive-nonsense-generator. But the same risk
               | is there for _any_ better-persuasive-nonsense-generator,
                | completely independent from whether it's an AI.
               | 
               | It's the most persuasive actual risk I've seen so far,
               | but it's not an AI-specific risk.
        
               | patrec wrote:
               | Effective dystopian mass-manipulation and monitoring are
               | a real concern and we're closer to it[1] than to super
               | intelligence. But super-intelligence going wrong is
               | almost incomparably worse. So we should very much worry
               | about it as well.
               | 
               | [1] I'm not even sure any further big breakthroughs in AI
               | are needed, i.e. just effective utilization of existing
               | architectures probably already suffices.
        
             | mitthrowaway2 wrote:
             | > We simply don't know how to build a system like that.
             | 
             | Yes, but ten years ago, we also simply didn't know how to
             | build systems like the ones we have today! We thought it
             | would take centuries for computers to beat humans at Go[1]
             | and at protein folding[2]. We didn't know how to build
             | software with emotional intelligence[3] and thought it
             | would never make jokes[4]. There's been tremendous
             | progress, because teams of talented researchers are working
             | hard to unlock more aspects of what the human brain can do.
             | Now billions of dollars are funding bright people to look
             | for ways to build other kinds of systems.
             | 
             | "We don't know how to do it" is the security-through-
             | obscurity argument. It means we're safe only as long as
             | nobody figures this out. If you have a security mindset,
             | it's not enough to hope that nobody finds the
             | vulnerability. You need to show why they certainly will not
             | succeed even with a determined search.
             | 
             | [1] https://www.wired.com/2014/05/the-world-of-computer-go/
             | 
             | [2] https://kotaku.com/humans-triumph-over-machines-in-
             | protein-f...
             | 
             | [3] https://www.jstor.org/stable/24354221
             | 
             | [4] https://davidol.medium.com/will-ai-ever-be-able-to-
             | make-a-jo...
        
             | HDThoreaun wrote:
              | A super intelligent AI is not necessary for AI to be a
              | threat. Dumb AIs that are given access to the internet plus
             | a credit card and told to maximize profit could easily
             | cause massive damage. We are not far from such an AI being
             | accessible to the masses. You can try to frame this like
             | the gun debate "it's not the AI it's the people using it"
             | but the AI would be acting autonomously here. I have no
             | faith that people won't do extremely risky things if given
             | the opportunity.
        
               | tome wrote:
               | > Dumb AIs that are given access to the internet plus a
               | credit card and told to maximize profit could easily
               | cause massive damage
               | 
               | North Korea and Iran are (essentially) already trying to
               | do that, so I think that particular risk is well
               | understood.
        
           | patch_cable wrote:
           | > How can it not be obvious to you
           | 
           | It isn't obvious to me. And I've yet to read something that
            | spells out the obvious reasoning.
           | 
           | I feel like everything I've read just spells out some
           | contrived scenario, and then when folks push back explaining
           | all the reasons that particular scenario wouldn't come to
           | pass, the counter argument is just "but that's just one
           | example!" without offering anything more convincing.
           | 
           | Do you have any better resources that you could share?
        
             | patrec wrote:
             | OK, which of the following propositions do you disagree
             | with?
             | 
             | 1. AIs have made rapid progress in approaching and often
             | surpassing human abilities in many areas.
             | 
             | 2. The fact that AIs have some inherent scalability, speed,
             | cost, reliability and compliance advantages over humans
             | means that many undesirable things that could previously
             | not be done at all or at least not done at scale are
             | becoming both feasible and cost-effective. Examples would
             | include 24/7 surveillance with social desirability scoring
             | based on a precise ideological and psychological profile
             | derived from a comprehensive record of interactions, fine-
             | tuned mass manipulation and large scale plausible
             | falsification of the historical record. Given the general
             | rise of authoritarianism, this is pretty worrying.
             | 
             | 3. On the other hand the rapid progress and enormous
             | investment we've been seeing makes it very plausible that
             | before too long we will, in fact, see AIs that outperform
             | humans on most tasks.
             | 
             | 4. AIs that are much smarter than any human pose even
             | graver dangers.
             | 
             | 5. Even if there is a general agreement that AIs pose grave
             | or even existential risks, states, organizations and
              | individuals are all incentivized to still seek to
             | improve their own AI capabilities, as doing so provides an
             | enormous competitive advantage.
             | 
             | 6. There is a danger of a rapid self-improvement feedback
             | loop. Humans can reproduce, learn new and significantly
             | improve existing skills, as well as pass skills on to
             | others via teaching. But there are fundamental limits on
             | speed and scale for all of these, whereas it's not obvious
             | at all how an AI that has reached super-human level
             | intelligence would be fundamentally prevented from rapidly
              | improving itself further, or from producing millions of
             | "offspring" that can collaborate and skill-exchange
             | extremely efficiently. Furthermore, since AIs can operate
             | at completely different time scales than humans, this all
             | could happen extremely rapidly, and such a system might
             | very quickly become much more powerful than humanity and
             | the rest of AIs combined.
             | 
              | I think you only have to subscribe to a small subset of
              | these (say 1. & 2.) for "AI is a uniquely powerful and thus
              | uniquely dangerous technology" to obviously follow.
             | 
             | For the stronger claim of existential risk, have you read
             | the lesswrong link posted elsewhere in this discussion?
             | 
             | https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-
             | ruin-a... ?
        
               | LouisSayers wrote:
               | Computers already outperform humans at numerous tasks.
               | 
               | I mean... even orangutans can outperform humans at
               | numerous tasks.
               | 
               | Computers have no intrinsic motivations, and they have
               | real resource constraints.
               | 
                | I find these doomsday scenarios to be devoid of
               | reality.
               | 
               | All that AI will give us is a productive edge. Humans
                | will still do what humans have always done; AI is simply
               | another tool at our disposal.
        
               | tome wrote:
               | > 3. ... before too long we will ... see AIs that
               | outperform humans on most tasks.
               | 
               | This is ambiguous. Do you mean
               | 
               | A. that there is some subset T1 of the set of all tasks T
               | such that T1 is "most of" T, and that for each P in T1
               | there will be an AI that outperforms humans on P, or
               | 
               | B. There will be _a single_ AI that outperforms humans on
               | all tasks in a set T1, where T1 is a subset of all tasks
               | T such that T1 is  "most of" T?
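                | 
                | (Roughly, in quantifier form, just restating the two
                | readings above:
                | 
                |   A:  for all P in T1, there exists an AI_P that
                |       outperforms humans on P;
                |   B:  there exists a single AI such that, for all P in
                |       T1, it outperforms humans on P.)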
               | 
               | I think A is unlikely but plausible but I don't see cause
               | for worry. I don't see any reason why B should come to
               | pass.
               | 
               | 4. AIs that are much smarter than any human pose even
               | graver dangers.
               | 
               | Sure. Why should we believe they will ever exist though?
        
               | patch_cable wrote:
               | I think between point 3 and 4 there is a leap to talking
               | about "danger". Perhaps the disagreement is about what
               | one calls "danger". I had perhaps mistakenly assumed we
               | were talking about an extinction risk. I'll grant you
               | concerns about scaling up things like surveillance but
               | there is a leap to being an existential risk that I'm
               | still not following.
        
               | cwkoss wrote:
               | AI will not have the instinctual drives for domination or
               | hunger that humans do.
               | 
               | It seems likely that the majority of AI projects will be
               | reasonably well aligned by default, so I think 1000 AIs
               | monitoring what the others are doing is a lot safer than
               | a single global consortium megaproject that humans can
               | likely only inadequately control.
               | 
               | The only reasonable defense against rogue AI is prosocial
               | AI.
        
               | patch_cable wrote:
                | Reading the lesswrong link, the part I get hung up on is
                | that, in these doomsday scenarios, humans appear to lose
                | all agency. Like, no one is wondering why this
               | computer is placing a bunch of orders to DNA factories?
               | 
               | Maybe I'm overly optimistic about the resilience of
               | humans but these scenarios still don't sound plausible to
               | me in the real world.
        
               | LouisSayers wrote:
               | AI arguments are basically:
               | 
                | Step 1. AI
                | Step 2. #stuff
                | Step 3. Bang
               | 
               | Maybe this is just what happens when you spend all your
               | time on the internet...
        
             | hackinthebochs wrote:
             | The history of humanity is replete with examples of the
             | slightly more technologically advanced group decimating
             | their competition. The default position should be that
             | uneven advantage is extremely dangerous to those
             | disadvantaged. This idea that an intelligence significantly
             | greater than our own is benign just doesn't pass the smell
             | test.
             | 
             | From the tech perspective: higher order objectives are
             | insidious. While we may assume a narrow misalignment in
             | received vs intended objective of a higher order nature,
             | this misalignment can result in very divergent first-order
             | behavior. Misalignment in behavior is by its nature
             | destructive of value. The question is how much destruction
             | of value can we expect? The machine may intentionally act
             | in destructive ways as it goes about carrying out its
             | slightly misaligned higher order objective-guided behavior.
             | Of course we will have first-order rules that constrain its
             | behavior. But again, slight misalignment in first-order
             | rule descriptions are avenues for exploitation. If we
             | cannot be sure we have zero exploitable rules, we must
             | assume a superintelligence will find such loopholes and
             | exploit them to maximum effect.
             | 
             | Human history since we started using technology has been a
             | lesson on the outcome of an intelligent entity aimed at
             | realizing an objective. Loopholes are just resources to be
             | exploited. The destruction of the environment and other
             | humans is just the inevitable outcome of slight
             | misalignment of an intelligent optimizer.
             | 
             | If this argument is right, the only thing standing between
             | us and destruction is the AGI having reached its objective
             | before it eats the world. That is, there will always be
             | some value lost in any significant execution of an AGI
             | agent due to misalignment. Can we prove that the ratio of
             | value created to value lost due to misalignment is always
             | above some suitable threshold? Until we do, x-risk should
             | be the default assumption.
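              | 
              | (In symbols, the open question is whether we can guarantee
              | something like V_created / V_lost >= theta for every
              | significant run of an AGI agent, for some acceptable
              | threshold theta; V_created, V_lost and theta are just
              | shorthand for the ratio described above, not an
              | established formalism.)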
        
           | Veen wrote:
            | It is possible to believe that AI poses a threat, while also
           | thinking that the AI safety organizations currently sprouting
           | up are essentially grifts that will do absolutely nothing to
           | combat the genuine threat. Especially when their primary goal
           | seems to be the creation of well-funded sinecures for a group
           | of like-minded, ideologically aligned individuals who want to
           | limit AI control to a small group of wealthy technologists.
        
             | patrec wrote:
             | I agree.
             | 
             | But as you can see yourself, there are countless people
             | even here, in a technical forum, who claim that AI poses no
             | plausible threat whatsoever. I fail to see how one can
             | reasonably believe that.
        
         | sebzim4500 wrote:
         | General research into AI alignment does not require that those
          | models are controlled by a few corporations. On the contrary, the
         | research would be easier with freely available very capable
         | models.
         | 
         | This is only helpful in that a superintelligence well aligned
         | to make Sam Altman money is preferable to a superintelligence
         | badly aligned that ends up killing humanity.
         | 
         | It is fully possible that a well aligned (with its creators)
         | superintelligence is still a net negative for humanity.
        
           | mordae wrote:
           | If you consider a broader picture, unleashing a paperclip-
            | style crippled AI (aligned to raising $MEGACORP profit) on the
           | Local Group is almost definitely worse for all Local Group
           | inhabitants than annihilating ourselves and not doing that.
        
         | circuit10 wrote:
          | We don't really have a good solution; I guess that's why we
          | need more research into it.
          | 
          | Companies might argue that giving them control might help, but
          | I don't think most individuals working on it think that will
          | work.
        
           | sycamoretrees wrote:
           | Is more research really going to offer any true solutions?
           | I'd be genuinely interested in hearing about what research
           | could potentially offer (the development of tools to counter
           | AI disinformation? A deeper understanding of how LLMs work?),
           | but it seems to me that the only "real" solution is
           | ultimately political. The issue is that it would require
           | elements of authoritarianism and censorship.
        
             | wongarsu wrote:
             | A lot of research about avoiding extinction by AI is about
             | alignment. LLMs are pretty harmless in that they
             | (currently) don't have any goals, they just produce text.
             | But at some point we will succeed in turning them into
             | "thinking" agents that try to achieve a goal. Similar to a
             | chess AI, but interacting with the real world instead. One
             | of the big problems with that is that we don't have a good
             | way to make sure the goals of the AI match what we want it
             | to do. Even if the whole "human governance" political
             | problem were solved, we still couldn't reliably control any
             | AI. Solving that is a whole research field. Building better
             | ways to understand the inner workings of neural networks is
             | definitely one avenue.
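             | 
             | As a hedged, minimal illustration of that last avenue,
             | here is a toy PyTorch sketch of capturing a layer's
             | activations with a forward hook, the kind of basic
             | tooling interpretability work builds on (the tiny model
             | and the "hidden" label are illustrative, not from any
             | particular paper):
             | 
             |   import torch
             |   import torch.nn as nn
             | 
             |   # Toy network standing in for a real model
             |   model = nn.Sequential(
             |       nn.Linear(16, 32), nn.ReLU(),
             |       nn.Linear(32, 4),
             |   )
             | 
             |   captured = {}
             | 
             |   def save_activation(name):
             |       def hook(module, inputs, output):
             |           captured[name] = output.detach()
             |       return hook
             | 
             |   # Attach a hook to the hidden layer we want to inspect
             |   model[0].register_forward_hook(save_activation("hidden"))
             | 
             |   x = torch.randn(8, 16)
             |   _ = model(x)
             |   print(captured["hidden"].shape)  # torch.Size([8, 32])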
        
               | sycamoretrees wrote:
               | I see. Thanks for the reply. But I wonder if that's not a
               | bit too optimistic and not concrete enough. Alignment
               | won't solve the world's woes, just like "enlightenment"
               | (a word which sounds a lot like alignment and which is
               | similarly undefinable) does not magically rectify the
               | realities of the world. Why should bad actors care about
               | alignment?
               | 
               | Another example is climate change. We have a lot of good
               | ideas which, combined, would stop us from killing
               | millions of people across the world. We have the research
               | - is more "research" really the key?
        
               | pixl97 wrote:
               | Intelligence cannot be 'solved'; I would go further
               | and say that an intelligence without the option of
               | violence isn't an intelligence at all.
               | 
               | If you suddenly wanted to kill people, for example,
               | you could probably kill a few before you were stopped.
               | That is typically the limit of an individual's power.
               | Now, if
               | you were a corporation with money, depending on the
               | strategy you used you could likely kill anywhere from
               | hundreds to hundreds of thousands. Kick it up to
               | government level, and well, the term "just a statistic"
               | exists for a reason.
               | 
               | We tend to have laws around these behaviors, but they are
               | typically punitive. The law realizes that humans, and
               | human systems, will unalign themselves from "moral"
               | behavior (whatever that may be considered at the time).
               | When the lawgiver itself becomes unaligned, well, things
               | tend to get bad. Human alignment typically consists of
               | benefits (I give you nice things/money/power) or
               | violence.
        
         | toss1 wrote:
         | Yup.
         | 
         | While I'm not on this "who's-who" panel of experts, I call
         | bullshit.
         | 
         | AI does present a range of theoretical possibilities for
         | existential doom, from the "gray goo" and "paperclip
         | optimizer" scenarios to Bostrom's post-singularity runaway
         | self-improving superintelligence. I do see this as a genuine
         | theoretical concern that could potentially even be the Great
         | Filter.
         | 
         | However, the actual technology extant or even on the drawing
         | boards today is nothing even on the same continent as those
         | threats. We have a very vast (and expensive) set of
         | probability-of-occurrence vectors that amounts to a fancy
         | parlor trick, one that produces surprising and sometimes
         | useful results.
         | While some tout the clustering of vectors around certain sets
         | of words as implementing artificial creation of concepts, it's
         | really nothing more than an advanced thesaurus; there is no
         | evidence of concepts being wielded in relation to reality,
         | tested for truth/falsehood value, etc. In fact, the machines
         | are notorious and hilarious for hallucinating with a highly
         | confident tone.
         | 
         | We've created nothing more than a mirror of human works, and it
         | displays itself as an industrial-scale bullshit artist (where
         | bullshit is defined as expressions made to impress without care
         | one way or the other for truth value).
         | 
         | Meanwhile, this panel of experts makes this proclamation with
         | not the slightest hint of what type of threat is present that
         | would require any urgent attention, only that some threat
         | exists that is on the scale of climate change. They mention no
         | technological existential threat (e.g., runaway
         | superintelligence), nor any societal threat (deepfakes,
         | inherent bias, etc.). This is left as an exercise for the
         | reader.
         | 
         | What is the actual threat? It is most likely described in the
         | Google "We Have No Moat" memo[0]. Basically, once AI is out
         | there, these billionaires have no natural way to protect their
         | income and create a scalable way to extract money from the
         | masses, UNLESS they get cooperation from politicians to prevent
         | any competition from arising.
         | 
         | As one of those billionaires, Peter Thiel, said: "Competition
         | is for losers" [1]. Since they have not yet figured out a way
         | to cut out the competition using their advantages in leading
         | the technology or their advantages in having trillions of
         | dollars in deployable capital, they are seeking a legislated
         | advantage.
         | 
         | Bullshit. It must be ignored.
         | 
         | [0] https://www.semianalysis.com/p/google-we-have-no-moat-and-
         | ne...
         | 
         | [1] https://www.wsj.com/articles/peter-thiel-competition-is-
         | for-...
        
         | blueblimp wrote:
         | There is a way, in my opinion: distribute AI widely and give it
         | a diversity of values, so that any one AI attempting takeover
         | (or being misused) is opposed by the others. This is best
         | achieved by having both open source and a competitive market of
         | many companies with their own proprietary models.
        
           | ChatGTP wrote:
           | How do you give "AI" a diversity of values?
        
             | drvdevd wrote:
             | By driving down the costs of training and inference, and
             | then encouraging experiments. For LLMs, QLoRA is arguably a
             | great step in this direction.
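             | 
             | For the curious, a minimal sketch of what that looks
             | like in practice, assuming the Hugging Face transformers
             | + peft + bitsandbytes stack (the checkpoint name and the
             | LoRA hyperparameters are illustrative, not prescriptive):
             | 
             |   import torch
             |   from transformers import AutoModelForCausalLM, BitsAndBytesConfig
             |   from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
             | 
             |   # Load the frozen base model in 4-bit (QLoRA-style)
             |   bnb = BitsAndBytesConfig(
             |       load_in_4bit=True,
             |       bnb_4bit_quant_type="nf4",
             |       bnb_4bit_compute_dtype=torch.bfloat16,
             |   )
             |   base = AutoModelForCausalLM.from_pretrained(
             |       "huggyllama/llama-7b",   # any causal LM checkpoint
             |       quantization_config=bnb,
             |       device_map="auto",
             |   )
             |   base = prepare_model_for_kbit_training(base)
             | 
             |   # Train only small low-rank adapters on top of it
             |   lora = LoraConfig(
             |       r=16, lora_alpha=32, lora_dropout=0.05,
             |       target_modules=["q_proj", "v_proj"],
             |       task_type="CAUSAL_LM",
             |   )
             |   model = get_peft_model(base, lora)
             |   model.print_trainable_parameters()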
        
             | blueblimp wrote:
             | Personalization, customization, etc.: by aligning AI
             | systems to many users, we benefit from the already-existing
             | diversity of values among different people. This could be
             | achieved via open source or proprietary means; the
             | important thing is that the system works for the user and
             | not for whichever company made it.
        
         | mtkhaos wrote:
         | It's difficult, as most of the risk can be reinterpreted as
         | the work of a highly advanced user.
         | 
         | But that is where some form of hard zero-knowledge proof-of-
         | personhood mechanism NEEDS to come in. This could then be
         | used in conjunction with a ledger that tracks the deployment
         | of high-spec models, creating an easy means to audit them
         | and to deploy new advanced tests to ensure safety.
         | 
         | Really, what everyone also needs to keep in mind at the
         | larger scale is that final Turing test with no room for
         | deniability. And remember all those sci-fi movies and how
         | that moment is traditionally portrayed.
        
         | gfodor wrote:
         | Here's my proposal: https://gfodor.medium.com/to-de-risk-ai-
         | the-government-must-...
         | 
         | tl;dr: significant near term AI risk is real and comes from the
         | capacity for imagined ideas, good and evil, to be autonomously
         | executed on by agentic AI, not emergent superintelligent
         | aliens. To de-risk this, we need to align AI quickly, which
         | requires producing new knowledge. To accelerate the production
         | of this knowledge, the government should abandon
         | decelerationist policies and incentivize incremental alignment
         | R&D by AI companies. And, critically, a new public/private
         | research institution should be formed that grants privileged,
         | fully funded investigators multi-year funding cycles with total
         | scientific freedom and access to all state-of-the-art
         | artificial intelligence systems operating under US law to
         | maximize AI as a force multiplier in their research.
        
         | Animats wrote:
         | > I have yet to see a solution for "AI safety" that doesn't
         | involve ceding control of our most powerful models to a small
         | handful of corporations.
         | 
         | That's an excellent point.
         | 
         | Most of the near-term risks with AI involve corporations and
         | governments acquiring more power. AI provides power tools for
         | surveillance, oppression, and deception at scale. Those are
         | already deployed and getting better. This mostly benefits
         | powerful organizations. This alarm about strong AI taking over
         | is a diversion from the real near-term threat.
         | 
         | With AI, Big Brother can watch everything all the time. Listen
         | to and evaluate everything you say and do. The cops and your
         | boss already have some of that capability.
         | 
         | Is something watching you right now through your webcam? Is
         | something listening to you right now through your phone? Are
         | you sure?
        
         | NumberWangMan wrote:
         | Ok, so if we take AI safety / AI existential risk as real and
         | important, there are two possibilities:
         | 
         | 1) The only way to be safe is to cede control of the most
         | powerful models to a small group (highly regulated corporations
         | or governments) that can be careful.
         | 
         | 2) There is a way to make AI safe without doing this.
         | 
         | If 1 is true, then... sorry, I know it's not a very palatable
         | solution, and may suck, but if that's all we've got I'll take
         | it.
         | 
         | If 2 is true, great. But it seems less likely than 1, to me.
         | 
         | The important thing is not to unconsciously engage in
         | motivated reasoning: "AGI existential risk can't be a big
         | deal, because if it were, that would mean we have to cede
         | control to a small group of people to prevent disaster,
         | which would suck, so there must be something else going on,
         | like these people just wanting power."
        
           | Darkphibre wrote:
           | I just don't see how the genie is put back in the bottle.
           | Optimizations and new techniques are coming in at a breakneck
           | pace, allowing for models that can run on consumer hardware.
        
       | efitz wrote:
       | No signatories from Amazon or Meta.
       | 
       | Also: they focus on extinction events (how are you gonna
       | predict that?) but remain silent on all the ways that AI
       | already sucks when connected to systems that can cause human
       | suffering, e.g. sentencing[1].
       | 
       | My opinion: this accomplishes nothing, like most open letter
       | petitions. It's virtue signaling writ large.
       | 
       | [1]
       | https://www.technologyreview.com/2019/01/21/137783/algorithm...
        
         | seydor wrote:
         | Isn't some AI already causing car crashes?
        
         | shrimpx wrote:
         | > AI already sucks
         | 
         | Not to mention what 'automation' or 'tech-driven capitalism'
         | has already done to society over the past 100 years with
         | effects on natural habitats and human communities. Framing
         | 'AI risk' as a new risk sort of implies it's all been dandy
         | so far, and suddenly there's this new risk.
        
       ___________________________________________________________________
       (page generated 2023-05-30 23:01 UTC)