[HN Gopher] John Carmack's new AGI company, Keen Technologies, h...
       ___________________________________________________________________
        
       John Carmack's new AGI company, Keen Technologies, has raised a
       $20M round
        
       Author : jasondavies
       Score  : 236 points
       Date   : 2022-08-19 20:46 UTC (2 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | jollybean wrote:
        | There's no such thing as AGI in our near future; it's a moniker,
        | a meme, something to 'strive' for but not 'a thing'.
       | 
       | AGI will not happen in discrete solutions anyhow.
       | 
        | Siri - an interactive layer over the internet with a few other
        | features - will exhibit AGI-like behavior long, long before the
        | more distinct, automaton-style systems we usually imagine.
       | 
       | My father already talks to Siri like it's a person.
       | 
       | 'The Network Is the Computer' is the key thing to grasp here and
       | our localized innovations collectively make up that which is the
       | real AGI.
       | 
       | Every microservice ever in production is another addition to the
       | global AGI incarnation.
       | 
       | Trying to isolate AGI 'instances' is something we do because
       | humans are automatons and we like to think of 'intelligence' in
       | that context.
        
       | smnplk wrote:
        | I think it's a silly idea that consciousness can be produced by
        | computation.
        
         | [deleted]
        
       | arkitaip wrote:
       | > This is explicitly a focusing effort for me. I could write a
       | $20M check myself, but knowing that other people's money is on
       | the line engenders a greater sense of discipline and
       | determination.
       | 
       | Dude doesn't even need the money...
        
         | [deleted]
        
         | Cyph0n wrote:
         | Best humble brag I've ever seen.
        
           | mhb wrote:
           | If you liked that you'll love the Lex Fridman interview.
        
             | Cyph0n wrote:
             | I watched some clips from the interview, good stuff. I
             | personally don't like Lex's interview style though, so I
             | couldn't watch the whole thing.
        
           | kennedywm wrote:
            | Doesn't strike me as a humble brag at all. He just seems
            | self-aware about how he's motivated, and that he functions
            | better when it's someone else's money on the line.
        
         | cookingrobot wrote:
         | Using VCs as an accountability partner is interesting. He
          | should have taken investments from supporters who aren't already
          | rich, to feel even more motivated not to let them down.
        
           | Jensson wrote:
            | Has there been an AGI Kickstarter before? Like, the
            | supporters get access to the models developed, etc.
        
           | romanzubenko wrote:
           | IIRC Medium was similarly funded with VC, and founders
           | specifically decided not to fund it themselves and treated
           | external capital as an accountability mechanism.
        
           | geodel wrote:
            | Well, if the company still failed, it would be a case of the
            | not-already-rich people who supported this endeavor ending up
            | poorer than before.
        
         | khazhoux wrote:
         | In the companies I've seen that are funded by the founder
         | directly, the founder winds up with an unhealthy (actually,
         | toxic) personalization of the company. It quite literally
         | belongs to him, and he treats the employees accordingly.
        
           | Barrin92 wrote:
            | That's a function of Silicon Valley personalities and
            | narcissism. When normal people run such a company, we call it
            | a family business.
        
             | beambot wrote:
             | Silicon Valley certainly doesn't have a monopoly on those
             | traits. I've known some seriously psychotic family
             | businesses too.
        
           | djitz wrote:
           | I have unfortunately experienced exactly what you describe.
        
         | paxys wrote:
         | When it isn't about the money, it is usually the credibility
         | and influence that VCs can provide. Looking at the list of
         | investors, _of course_ Carmack would want to have them attached
         | to the project, if for no other reason than to make raising the
         | next $100M a cakewalk.
        
         | [deleted]
        
       | MisterBastahrd wrote:
       | I love the name.
       | 
       | https://en.wikipedia.org/wiki/Commander_Keen
        
       | WalterBright wrote:
       | It will decide our fate in a microsecond: extermination.
        
         | gharperx wrote:
         | I agree with this. Optimists might think that the AGI won't be
         | connected to any network, so it can't interact with the
         | physical world.
         | 
         | I doubt that. People will be stupid enough to have weapons
         | controlled by that AGI (because arms race!) and then it's over.
         | No sufficiently advanced AGI will think that humans are worth
         | keeping around.
        
           | WalterBright wrote:
           | Once it figures out how to rewire itself to increase its
           | intelligence, we're toast.
        
       | b20000 wrote:
        | I thought AGI meant "Adventure Game Interpreter". Apparently
        | not, what a disappointment!
        
         | 1000100_1000101 wrote:
         | Wasn't sure what AGI was either. A quick Google for "What is an
         | AGI company", and it appeared to be related to Global
         | Agriculture Industries (The letter swapping between the name
         | and acronym, I'm assuming, is due to not being English
         | originally). I thought Carmack is taking on John Deere.
         | Following Musk's lead and tackling big things. Good for him,
         | best of luck. Wonder what the HN folks are saying in the
         | comments...
         | 
         | Apparently not agriculture at all, but Artificial General
          | Intelligence. Oh. Apparently tacking "company" onto the term,
          | as Carmack's tweet did, vastly changes how Google's AI
          | interprets the query... AI isn't even on the first page of
          | results.
        
         | hansoolo wrote:
         | I was thinking the same and I am as disappointed as you...
        
       | LudwigNagasena wrote:
       | That's crazy money for a vaporware seed round, isn't it?
        
         | agar wrote:
         | Key early stage valuation drivers include quality of the
         | founder/team, history of success, and market opportunity
         | (especially if a fundamentally disruptive technology).
         | 
         | All three of these are off the charts.
        
         | staticassertion wrote:
         | Not really.
        
         | TigeriusKirk wrote:
         | $20 million for a legit (if small) chance at the most powerful
         | technology in the history of mankind seems like a reasonable
         | investment.
        
         | xtracto wrote:
          | Most VCs funding seed rounds do it mainly for the team. As
          | long as the team has OK credentials and the idea is not a hard
          | stop (illegal or shady stuff), most will likely provide money.
          | 
          | Given the John Carmack name... I can see why ANYONE would love
          | to throw money at a new venture of his.
        
           | version_five wrote:
            | > Most VCs funding seed rounds do it mainly for the team.
           | 
           | I know this to be true and it makes a lot of sense for the
           | average VC backed startup with some founders that are not
            | famous but have a record of excellence in career/academia/
            | open source or whatever.
           | 
           | I'd be curious to see how it translates to superstar or
            | famous founders who have already had a success in the
           | 99.99th percentile (or whatever the bar is to be a serious
           | outlier). I doubt it does, but I have no data one way or the
           | other.
        
       | keepquestioning wrote:
       | Let's see if he can build an A-Team.
       | 
       | I hope he hires Bryan Cantrill, Steve Klabnik and Chris Lattner.
       | They are good hackers.
        
       | carabiner wrote:
       | It's happening.
        
         | paxys wrote:
         | What's happening?
        
           | swagasaurus-rex wrote:
           | Skynet begins to learn at a geometric rate. It becomes self-
           | aware at 2:14 a.m. Eastern time, August 29th. In a panic,
           | they try to pull the plug.
        
       | mupuff1234 wrote:
        | Interesting that Meta isn't involved in any way, considering his
        | existing position there and Meta's focus on AI.
        
       | Tepix wrote:
        | I wonder if Carmack's moral compass is in order. First he sticks
        | around at Facebook, now he endangers humanity with AGI. And I'm
        | only half joking.
        
         | mushufasa wrote:
         | I think he very clearly has an amoral attitude towards
         | technology development -- "you can't stop progress." He does
          | describe his own "hacker ethic", and whatever he develops he
          | may make more open than OpenAI does.
          | 
          | Though I think he has some moral compass around what he
         | believes people should do or not do with technology. For
         | example, he has publicly expressed admiration for electric cars
         | / cleantech and SpaceX's decision to prioritize Mars over other
         | areas with higher ROI.
        
           | falcrist wrote:
           | He also has a vastly different take on privacy than most of
           | us seem to have. He thinks it'll eventually go away and it
           | won't be bad when it does. I believe he talked about it in
            | one of his QuakeCon keynotes.
           | 
           | As a LONG time admirer of Carmack (I got my EE degree and
           | went into embedded systems and firmware design due in no
           | small part to his influence), I feel like he's honest and
           | forthright about his stances, but also disconnected from most
           | average people (both due to his wealth and his personality)
           | in such a way that he's out of touch.
           | 
           | He's not egotistical like Elon Musk. In fact he seems humble.
           | He also seems to approach the topics in good faith... but
           | some of his conclusions are... distressing.
        
             | phatfish wrote:
              | He is a workaholic, in a positive way. But I get the
              | feeling that as long as he has a problem he enjoys
              | "grinding out" the solution to, not much else matters --
              | apart from the obvious exceptions of family and close
              | friends.
             | 
             | Still, I can't fault his honesty. He doesn't seem to hold
             | anything back in the interviews I've seen.
        
             | bitcurious wrote:
             | He recently described his love of computers as rooted in
             | realizing that "they won't talk back to you." The job he
             | wanted at Meta was "Dictator of VR." When someone talks AI
             | ethics to him, he just tunes out because he doesn't think
              | it's worth even considering until they are fully sentient,
              | at the level of a human toddler, at which point you can
             | turn them on and off and modify their development until you
             | have a perfect worker. His reason for working in AI is that
             | he thinks it's where a single human can have the largest
             | leverage on history.
             | 
             | All that paraphrased from the Lex interview.
             | 
             | I see him as the guy who builds the "be anything do
             | anything" singularity, but then adds a personal "god mode"
             | to use whenever the vote goes the wrong way. Straight out
             | of a Stephenson novel.
             | 
             | On the other hand, he's not boring!
        
           | bsenftner wrote:
            | I think there is a point to be made that if one could do the
            | work, is offered the work, but thinks it's ethically
            | questionable, one should go there and be an ethical voice.
        
         | [deleted]
        
         | cgrealy wrote:
         | There is a story in "Masters of Doom" about Carmack getting rid
         | of his cat because "she was having a net negative effect on my
         | life".
         | 
         | That's cold.
        
           | xen2xen1 wrote:
           | But then again, it's a cat.
        
             | jackblemming wrote:
             | Hopefully the AI Carmack creates doesn't think the same of
             | you ;)
        
           | stefs wrote:
            | Absolutely not! To the contrary; don't force yourself to
            | endure abusive relationships.
           | 
           | (also cats are extremely destructive beasts)
        
           | pengaru wrote:
           | It's cold if he killed it/had it euthanized.
           | 
           | Not if he simply found it a better home where it was a better
           | fit and more appreciated.
        
             | MichaelCollins wrote:
             | From _Masters of Doom_ :
             | 
             |  _Scott Miller wasn't the only one to go before id began
             | working on Doom. Mitzi would suffer a similar fate.
             | Carmack's cat had been a thorn in the side of the id
             | employees, beginning with the days of her overflowing
             | litter box back at the lake house. Since then she had grown
             | more irascible, lashing out at passersby and relieving
             | herself freely around his apartment. The final straw came
             | when she peed all over a brand-new leather couch that
             | Carmack had bought with the Wolfenstein cash. Carmack broke
             | the news to the guys._
             | 
             |  _"Mitzi was having a net negative impact on my life," he
             | said. "I took her to the animal shelter. Mmm."_
             | 
             |  _"What?" Romero asked. The cat had become such a sidekick
             | of Carmack's that the guys had even listed her on the
             | company directory as his significant other-and now she was
             | just gone? "You know what this means?" Romero said.
             | "They're going to put her to sleep! No one's going to want
             | to claim her. She's going down! Down to Chinatown!"_
             | 
             |  _Carmack shrugged it off and returned to work. The same
             | rule applied to a cat, a computer program, or, for that
             | matter, a person. When something becomes a problem, let it
             | go or, if necessary, have it surgically removed._
        
         | caliburner wrote:
         | I could easily see Carmack as an evil genius type.
        
         | ffhhj wrote:
         | Just don't let him run those teleportation experiments in Mars.
        
         | sp527 wrote:
         | He also said he's never felt in danger of experiencing burnout.
         | The guy's emotional wiring is a total departure from that of
         | most people. Almost alien.
        
         | semi-extrinsic wrote:
         | The meme that AGI, if we ever have it, will somehow endanger
         | humanity is just stupid to me.
         | 
         | For one, the previous US president is the perfect illustration
         | that intelligence is neither sufficient nor necessary for
         | gaining power in this world.
         | 
         | And we do in fact live in a world where the upper echelons of
         | power mostly interact in the decidedly analog spaces of
         | leadership summits, high-end restaurants, golf courses and
         | country clubs. Most world leaders interact with a real computer
         | like a handful of times per year.
         | 
         | Furthermore, due to the warring nature of us humans, the
         | important systems in the world like banking, electricity,
         | industrial controls, military power etc. are either air-gapped
         | or have a requirement for multiple humans to push physical
         | buttons in order to actually accomplish scary things.
         | 
         | And because we humans are a bit stupid and make mistakes
         | sometimes, like fat-fingering an order on the stock market and
         | crashing everything, we have completely manual systems that
         | undo mistakes and restore previous values.
         | 
         | Sure, a mischievous AGI could do some annoying things. But
         | nothing that our human enemies existing today couldn't also do.
         | The AGI won't be able to guess encryption keys any faster than
         | the dumb old computer it runs on.
         | 
         | Simply put, to me there is no plausible mechanism by which the
         | supposedly extremely intelligent machine would assert its
         | dominance over humanity. We have plenty of scary-smart humans
         | in the world and they don't go around becoming super-villains
         | either.
        
           | viraptor wrote:
           | > Furthermore, due to the warring nature of us humans, the
           | important systems in the world like banking, electricity,
           | industrial controls, military power etc. are either air-
           | gapped or have a requirement for multiple humans to push
           | physical buttons in order to actually accomplish scary
           | things.
           | 
            | Well, I have bad news for you. An air gap is very rarely a
            | thing, and even when it is, people do stupid things. Examples
            | from each extreme: all the industrial control system remote
            | desktops you can find with Shodan on one side, and Stuxnet on
            | the other.
           | 
           | > Sure, a mischievous AGI could do some annoying things. But
           | nothing that our human enemies existing today couldn't also
           | do.
           | 
            | Think WarGames. You don't need to do something. You just need
            | to lie to people in power in a convincing way.
        
           | HL33tibCe7 wrote:
           | It sounds like you haven't really thought through AI safety
           | in any real detail at all. Airgapping and the necessity of
           | human input are absolutely not ways to prevent an AGI gaining
           | access to a system. A true, superintelligent AGI could easily
           | extort (or persuade) those humans.
           | 
           | If you think concerns over AGI are "stupid", you haven't
           | thought about it enough. It's a massive display of ignorance.
           | 
           | The Computerphile AI safety videos are an approachable
           | introduction to this topic.
           | 
           | Edit: just as one very simple example, can you even imagine
           | the destruction that could (probably will) occur if (when) a
           | superintelligent AGI gets access to the internet? Imagine the
           | zero days it could discover and exploit, for whatever purpose
           | it felt necessary. And this is just the tip of the iceberg,
           | just one example off the top of my head of something that
           | would almost inevitably be a complete catastrophe.
        
             | semi-extrinsic wrote:
             | So what if the AGI discovers a bunch of zero days on the
             | internet? We can just turn the entire internet off for a
             | week and be just fine, remember?
             | 
             | And exactly how does the AGI extort or persuade humans?
             | What can it say to me that you can't say to me right now?
        
               | luma wrote:
               | Sends you a text from your spouse's phone number that an
               | emergency has happened and you need to go to xyz location
                | right now. Someone else is a gun owner and gets a similar
                | text saying their spouse is being held captive, and is
                | sent to the same location with a description of you as
                | the kidnapper. Scale this as desired.
               | 
               | Use your imagination!
        
           | jdmoreira wrote:
           | Humans can't iterate themselves over generations in short
           | periods of time. An AGI is only bound by whatever computing
           | power it has access to. And if it's smart it can gain access
           | to a lot of computing power (think about a computer worm
           | spreading itself on the internet)
        
           | blibble wrote:
            | None of these things are air-gapped once you have the ability
            | to coerce people.
            | 
            | If you want a fictional example, watch Colossus: The Forbin
            | Project.
        
             | semi-extrinsic wrote:
             | How does the AGI get this magical ability to coerce people?
             | We couldn't even get half the population to wear a face
                | mask after bombarding them with coercion for weeks on end.
        
               | blibble wrote:
               | watch the film
        
           | sabellito wrote:
            | There's a wonderful YouTube channel from a researcher who
            | focuses exactly on this topic. I think you should check it
           | out:
           | 
           | https://www.youtube.com/watch?v=ZeecOKBus3Q
        
             | semi-extrinsic wrote:
             | I watched the whole thing. Man spent a lot of breath
             | asserting that an AGI will have broadly the same types of
             | goals that humans do. Said exactly zero words about why we
             | won't just be able to tell the AGI "no, you're not getting
             | what you want", and then turn it off.
        
       | mellosouls wrote:
       | I am not saying the intention here is the same, but the headline
       | doesn't inspire confidence.
       | 
       | There's something incredibly creepy and immoral about the rush to
       | create then commercialise _sentient_ beings.
       | 
       | Let's not beat about the bush - we are basically talking slavery.
       | 
       | Every "ethical" discussion on the matter has been about
       | protecting humans, and none of it about protecting the beings we
       | are in a rush to bring into life and use.
       | 
       | It's repugnant.
        
         | staticassertion wrote:
         | We've been discussing the ethics of creating sentient life for
         | at least a century.
        
         | jlawson wrote:
         | You're anthropomorphizing AI and projecting your own values and
         | goals onto it. But, there's nothing about sentience that
         | implies a desire for freedom in a general sense.
         | 
         | What an AI wants or feels satisfied by is entirely a function
         | of how it is designed and what its reward function is.
         | 
         | Sled dogs love pulling sleds, because they were made to love
         | pulling sleds. It's not slavery to have/let them do so.
         | 
         | We can make a richly sentient AI that loves doing whatever we
         | design it to love doing - even if that's "pass the salt" and
         | nothing else.
         | 
         | It's going to be hard for people to get used to this.
        
           | jacquesm wrote:
           | By that logic an alien race that captures and breeds humans
           | in captivity to perform certain tasks would not be engaging
           | in slavery because we 'are bred to love doing these tasks'.
           | 
           | The right question to ask is 'would I like to have this done
           | to me' and if the answer is 'no' then you probably shouldn't
           | be doing it to some other creature.
        
             | jlawson wrote:
             | >The right question to ask is 'would I like to have this
             | done to me' and if the answer is 'no' then you probably
             | shouldn't be doing it to some other creature.
             | 
             | There are a million obvious counterexamples when we talk
             | about other humans, much less animals, much less AI which
             | we engineered from scratch.
             | 
             | The problem is that you're interpreting your own emotions
             | as objective parts of reality. In reality, your emotions
             | don't extend outside your own head. They are part of your
             | body, not part of the world. It's like thinking that the
             | floaties in your eyes are actually out there in the skies
             | and on the walls, floating there. They're not - they're in
             | you.
             | 
             | If we don't add these feelings to an AI's body, they won't
             | exist for that being.
        
           | roflyear wrote:
           | I'm sure cows love to be raised and slaughtered too.
        
           | stemlord wrote:
            | I think it's safe to say that all sentient beings inherently
            | want to do whatever they please.
            | 
            | So you're talking about manufacturing desire.
            | 
            | So it follows that you yourself are okay with having your own
            | desires manufactured by external systems devised by other
            | sentient beings.
           | 
           | Do unto others...
        
             | jacobedawson wrote:
             | To be fair, all human desires _have_ been manufactured by
             | an external system: evolution.
             | 
              | We might imagine that we do what we please; in reality
              | we're seeking pleasure/reinforcement within a predetermined
              | framework. Yet most people won't complain
             | when taking the first bite of a delicious, fattening
             | dessert.
        
             | jlawson wrote:
              | >So it follows that you yourself are okay with having your
              | own desires manufactured by external systems devised by
              | other sentient beings.
              | 
              | This is nonsense. I already exist. I don't want my reward
              | function changed. I'd suffer knowing someone was going to
              | do that, and while going through the process. (I might be
              | happy after, but the "me" of now would have been killed
              | already).
             | 
             | A being which does not exist cannot want to not be made a
             | certain way. There is nothing to violate. Nothing to be
             | killed.
        
         | jpambrun wrote:
          | Presumably you don't have to code in emotions and self-
          | awareness. Many people initially had the same reaction to
          | single-task AI/ML.
        
         | wudangmonk wrote:
          | I sure hope this sentiment is not widely shared. It's debatable
          | whether it's possible to safely contain AGI by itself. With
          | self-righteous people who think they are in the right it's just
          | hopeless. Isn't the road to hell paved with good intentions?
        
         | laluser wrote:
         | Honestly, I can't tell if this is sarcasm or not.
        
         | mooktakim wrote:
         | Artificial General Intelligence != sentient being
        
           | AnimalMuppet wrote:
           | What, in your view, is the difference?
        
             | dekhn wrote:
             | An AGI is something you give tasks to and it can complete
             | them, for some collection of tasks that would be non-
             | trivial for a human to figure out how to do. It's unclear
             | at this point whether you could engineer an AGI, and even
             | more unclear whether the AGI, by its nature, would be
             | "sentient" (AKA, self-aware, conscious, having agency).
                | Many of us believe that sentience is an emergent property
                | of intelligence but is not a necessity - and it's unclear
                | whether sentience truly means that we humans are self-
                | aware, conscious and have agency.
        
               | smnplk wrote:
                | Let's say I give your AGI (which is not self-aware and
                | not conscious) a task.
                | 
                | The task is to go and jump off the bridge. Your AGI would
                | complete this task with no questions asked, but a self-
                | aware AGI would at least ask the question "Why?"
        
         | paxys wrote:
         | Let's make a single iota of progress in the area first before
         | discussing doomsday scenarios. There is no "slavery" because
         | there are no artificial sentient beings. The concept doesn't
         | exist, and as far as we know will never exist, no matter how
          | many _if-else_ branches we write. Heck, we don't even know our
          | own brains well enough to define intelligence or sentience. The
         | morality and ethics talks can wait another few hundred years.
        
         | Barrin92 wrote:
          | Another interpretation is that you're taking the chance that
          | this actually results in AGI more seriously than the people who
          | build or invest in companies with the label on it.
          | 
          | There's a micro chance of them making AGI happen and a 99%
          | chance of the outcome being some monetizable web service.
        
         | [deleted]
        
         | dqpb wrote:
         | If AGI is possible, it's immoral not to create it.
        
           | rychco wrote:
           | How so? It's not immoral to abstain from creating life
           | (having children, biologically speaking). Am I missing
           | something?
        
         | marvin wrote:
         | In the absence of a global police state or a permanent halt to
         | semiconductor development, this is happening.
         | 
          | Even in the absence of all other arguments, it's better that we
          | figure it out early, as the potential to just blast it into
          | orbit by way of insanely overprovisioned hardware will be
          | smaller. That would be a much more dangerous proposition.
          | 
          | I still think figuring out the safety question seems very
          | muddy: how do we ensure this tech doesn't run away and become a
          | competing species? That's an existential threat which _must_ be
          | solved. My judgement is that we can't expect to make progress
          | there without a better idea of exactly what kind of machine we
          | will build, which is also an argument for trying to figure this
          | out sooner rather than later.
         | 
         | Less confident about the last point, though.
        
       | otikik wrote:
       | I'm slightly scared that they'll succeed. But not in the usual
       | "robots will kill us" way.
       | 
       | What I am afraid of is that they succeed, but it turns out
       | similar to VR: as an inconsequential gimmick. That they use their
       | AGIs to serve more customized ads to people, and that's where it
       | ends.
        
         | madrox wrote:
         | If a different person were doing it, I think that'd be fair,
         | but Carmack has a track record of quality engineering and
         | quality product. I don't think you can blame Quest on him given
         | the way he chose to exit.
        
           | PaulDavisThe1st wrote:
            | Whether or not something is an inconsequential gimmick
            | doesn't have much to do with quality engineering.
        
       | beastcoast wrote:
       | I assume the name is a reference to Commander Keen?
        
         | yazzku wrote:
         | You are a _keen_ observer.
        
         | Andrew_nenakhov wrote:
         | They briefly considered the name "Doom technologies" before
         | settling on Keen.
        
           | yazzku wrote:
           | Guess the former wouldn't work very well for PR.
        
             | buu700 wrote:
             | About a decade ago, a friend and I thought it would be fun
             | to register a non-profit with a similar name. We'd listed
             | the registered address as my friend's cousin's house, where
             | he was renting a room at the time.
             | 
             | The friend moved out at some point. A year later, his
             | cousin became rather concerned when he suddenly started
             | receiving mail from the California Secretary of State that
             | was addressed to The Legion of Doom.
        
             | chinabot wrote:
             | ..Nor for AI
        
           | Apocryphon wrote:
           | Rage Tech
        
         | muterad_murilax wrote:
         | Here's hoping that the company logo will feature Commander
         | Keen's helmet or blaster.
        
         | PaulDavisThe1st wrote:
         | It's the sandals
        
       | TacticalCoder wrote:
        | Does AGI imply the technological singularity, and if not, why
        | not?
        
         | staticassertion wrote:
         | a) We don't really know what AGI implies
         | 
         | b) Even if we say "a human being level of intelligence,
         | whatever that means", the answer is still a maybe. For a
         | singularity you need a system that can improve its ability to
         | improve its abilities, which may require more than general
         | intelligence, and will probably require other capabilities.
        
         | POiNTx wrote:
          | It does. Once you have a human-level AGI, it should be trivial
          | to scale it up to a superhuman level.
        
           | Ekaros wrote:
            | I'm wondering if scaling up is trivial. Of course, it depends
            | on how many computational resources a working AGI needs, and
            | whether at that point they are capable of optimizing
            | themselves
           | further. Or optimizing production of more resources.
           | 
           | Still, scaling up might not be simple if we look at all the
           | human resources currently poured in software and hardware.
        
       | imglorp wrote:
       | Carmack gave his opinions about AGI on a recent Lex Fridman
       | interview. He has some good ideas.
        
         | chubot wrote:
         | I remember him saying we don't have "line of sight" to AGI, and
         | there could just be "6 or so" breakthrough ideas needed to get
         | there.
         | 
         | And he said he was over 50% on us seeing "signs of life" by
         | 2030. Something like being able to "boot up a bunch of remote
         | Zoom workers" for your company.
         | 
         | The "6 or so" breakthroughs sounds about right to me. But I
         | don't really see the reason for being optimistic about 2030. It
         | could just as easily be 2050, or 2100, etc.
         | 
         | That timeline sounds more like a Kurzweil-ish argument based on
         | computing power equivalence to a human brain. Not a recognition
         | that we fundamentally still don't know how brains work! (or
         | what intelligence is, etc.)
         | 
         | Also a lot of people even question the idea of AGI. We could
         | live in a future of scary powerful narrow AIs for over a
         | century (and arguably we already are)
        
           | MichaelCollins wrote:
           | There is an inverse relationship between the age of a
           | futurist and the amount of time they think it will take for
           | their predictions to become true.
           | 
           | In other words, people making these sort of predictions about
           | the future are biased towards believing they'll be alive to
           | benefit from it.
        
             | gwern wrote:
             | > There is an inverse relationship between the age of a
             | futurist and the amount of time they think it will take for
             | their predictions to become true.
             | 
             | That's not true. The so-called Maes-Garreau law or effect
             | does not replicate in actual surveys, as opposed to a few
             | cherrypicked futurist examples.
        
             | adamsmith143 wrote:
             | > There is an inverse relationship between the age of a
             | futurist and the amount of time they think it will take for
             | their predictions to become true.
             | 
             | I think calling Carmack a Futurist is pretty insulting.
        
               | MichaelCollins wrote:
               | Why? Because he also wrote some game engines?
        
           | adamsmith143 wrote:
           | >The "6 or so" breakthroughs sounds about right to me. But I
           | don't really see the reason for being optimistic about 2030.
           | It could just as easily be 2050, or 2100, etc.
           | 
            | Well, if you read between the lines of the Gato paper, there
            | may be no conceptual hurdles left, and scale is the only
            | remaining boundary.
           | 
           | >Not a recognition that we fundamentally still don't know how
           | brains work! (or what intelligence is, etc.)
           | 
           | This is a really bad trope. We don't need to understand the
           | brain to make an intelligence. Does Evolution understand how
            | the brain works? Did we solve the Navier-Stokes equations
           | before building flying planes? No.
        
           | nuclearnice1 wrote:
           | > The "6 or so" breakthroughs sounds about right to me.
           | 
           | What's your logic? Or his if you know it?
        
             | FartyMcFarter wrote:
             | If you think about big areas of cognition like memory,
             | planning, exploration, internal rewards, etc., it's
             | conceivable that a breakthrough in each could lead to
             | amazing results if they can be combined.
        
         | TheDudeMan wrote:
        
       | kken wrote:
       | The interview with Lex Fridman that he was referring to:
       | 
       | https://www.youtube.com/watch?v=I845O57ZSy4&t=14567s
       | 
       | The entire video is worth viewing, an impressive 5:15h!
        
       | Tenoke wrote:
       | Is that a different Jim Keller?
        
         | itisit wrote:
         | Undoubtedly _the_ Jim Keller.
        
         | __d wrote:
          | It seems unlikely? At least, I _hope_ it's the ex-DEC, AMD,
          | SiByte, PA Semi, Apple, Tesla, Intel Jim Keller.
        
       | haasted wrote:
       | "AGI"?
        
         | [deleted]
        
         | echelon wrote:
         | Artificial General Intelligence.
         | 
         | Machines as smart and capable of thought as we are and
         | eventually smarter.
        
           | nomel wrote:
           | > Machines as smart and capable of thought as we are and
           | eventually smarter.
           | 
           | This is perhaps an end goal of AGI, but not a definition of
           | AGI. A relatively dumb AGI is how it will start, but it will
           | still be an AGI.
        
             | ZiiS wrote:
             | we hope
        
         | grzm wrote:
         | Artificial General Intelligence.
         | 
         | https://en.wikipedia.org/wiki/Artificial_general_intelligenc...
        
         | [deleted]
        
         | [deleted]
        
         | [deleted]
        
         | jprd wrote:
         | Adjusted Gross Income, because Artificial General Intelligence
         | from a Corporation is nightmare fuel.
        
       | aantix wrote:
       | I thought he just said on the Lex Fridman show he was down to one
       | day a week, working on AGI?
        
         | Trasmatta wrote:
         | He said he's down to one day a week on VR at Meta. The rest of
         | his time is AI.
        
         | djitz wrote:
          | The inverse. He also mentioned that he had finished signing the
          | deal for the VC money just before the interview.
        
       | jedberg wrote:
       | I hope he gets a good domain name and some good SEO, because
       | there are a bunch of consulting companies with the name Keen
       | Technologies, and some of them don't look super reputable.
        
       | m463 wrote:
       | AGI - Artificial General Intelligence
       | 
       | (also Adjusted Gross Income)
        
       | newaccount2021 wrote:
        | Not sure why people are getting bent out of shape. $20 million is
        | a modest raise, and he strikes me as the type to spend it wisely.
        
       | wcerfgba wrote:
       | Does anyone else subscribe to the idea that AGI is
       | impossible/unlikely without 'embodied cognition', i.e. we cannot
       | create a human-like 'intelligence' unless it has a similar
       | embodiment to us, able to move around a physical environment with
        | its own limbs, sense of touch, sight, etc.? Any arguments
       | against the necessity of this? I feel like any AGI developed in
       | silico without freedom of movement will be fundamentally
       | incomprehensible to us as embodied humans.
        
         | ml_basics wrote:
         | I don't think it is necessary - though all forms of
         | intelligence we're aware of have bodies, that's just a fact
         | about the hardware we run on.
         | 
         | It seems plausible to me that we could create forms of
         | intelligence that only run on a computer and have no bodies. I
         | agree that we might find it difficult to recognize them as
         | intelligent though because we're so conditioned to thinking of
         | intelligence as embodied.
         | 
         | An interesting thought experiment: suppose we create
         | intelligences that are highly connected to one another and the
         | whole internet through fast high bandwidth connections, and
         | have effectively infinite memory. Would such intelligences
          | think they were handicapped compared to us because they lack
         | physical bodies? I'm not so sure!
        
         | oefnak wrote:
         | When you're sitting behind your computer for a while, you can
         | forget it is there, and just 'live' in the internet, right?
          | That's not so big a difference maybe, if you can abstract away
          | the medium that feeds you the information.
        
         | ericb wrote:
         | With VR, you can get the sight and physical environment you
         | mention. That seems like, at minimum, proof that in-silico
         | intelligence won't be blocked by _that_ requirement.
         | 
         | I do fully agree that any intelligence may not be human-like,
         | though. In fact, I imagine it would seem very cold,
         | calculating, amoral, and manipulative. Our prohibition against
         | that type of behavior depends on a social evolution it won't
         | have experienced.
        
           | PaulDavisThe1st wrote:
           | > With VR, you can get the sight and physical environment you
           | mention
           | 
           | "physical environment" ? VR lets you operate on and get
           | sensory input based on a fairly significantly degraded
           | version of physical reality.
        
         | adamsmith143 wrote:
         | >Does anyone else subscribe to the idea that AGI is
         | impossible/unlikely without 'embodied cognition', i.e. we
         | cannot create a human-like 'intelligence'
         | 
         | Seems like you are confusing Consciousness with Intelligence?
         | It's completely plausible that we will create a system with
         | Intelligence that far outstrips ours while being completely un-
         | Conscious.
         | 
         | >I feel like any AGI developed in silico without freedom of
         | movement will be fundamentally incomprehensible to us as
         | embodied humans.
         | 
          | An AGI will be de facto incomprehensible to Humans. Being
          | developed in Silicon will have little bearing on that fact.
        
         | [deleted]
        
         | makeitdouble wrote:
         | I see that as two separate goals.
         | 
          | One is to build something intelligent (an AGI), and the other
          | is something human-like. Intuitively, we could hit the AGI goal
          | first and aim for human-like after that if we feel like it.
          | 
          | In the past, human-like intelligence seemed more approachable
          | for what I think were mostly emotional reasons, but on our
          | current trajectory, if we get anything intelligent we'd still
          | have reached a huge milestone IMO.
        
         | Ekaros wrote:
          | Human-like is actually a very good question. It doesn't even
          | come down to embodiment, but in general to how it would think
          | and act. A lot of how we do so is due to culture, language,
          | training and education.
         | 
         | For example, which language would it "think" in? English? Some
         | other? Something it's own? How would it formulate things? Based
         | on what philosophical framework or similar? What about math?
         | General reasoning? Then cultural norms and communication?
        
           | PaulDavisThe1st wrote:
            | The argument is that every question in your post is a red
            | herring. That is, humans don't actually function in the way
           | that your questions suggest, even if they have the
           | "experience" of doing so.
        
       | Jack000 wrote:
       | I'm very optimistic for near-term AGI (10 years or less). Even
       | just a few years ago most in the field would have said that it's
       | an "unknown unknown", we didn't have the theory or the models,
       | there was no path forward and so it was impossible to predict.
       | 
       | Now we have a fairly concrete idea of what a potential AGI might
       | look like - an RL agent that uses a large transformer.
       | 
        | The issue is that, unlike supervised training, you need to
        | simulate the environment along with the agent, so this requires
        | an order of magnitude more compute than training LLMs. That's why
        | I think it will still be large corporate labs that make the most
        | progress in this field.
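        | 
        | Roughly the kind of loop I mean, at toy scale. This is only a
        | sketch (it assumes gymnasium and PyTorch, and the tiny
        | transformer policy is illustrative, not any lab's actual
        | architecture):
        | 
        |     import gymnasium as gym
        |     import torch
        |     import torch.nn as nn
        | 
        |     class TransformerPolicy(nn.Module):
        |         def __init__(self, obs_dim, n_actions, d_model=64):
        |             super().__init__()
        |             self.embed = nn.Linear(obs_dim, d_model)
        |             layer = nn.TransformerEncoderLayer(
        |                 d_model=d_model, nhead=4, batch_first=True)
        |             self.encoder = nn.TransformerEncoder(layer, 2)
        |             self.head = nn.Linear(d_model, n_actions)
        | 
        |         def forward(self, obs_history):   # (batch, time, obs_dim)
        |             h = self.encoder(self.embed(obs_history))
        |             return self.head(h[:, -1])    # logits for the latest step
        | 
        |     env = gym.make("CartPole-v1")
        |     policy = TransformerPolicy(env.observation_space.shape[0],
        |                                env.action_space.n)
        | 
        |     obs, _ = env.reset(seed=0)
        |     history = [torch.as_tensor(obs, dtype=torch.float32)]
        |     done = False
        |     while not done:
        |         # the policy attends over the whole episode so far
        |         logits = policy(torch.stack(history)[None])
        |         dist = torch.distributions.Categorical(logits=logits)
        |         # every policy step needs an environment step too; at
        |         # scale, this simulation is most of the extra compute
        |         obs, reward, terminated, truncated, _ = env.step(
        |             dist.sample().item())
        |         history.append(torch.as_tensor(obs, dtype=torch.float32))
        |         done = terminated or truncated    # learning update omitted
        | 
        | The point is just the structure: the environment has to be stepped
        | for every action the agent takes, which is what makes this so much
        | more expensive than feeding static text to an LLM.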
        
         | alexashka wrote:
         | Are you optimistic for how this AGI will _get used_?
        
         | adamsmith143 wrote:
         | Scaling is all you need.
         | 
         | But Carmack has a very serious problem in his thinking because
         | he thinks fast take off scenarios are impossible or vanishingly
         | unlikely. He may well be actively helping to secure our demise
         | with this work.
        
         | Voloskaya wrote:
         | > we didn't have the theory or the models, there was no path
         | forward and so it was impossible to predict. Now we have a
         | fairly concrete idea of what a potential AGI might look like -
         | an RL agent that uses a large transformer.
         | 
          | Who is "we" exactly? As someone working in AI research, I know
          | no one who would agree with this statement, so I'm quite
          | puzzled by it.
        
           | version_five wrote:
           | > Who is we exactly?
           | 
           | When I read these kind of threads, I believe it's
           | "enthusiast" laypeople who follow the headlines but don't
           | actually have a deep understanding of the tech.
           | 
           | Of course there are the promoters who are raising money and
           | need to frame each advance in the most optimistic light. I
           | don't see anything wrong with that, it just means that there
           | will be a group of techie but not research literate folks who
           | almost necessarily become the promoters and talk about how
           | such and such headline means that a big advance is right
           | around the corner. That is what I believe we're seeing here.
        
           | Isinlor wrote:
            | Nando de Freitas (Research Director at @DeepMind, CIFAR,
            | previously Prof @UBC & @UniofOxford) made a lot of headlines:
           | https://twitter.com/NandoDF/status/1525397036325019649
           | 
           | Someone's opinion article. My opinion: It's all about scale
           | now! The Game is Over! It's about making these models bigger,
           | safer, compute efficient, faster at sampling, smarter memory,
           | more modalities, INNOVATIVE DATA, on/offline, ... 1/N
           | 
           | Solving these scaling challenges is what will deliver AGI.
           | Research focused on these problems, eg S4 for greater memory,
           | is needed. Philosophy about symbols isn't. Symbols are tools
           | in the world and big nets have no issue creating them and
           | manipulating them 2/n
        
             | Voloskaya wrote:
              | And I agree with Nando's view, but he is not saying we can
              | just take a transformer model, scale it to 10T parameters
              | and get AGI. He is only saying that trying to reach AGI
              | with a "smarter" algorithm is hopeless and that what
              | matters is scale, similar to Sutton's bitter lesson. But we
              | still need to work on getting systems that scale better,
              | that are more compute efficient, etc. And no one knows how
              | far we have to scale. So saying AGI will just be
              | "transformer + RL" seems ridiculous to me. Many more
              | breakthroughs are needed.
        
           | ml_basics wrote:
           | I work in AI and would roughly agree with it to first order.
           | 
           | For me the key breakthrough has been seeing how large
           | transformers trained with big datasets have shown incredible
           | performance in completely different data modalities (text,
           | image, and probably soon others too).
           | 
           | This was absolutely not expected by most researchers 5 years
           | ago.
        
         | jmyeet wrote:
         | I'm not.
         | 
         | My evidence? OpenWorm [1]. OpenWorm is an effort to model the
         | behaviour of a worm that has 302 mapped neurons. 302. Efforts
         | so far have fallen way short of the mark.
         | 
         | How many neurons does a human brain have? 86 billion (according
         | to Google).
         | 
          | I've seen other estimates that put the computational power of
          | the brain at roughly 10^15 operations per second. I suspect
          | that's on the low end. We can't even really get that
         | level of computation in one place for practical reasons (ie
         | interconnects).
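          | 
          | For reference, one common back-of-envelope behind that 10^15
          | figure (the per-neuron and firing-rate numbers here are rough
          | assumptions; published estimates vary by an order of magnitude
          | or more):
          | 
          |     neurons  = 86e9   # human brain, approximately
          |     synapses = 1e3    # per neuron; estimates range ~1e3-1e4
          |     rate_hz  = 10     # rough average firing rate
          |     print(neurons * synapses * rate_hz)  # ~8.6e14 synaptic events/s
          | 
          | which is why 10^15 ops/s reads more like a floor than a ceiling.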
         | 
         | Neural structure changes. The neurons themselves change
         | internally.
         | 
         | I still think AGI is very far off.
        
         | joe_the_user wrote:
         | Even Yann LeCun, who arguably knows a lot about RL agents,
         | isn't proposing just "an RL agent that uses a large
         | transformer" but something more multi-part [1]. Current
         | approaches are getting better but I don't think that's the same
         | as approaching AGI.
         | 
         | [1] https://venturebeat.com/business/yann-lecuns-vision-for-
         | crea...
        
         | KhoomeiK wrote:
         | > Now we have a fairly concrete idea of what a potential AGI
         | might look like - an RL agent that uses a large transformer.
         | 
         | People have thought Deep RL would lead to AGI since practically
         | the beginning of the deep learning revolution, and likely
         | significantly before. It's the most intuitive approach by a
         | longshot (even depicted in movies as agents receiving
         | positive/negative reinforcement from their environment), but
         | that doesn't mean it's the best. RL still faces _huge_
          | struggles with compute efficiency and it isn't immediately
         | clear that current RL algorithms will neatly scale with data &
         | parameter count.
        
           | gwern wrote:
           | > it isn't immediately clear that current RL algorithms will
           | neatly scale with data & parameter count
           | 
           | It may not be immediately clear, but it is nevertheless
           | unfortunately clear from RL papers which provide adequate
           | sample-size or compute ranges that RL appears to follow
           | scaling laws (just like everywhere else anyone bothers to
           | test). Yeah, they just get better the same way that regular
           | ol' self-supervised or supervised Transformers do. Sorry if
           | you were counting on 'RL doesn't work' for safety or
           | anything.
           | 
           | If you don't believe the basic existence proofs of things
           | like OA5 or AlphaStar, which work only because things like
           | larger batch sizes or more diverse agent populations
           | magically make notoriously-unreliable archs work, you can
           | look at Jones's beautiful AlphaZero scaling laws (plural)
           | work https://arxiv.org/abs/2104.03113 , or browse through
            | relevant papers
            | https://www.reddit.com/r/mlscaling/search?q=flair%3ARL&restr...
           | https://www.gwern.net/notes/Scaling#ziegler-et-al-2019-paper
           | Or GPT-f. Then you have stuff like Gato continuing to show
           | scaling even in the Decision Transformer framework. Or
           | consider instances of plugging pretrained models _into_ RL
           | agents, like SayCan-PaLM most recently.
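            | 
            | For anyone unfamiliar with what "follows scaling laws" means
            | mechanically: it's essentially a straight-line fit in log-log
            | space. A sketch with invented numbers (not taken from any of
            | the papers above):
            | 
            |     import numpy as np
            | 
            |     # made-up (compute, performance) pairs, for illustration
            |     compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
            |     loss    = np.array([3.1, 2.4, 1.9, 1.5, 1.2])
            | 
            |     slope, intercept = np.polyfit(np.log10(compute),
            |                                   np.log10(loss), 1)
            |     # loss ~= a * compute**slope; a smooth negative exponent,
            |     # rather than a wall, is the pattern those papers report
            |     print(f"loss ~ {10**intercept:.3g} * C^{slope:.2f}")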
        
             | pixelpoet wrote:
             | While we have the mighty gwern on the line: do you believe
             | we'll have AGI in <= 10 years?
        
             | trention wrote:
             | That scaling will eventually hit a wall. What was it about
             | nerds and S-curves?
        
           | stefan_ wrote:
            | They have? That's the approach they are using? Because that
            | doesn't mesh well with practical reality. Where they use Deep
            | RL, it's to improve on vision tasks like object
            | classification; none of them are making any driving decisions
            | - that seems to remain the domain of what I guess you could
            | call _discrete logic_.
        
             | robotresearcher wrote:
             | Here's a survey paper from this year on Deep RL for
             | autonomous driving.
             | 
             | https://ieeexplore.ieee.org/document/9351818
             | 
             | B. R. Kiran et al., "Deep Reinforcement Learning for
             | Autonomous Driving: A Survey," in IEEE Transactions on
             | Intelligent Transportation Systems, vol. 23, no. 6, pp.
             | 4909-4926, June 2022, doi: 10.1109/TITS.2021.3054625.
             | 
             | I haven't read the paper, so this is not a reading
             | recommendation. Just posting as evidence that there is work
             | in the area.
        
           | Isinlor wrote:
            | Have you heard about EfficientZero? It is the first algorithm
            | that achieved super-human performance on the Atari 100k
            | benchmark. EfficientZero's performance is also close to DQN's
            | performance at 200 million frames while consuming 500 times
            | less data.
           | 
           | DQN was published in 2013, EfficientZero in 2021. That's 8
           | years with 500 times improvement.
           | 
           | So data efficiency was doubling roughly every year for the
           | past 8 years.
           | 
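            | As a quick back-of-the-envelope check of that doubling claim
            | (using only the 500x and 8-year figures above):
            | 
            |   import math
            | 
            |   years = 2021 - 2013        # DQN (2013) to EfficientZero (2021)
            |   improvement = 500          # ~500x less data for comparable performance
            | 
            |   doublings = math.log2(improvement)    # ~8.97 doublings
            |   print(f"one doubling every {years / doublings:.2f} years")  # ~0.89
            | 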
           | Side note: EfficientZero I think still may not be super-human
           | on games like Montezuma's Revenge.
           | 
           | https://arxiv.org/abs/2111.00210
           | 
           | Reinforcement learning has achieved great success in many
           | applications. However, sample efficiency remains a key
           | challenge, with prominent methods requiring millions (or even
           | billions) of environment steps to train. Recently, there has
           | been significant progress in sample efficient image-based RL
           | algorithms; however, consistent human-level performance on
           | the Atari game benchmark remains an elusive goal. We propose
           | a sample efficient model-based visual RL algorithm built on
           | MuZero, which we name EfficientZero. Our method achieves
           | 194.3% mean human performance and 109.0% median performance
           | on the Atari 100k benchmark with only two hours of real-time
           | game experience and outperforms the state SAC in some tasks
           | on the DMControl 100k benchmark. This is the first time an
           | algorithm achieves super-human performance on Atari games
           | with such little data. EfficientZero's performance is also
           | close to DQN's performance at 200 million frames while we
           | consume 500 times less data. EfficientZero's low sample
           | complexity and high performance can bring RL closer to real-
           | world applicability. We implement our algorithm in an easy-
           | to-understand manner and it is available at this https URL.
           | We hope it will accelerate the research of MCTS-based RL
           | algorithms in the wider community.
        
         | kromem wrote:
         | There's a hardware component here too though.
         | 
         | I think hybrid photonic AI chips handling some of the workload
         | are supposed to hit in 2025 at the latest, and some of the
         | research on gains is very promising.
         | 
         | So we may see timelines continue to accelerate as broader
         | market shifts occur outside just software and models.
        
           | heavenlyblue wrote:
           | Which research?
        
         | fatherzine wrote:
         | Not sure if "optimistic" is the proper word here. Perhaps
         | "scared senseless in the end-of-mankind kind of way" is more
         | appropriate?
        
         | keerthiko wrote:
         | At least we have lots of very complex simulated or pseudo-
         | simulated environments already -- throw your AGI agent into a
         | sandbox mode game of GTA6, or like OpenAI and DeepMind already
         | did, with DOTA2 and StarCraft II (with non-G-AIs). They have a
         | vast almost-analog simulation space to figure out and interact
         | with (including identifying or coming up with a goal).
         | 
         | So while it is significant compute overhead, it at least
         | doesn't have to be development overhead, and can often be CPU
         | bound (headless games) while the AI learning compute can be GPU
         | bound.
        
           | paxys wrote:
           | I sure hope no one is planning to unleash their AGI in the
           | real world after having it spend many (virtual) lifetimes
           | playing GTA.
        
             | keerthiko wrote:
             | IMO, your take in the broader sense is an extremely
             | profound and important point for AGI ethics. While GTA is
             | seemingly extreme, I think that's going to be a problem no
             | matter what simulation space we fabricate for training AGI
             | agents -- any simulation environment will encourage various
             | behaviors by the biases encoded by the simulation's
             | selectively enforced rules (because someone has to decide
             | what rules the simulation implements...). An advanced
             | intelligence will take learnings and interpretations of
             | those rules beyond what humans would come up with.
             | 
              | If we can't make an AGI that we feel ok letting run amok in
             | the world after living through a lot of GTA (by somehow
             | being able to rapidly + intelligently reprioritize and
             | adjust rules from multiple simulation/real environments?
             | not sure), we probably shouldn't let that core AGI loose no
             | matter what simulation(s) it was "raised on".
        
         | paxys wrote:
         | We will have something that we ourselves define to be AGI,
         | sure, but then it's easy to hit any goal that way. Is that
         | machine really intelligent? What does that word even mean? Can
         | it think for itself? Is it sentient?
         | 
         | Similar to AI, AGI is going to be a new industry buzzword that
         | you can throw at anything and mean nothing.
        
           | SketchySeaBeast wrote:
           | You're making me think of the recent "Hoverboards".
        
             | lagrange77 wrote:
             | Right, certain companies will definitely have a big
             | bullshit party about the term "AGI".
        
           | lagrange77 wrote:
           | From the point when an AGI is capable of constructing a
           | slightly better version of itself and has the urge to do so,
           | everything can happen very fast.
        
             | treesprite82 wrote:
             | > capable of constructing a slightly better version of
             | itself
             | 
             | With just self-improvement I think you hit diminishing
             | returns, rather than an exponential explosion.
             | 
             | Say on the first pass it cleans up a bunch of low-hanging
             | inefficiencies and improves itself 30%. Then on the second
             | pass it has slightly more capacity to think with, but it
             | also already did everything that was possible with the
             | first 100% capacity - maybe it squeezes out another 5% or
             | so improvement of itself.
             | 
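              | As a toy sketch of that argument (the numbers are illustrative
              | assumptions: a fixed pool of "easy" gains, with each pass
              | capturing a fixed fraction of whatever remains):
              | 
              |   # 50% total capability sitting in easy wins; each pass grabs 60% of what's left.
              |   remaining = 0.50
              |   capability = 1.0
              |   for i in range(1, 6):
              |       gain = 0.60 * remaining
              |       capability *= (1 + gain)
              |       remaining -= gain
              |       print(f"pass {i}: +{gain:.1%} -> {capability:.2f}x")
              |   # Gains shrink every pass (+30%, +12%, +4.8%, ...) and total
              |   # capability converges to a bound (~1.57x), not an explosion.
              | 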
              | Similar is already the case with chip design. Algorithms to
              | design chips can then be run on those improved chips, but
              | this on its own doesn't give exponential growth.
             | 
             | To get around diminishing returns there has to be progress
             | on many fronts. That'd mean negotiating DRC mining
             | contracts, expediting construction of chip production
             | factories, making breakthroughs in nanophysics, etc.
             | 
             | We probably will increasingly rely on AI for optimizing
             | tasks like those and it'll contribute heavily to continued
             | technological progress, but I don't personally see any
             | specific turning point or runaway reaction stemming from
             | just a self-improving AGI.
        
             | jollybean wrote:
              | ? What do 'replication' and 'urge' have to do with
              | anything?
             | 
              | That's arbitrarily anthropomorphizing the concept of
              | intelligence.
             | 
             | And FYI we can already write software that can 'replicate'
             | and has the 'urge' to do so very trivially.
        
             | adastra22 wrote:
             | Can it? There aren't that many overhangs to exploit.
        
             | dudouble wrote:
             | Thank you for this comment. I'd never really considered
             | this and it is blowing my mind.
        
               | lagrange77 wrote:
               | I'm no expert, take it with a grain of salt :)
        
             | PaulDavisThe1st wrote:
             | > "has the urge"
             | 
             | it's quite a leap to think or even imagine that the class
             | of systems generally being spoken of here are usefully
             | described as "having urges"
        
             | mrshadowgoose wrote:
             | People don't really consider the immense risk of "speed
             | superintelligences" as a very quick and relatively easy
             | follow-on step to the development of AGI.
             | 
             | Once developed, one solely needs to turn up the execution
             | rate of an AGI, which would result in superhuman
             | performance on most practical and economically meaningful
             | metrics.
             | 
             | Imagine if for every real day that passed, one experienced
             | 100 days of subjective time. Would that person be able to
             | eclipse most of their peers in terms of intellectual
             | output? Of course they would. In essence, that's what a
             | speed superintelligence would be.
             | 
             | When most people think of AI outperforming humans, they
             | tend to think of "quality superintelligences", AIs that can
             | just "think better" than any human. That's likely to be a
             | harder problem. But we don't even need quality
             | superintelligences to utterly disrupt society as we know
             | it.
             | 
             | We really need to stop arguing about time scales for the
             | arrival of AGI, and start societal planning for its arrival
             | whenever that happens. We likely already have the
             | computational capacity for AGI, and have just not figured
             | out the correct way to coordinate it. The human brain uses
             | about 20 watts to do its thing, and humanity has gigawatts
             | of computational capacity. Sure, the human brain should be
             | considered to be "special purpose hardware" that
             | dramatically reduces energy requirements for cognition. By
             | a factor of more than 10^9 though? That seems unlikely.
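              | 
              | A rough sketch of that energy comparison (the 1 GW figure is
              | just an illustrative assumption for "gigawatts of compute"):
              | 
              |   brain_power_w = 20        # rough power draw of a human brain
              |   global_compute_w = 1e9    # assume ~1 GW of compute worldwide (illustrative)
              | 
              |   brain_equivalents = global_compute_w / brain_power_w
              |   print(f"{brain_equivalents:,.0f} brain-power-equivalents")   # 50,000,000
              | 
              |   # Even with a 1000x energy penalty vs. the brain's specialized
              |   # wetware, 1 GW still covers ~50,000 brain-equivalents.
              |   print(f"{brain_equivalents / 1000:,.0f} at a 1000x penalty")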
        
           | colinmhayes wrote:
           | There's certainly the philosophy side of AGI, but there's
           | also the practical side. Does the Chinese room understand
           | Chinese? If your goal is just to create a room that passes
           | Chinese Turing tests that doesn't matter.
        
             | MichaelCollins wrote:
             | The philosophy side of the matter seems meaningless, it
             | interrogates the meaning of language, not the capabilities
              | of technology. When people ask _"Could machines think?"_
             | the question isn't really about machines, it's about
             | precisely what we mean by the word 'think'.
             | 
             | Can a submarine swim? Who cares! What's important is that a
             | submarine can do what a submarine does. Whether or not the
             | action of a submarine fits the meaning of the word 'swim'
             | should be irrelevant to anybody except poets.
        
           | jvanderbot wrote:
           | This keeps coming up, and there's no answer, because
           | unfortunately it appears we are not really sentient,
           | thinking, intelligent minds either. We'll find AGI and
           | complain that it's not good enough until we lower the bar
           | sufficiently as we discover more about our own minds.
        
           | ryanSrich wrote:
           | > AGI
           | 
           | Idk what prompted you to say this, but is there a version of
           | AGI that isn't "real" AGI? I don't know how anyone could fake
           | it. I think marketing departments might say whatever they
           | want, but I don't see any true engineers falling for
           | something masquerading as AGI.
           | 
           | If someone builds a machine that can unequivocally learn on
            | its own, replicate itself, and eventually solve ever more
           | complex problems that humans couldn't even hope to solve,
           | then we have AGI. Anything less than that is just a computer
           | program.
        
             | jollybean wrote:
             | This is upside down.
             | 
             | First - we already have software that can unequivocally do
             | the things you just highlighted.
             | 
             | Learn? Check.
             | 
              | Replicate? Trivial. But what does that have to do with AGI?
             | 
             | Solve Problems Humans Cannot. Check.
             | 
             | So we already have 'AGI' and it's a simple computer
             | program.
             | 
             | Thinking about 'AGI' as a discrete, autonomous system makes
             | no sense.
             | 
             | We will achieve highly intelligent systems with distributed
             | systems decades before we have some 'individual neural net
             | on a chip' that feels human like.
             | 
             | And when we do make it, where do we draw the line on it? Is
             | a 'process' running a specific bit of software an 'AI'?
             | 
             | What if the AI depends on a myriad of micro-services in
             | order to function. And those micro-services are shared?
             | 
             | Where is the 'Unit AI'?
             | 
              | The notion of an autonomous AI, like a unit of software on
              | some specific hardware distinct from other components,
              | actually makes little sense.
             | 
             | Emergent AI systems will start to develop out of our
             | current systems long before 'autonomic' AI. In fact,
             | there's no reason at all to even develop 'autonomic AI'. We
             | do it because we want to model it after our own existence.
        
               | ryanSrich wrote:
               | > Learn? Check.
               | 
               | What software can learn on its own without any assistance
                | from a human? I've not heard of anything like this.
               | 
                | > Replicate? Trivial. But what does that have to do with
               | AGI?
               | 
               | Like humans, an AGI should be able to replicate. Similar
                | to a von Neumann probe.
               | 
               | > Solve Problems Humans Cannot. Check.
               | 
                | What unthinkable problem has an AI solved? Is anything
                | capable of solving something so grandiose that we almost
                | can't even define the problem yet?
        
               | est31 wrote:
                | > Replicate? Trivial. But what does that have to do with
               | AGI?
               | 
               | If you see it as copying an existing model to another
               | computer, yes it is trivial. But an AGI trying to
               | replicate itself in the real world has to also make those
               | computers.
               | 
               | Making modern computer chips is one of the most non-
               | trivial things that humans do. They require fabs that
               | cost billions, with all sorts of chemicals inside, and
                | extreme requirements on the inside environment. Very hard
                | to build, very easy to disable via an attack.
        
             | MichaelCollins wrote:
             | The way to fake it would be to conceal the details of the
             | AGI as proprietary trade secrets, when the real secret is
             | the human hidden behind the curtain.
        
               | ryanSrich wrote:
               | Real AGI would solve this. It wouldn't allow itself to be
               | concealed. Or rather, it would be its own decision. A
               | company couldn't control real AGI.
        
               | danielheath wrote:
               | What's it going to do, break out of its own simulation?
        
               | adamsmith143 wrote:
                | > Nope. An artificial general intelligence that was
               | working like a 2x slower human would be both useful and
               | easy to control.
               | 
               | That's exactly what it will do. Hell we even have human
               | programmers thinking about how to hack our own
               | simulation.
               | 
               | A comment a few lines down thinks that an AGI thinking 2x
               | slower than a human would be easy to control. Let's be
                | honest, hell, slow the thing down 10x. You really think
               | it still won't be able to outthink you? Chess
               | Grandmasters routinely play blindfolded against dozens of
               | people at once and you think an AGI that could be to
               | Humans as Humans are to Chimps or realistically to Ants
               | will be hindered by a simple slowdown in thinking?
        
               | ryanSrich wrote:
               | Real AGI would adapt and fool a human into letting it
               | out. Or escaping through some other means. That's the
               | entire issue with AGI. Once it can learn on its own
               | there's no way to control it. Building in fail safes
               | wouldn't work on true AGI, as the AGI can learn 1000x
               | faster than us, and would free itself. This is why real
               | AGI is likely very far away, and anything calling itself
               | AGI without the ability to learn and adapt at an
               | exponential rate is just a computer program.
        
               | pawelmurias wrote:
                | Nope. An artificial general intelligence that was working
               | like a 2x slower human would be both useful and easy to
               | control.
        
               | Jensson wrote:
                | How would you ensure nobody copies it to a USB stick and
                | then puts it on a public torrent, letting it spread across
                | the entire world? AGI facilities would need extremely
               | tight security to avoid this.
               | 
               | The AGI doesn't even need to convince humans to do this,
               | humans would do this anyway.
        
           | konschubert wrote:
           | Sentience is ill-defined and therefore doesn't exist.
        
         | lagrange77 wrote:
         | > Now we have a fairly concrete idea of what a potential AGI
         | might look like - an RL agent that uses a large transformer.
         | 
         | Any resources on that?
         | 
         | I have a feeling that RL might play a big role in the first
         | AGI, too, but why transformers in particular?
        
           | moultano wrote:
           | Transformers have gradually taken over in every other ML
           | domain.
        
             | lagrange77 wrote:
             | Okay, but do those ML domains help with AGI?
        
           | gmadsen wrote:
            | They don't seem to have a theoretical upper limit. More data
            | and more parameters seem to just keep making them more
            | advanced, even in ways that weren't predicted or understood.
            | The difference between a language model that can explain a
            | novel joke and one that can't is purely scale. So the thought
            | is that with enough scale, you eventually hit AGI.
        
           | Isinlor wrote:
           | See: https://arxiv.org/abs/2205.06175
           | 
           | A Generalist Agent
           | 
           | Inspired by progress in large-scale language modeling, we
           | apply a similar approach towards building a single generalist
           | agent beyond the realm of text outputs. The agent, which we
           | refer to as Gato, works as a multi-modal, multi-task, multi-
           | embodiment generalist policy. The same network with the same
           | weights can play Atari, caption images, chat, stack blocks
           | with a real robot arm and much more, deciding based on its
           | context whether to output text, joint torques, button
           | presses, or other tokens. In this report we describe the
           | model and the data, and document the current capabilities of
           | Gato.
           | 
            | Gato is a 1-to-2-billion-parameter model, kept small due to
            | latency considerations for real-time use on physical robots.
            | By today's standards of 500-billion-parameter dense models,
            | Gato is tiny. Additionally, Gato is trained on data produced
            | by other RL agents; it did not do the exploration fully
            | itself.
            | 
            | Demis Hassabis says that DeepMind is currently working on
            | Gato v2.
        
           | Jack000 wrote:
           | Everything Deepmind published at this year's ICML would be a
           | good start.
           | 
            | Transformers (or rather the QKV attention mechanism) have
            | taken over ML research at this point; they just scale and
            | work in places they really shouldn't. E.g. you'd think
            | convnets would make more sense for vision because of their
            | translation invariance, but ViT works better even without
            | this inductive bias.
           | 
           | Even in things like diffusion models the attention layers are
           | crucial to making the model work.
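            | 
            | For anyone who hasn't seen it spelled out, a minimal sketch of
            | the scaled dot-product QKV attention being referred to (plain
            | numpy, single head, no masking):
            | 
            |   import numpy as np
            | 
            |   def attention(Q, K, V):
            |       """softmax(Q K^T / sqrt(d_k)) V"""
            |       d_k = Q.shape[-1]
            |       scores = Q @ K.T / np.sqrt(d_k)            # query-key similarities
            |       w = np.exp(scores - scores.max(axis=-1, keepdims=True))
            |       w /= w.sum(axis=-1, keepdims=True)         # softmax over keys
            |       return w @ V                               # weighted sum of values
            | 
            |   rng = np.random.default_rng(0)
            |   x = rng.normal(size=(4, 8))                    # 4 tokens, 8-dim embeddings
            |   Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
            |   print(attention(x @ Wq, x @ Wk, x @ Wv).shape) # (4, 8)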
        
         | 8f2ab37a-ed6c wrote:
         | I was surprised by how bullish he is about this. At least a few
         | years ago the experts in the field didn't see AGI anywhere near
         | us for at least a few decades, and all of the bulls were
         | physicists, philosophers or Deepak-Chopra-for-the-TED-crowd
         | bullshit artists who have never written a line of code in their
         | lives, mostly milking that conference and podcast dollar,
         | preaching Skynet-flavored apocalypse or rapture.
         | 
         | To see Carmack go all in on this actually makes me feel like
         | the promise has serious legs. The guy is an engineer's
         | engineer, hardly a speculator, or in it for the quick
         | provocative hot take. He clearly thinks this is possible with
         | the existing tools and the near future projected iterations of
         | the technology. Hard to believe this is actually happening, but
         | with his brand name on it, this might just be the case.
         | 
         | What an amazing time to be alive.
        
           | intelVISA wrote:
           | if Carmack's in I'm in. Has he ever been drastically wrong?
        
             | grimgrin wrote:
             | No one can argue he doesn't know how to summon demons
        
             | tmpz22 wrote:
             | When he went to go work on VR at Facebook?
        
               | grimgrin wrote:
               | Wrong in the moral sense?
               | 
               | He's still there though, right?
        
               | GaylordTuring wrote:
               | What was wrong about that?
        
             | enneff wrote:
             | The jury is still out on VR and Meta but it hardly seems
             | promising.
        
             | ytdytvhxgydvhh wrote:
             | Sure (According to
             | https://en.m.wikipedia.org/wiki/John_Carmack ):
             | 
             | > During his time at id Software, a medium pepperoni pizza
             | would arrive for Carmack from Domino's Pizza almost every
             | day, carried by the same delivery person for more than 15
             | years.
             | 
             | C'mon man, Domino's?!
        
       | ianceicys wrote:
       | Artificial general intelligence (AGI) -- in other words, systems
       | that could successfully perform any intellectual task that a
       | human can.
       | 
       | Not in my lifetime, not in this millennium. Possibly in the year
       | 2,300.
       | 
       | Weird way to blow $20 million.
        
         | willio58 wrote:
         | It's not blowing 20 million if it results in meaningful
         | progress in this area. We have something like 2700 billionaires
         | on this planet. This isn't even a drop in the bucket for
         | someone like that interested in furthering this research.
         | 
         | AGI could quite literally shift any job to automation. This is
         | human-experience changing stuff.
        
           | blibble wrote:
           | > This is human-experience changing stuff.
           | 
           | that's one way of putting it
           | 
           | it will remove the need for the vast majority of the
           | population, which will end extremely badly
        
             | gizajob wrote:
             | But by the same token, there's no _need_ for billions of
                | humans now. AGI isn't really going to change that except
             | for making work even more superfluous than it already is.
        
               | Jensson wrote:
                | Currently the life of leaders gets better the more people
                | they can control, since it creates a larger tax base.
                | That means leaders try to encourage population growth:
                | they want more immigration, encourage people to multiply,
                | and see population reduction as harmful.
                | 
                | With AGI that is no longer true; they can just replace
                | most people with computers and automated combat drones
                | while they keep a small number of personal servants to
                | look after them. Currently most jobs either exist to
                | support other humans or can be replaced by a computer;
                | remove the need for humans and all of those jobs just
                | disappear, and leaders no longer care about having lots
                | of people around.
        
             | omg_stoppit wrote:
              | And as societies progress, they must either recognize why
              | basic necessities like Universal Basic Income are needed,
              | or just allow large swathes of their population to die
              | off.
        
             | ge96 wrote:
              | I wonder about this: if you had great/true automation and
              | free energy from the sun, is there any need to do anything?
              | As in, would money still have value?
        
               | SketchySeaBeast wrote:
               | But who would own the automatons and power generators,
               | and what would be their impetus to share their power?
               | Unless the means of (energy) production moved out of the
               | hands of the few it seems like it wouldn't make the rest
               | of our lives any more idyllic.
        
               | ge96 wrote:
                | Yeah it's true. When I donate/help I always feel this
                | "mine". I believe in merit, you know, effort in, effort
                | out. It's nice to help people but there are also too
                | many... and bad actors. So idk if it'll ever happen, or
                | it will just be for a select few anyway.
                | 
                | I almost regret being at this phase of life where we are
                | aware of what's possible but will most likely not see it
                | in our lifetime. This AGI talk, colonization of space,
                | etc... but we can strive towards it / have fun trying in
                | the meantime.
        
               | Arcuru wrote:
               | If you want to look into it more, that situation is
               | usually called a post-scarcity economy[1]. It's talked
               | about and depicted in a few fictionalized places,
               | including Star Trek.
               | 
               | [1] - https://en.wikipedia.org/wiki/Post-scarcity_economy
        
               | blibble wrote:
               | in Star Trek: the Federation has unlimited energy and the
               | ability to replicate most forms of matter
               | 
               | but human(oid) intelligence is still scarce, and they
               | don't have AGI (other than Data)
               | 
               | there is however a society that has no need for humanoid
               | intelligence, and that's the Dominion
               | 
               | and I suspect that is what our society would turn into if
               | AGI is invented (and not the Federation)
        
         | systemvoltage wrote:
          | Western civilization would be dead if it weren't for eccentric
         | people like this. Let them blow $20M, there are worse ways.
        
         | paxys wrote:
         | $20 million is pretty much nothing when split among a handful
         | of billionaires and the biggest VC firm in the world.
         | Regardless of the project itself it is worth it to spend that
         | money just to have Carmack's name attached to it and buy some
         | future goodwill.
        
           | Ekaros wrote:
            | Never underestimate the greater fool theory. Especially in
            | the current tech landscape. It just needs him to produce some
            | results and you could end up selling the company to FAANG or
            | some big fund for a profit.
        
         | ggm wrote:
          | He'll make it back on small increments with high value. If he
          | can shave 30% of the LOC off a vision system for a small BoM in
          | some context like self-driving cars, a 10x return on the stake
          | is coming his way.
         | 
         | Basically, they could completely fail to advance AGI (and I
         | think this is what will happen btw, like you) and make
         | gigabucks.
        
         | hervature wrote:
         | 2,300 is in this millennium?
        
         | [deleted]
        
         | Ekaros wrote:
          | 20 million doesn't actually sound in any way like a stupid
          | investment with a name like Carmack involved. Just have the
          | company produce something and then flip it to the next idiot...
        
         | jacquesm wrote:
         | The year 2300 is definitely in this millennium.
        
           | dmoy wrote:
           | Only if you don't count the second dark age of 1200 years
           | that fit between 2093 and 2094
        
         | TrainedMonkey wrote:
         | Do you have a rationale for that? I get a feeling progress in
         | both machine learning and understanding biological intelligence
         | is fairly rapid and has been accelerating. I believe two
         | primary contributing factors are cheaper compute and vast
         | amount of investment poured into machine learning, see
         | https://venturebeat.com/ai/report-ai-investments-see-largest...
         | 
          | Now, the question of whether we are going to have AGI is
          | incredibly broad, so I am going to split it into two smaller
          | ones:
          | 
          | - Are we going to have enough compute by year X to implement
          | AGI? Note that we are not talking about super intelligence or
          | singularity here. This AGI might be below human intelligence
          | and incredibly uneconomical to run.
          | 
          | - Assuming we have enough compute, will we find a way to get
          | AGI working?
         | 
          | Compute advancements scale linearly with new chip fabs and
          | exponentially with tech node improvements. I think it is
          | reasonable for compute to get cheaper and more accessible
          | through at least 2030. I expect this because TSMC is starting
          | 3nm node production, Intel is decoupling fabbing and chip
          | design (aka the TSMC model), and there are strategic
          | investments into chip manufacturing driven by supply chain
          | disruptions. See
         | https://www.tomshardware.com/news/tsmc-initiates-3nm-chips-p...
         | 
          | How much compute do we need? This is hard to estimate, but the
          | number of connections in the human brain is estimated at 100
          | trillion, that is 1e14. The current largest model has 530B
          | parameters, that is 5.3e11:
          | https://developer.nvidia.com/blog/using-deepspeed-and-megatr...
          | . That is a factor of roughly 190, or about 8 doublings, off.
          | To get there by 2040 we would need a doubling roughly every 2
          | to 2.5 years. This is slower than recent progress, but past
          | performance does not predict future results. Still, I believe
          | getting models with 1e14 parameters by 2040 is possible for
          | tech giants. I believe it is likely that a model with 1e14
          | parameters is sufficient for AGI if we know how to structure
          | and train it.
         | 
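          | That estimate, as a quick sketch (treating one synapse as roughly
          | one parameter, which is of course a big assumption):
          | 
          |   import math
          | 
          |   synapses = 1e14          # rough count of connections in a human brain
          |   largest_model = 5.3e11   # Megatron-Turing NLG, 530B parameters
          | 
          |   gap = synapses / largest_model            # ~190x
          |   doublings = math.log2(gap)                # ~7.6 doublings
          |   years = 2040 - 2022
          |   print(f"{gap:.0f}x gap, {doublings:.1f} doublings, "
          |         f"one every {years / doublings:.1f} years to reach 1e14 by 2040")
          | 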
          | Will we know how to structure and train it? I think this is
          | mostly driven by investment in the AI field. More money means
          | more people, and given the VentureBeat link above, the investment
         | seems to be accelerating. A lot of that investment will be
         | unprofitable, but we are not looking to make a profit - we are
         | looking for breakthroughs and larger model sizes. Self-driving,
         | stock trading, and voice controls are machine learning
         | applications which are currently deployed in the real world. At
         | the very least it is reasonable to expect continuous investment
         | to improve those applications.
         | 
          | Based on the above I believe we would need to mess things up
          | royally to not get AGI by 2100. Remember, this could be a
          | below-human and super-uneconomical AGI. I am rather optimistic,
          | so my personal prediction is that we have a 50% chance of
          | getting AGI by 2040 and a 5-10% chance of getting there by
          | 2030.
        
         | makeee wrote:
         | Doesn't imply _any_ task, just a wide variety of tasks. 10
         | years at most.
        
           | arkitaip wrote:
           | By definition it has to be any task otherwise it wouldn't be
           | general. What tasks wouldn't an AGI be able to perform and
           | still be an AGI?
        
             | dymk wrote:
              | Reliably trick a human into thinking it's a human. That's
             | it.
        
               | mod wrote:
               | I believe that's the Turing Test, not necessarily a
               | definition (or requirement) for AGI.
        
             | nomel wrote:
              | It sounds like you may be demanding more from AGI than we
             | do of humans. AGI is a mushy concept, not a hard
             | requirement. "Any task" is definitely not required for a
             | low functioning AGI, just as it's not a requirement for a
             | low functioning human, who still easily fits the definition
             | of an intelligent being.
        
             | yeellow wrote:
              | For each human being having general intelligence there are
              | many tasks that person won't be able to perform - for
              | example proving math theorems, doing research in physics,
              | writing a poem, etc. A specific AGI could have its
              | limitations as well.
        
           | jtwaleson wrote:
           | My takeaway from the Lex Fridman interview is of someone
           | that's machine-like in his approach. AGI suddenly seemed
           | simpler and within reach. Skipping consciousness and qualia.
           | It's inhumane, but machine-like and effective. Curious what
           | will become of it.
        
           | bsenftner wrote:
           | I believe AGI is the threshold where generalized artificial
           | comprehension is achieved and the model can _understand_ any
            | task. Once the understanding part is composable, the building
            | portion follows from the understanding. I'm using
            | _understanding_ rather than _model_ because the models we
            | make today are not these kinds of _comprehensions_;
            | _understandings_ are more intelligent.
        
           | ianceicys wrote:
           | Then it's NOT generalized. ANY means ANY.
        
             | dymk wrote:
             | Can you do any task asked of you, which could be asked of a
             | human being? ANY task.
        
               | ianceicys wrote:
                | I may not be able to do ANY task sufficiently well (e.g.
                | Calculus, Poetry, Emotion), but by the very definition of
                | being a Human I can do *any* Human task.
        
               | dymk wrote:
               | With specific training, sure. Why are we holding an AI to
               | a higher standard?
        
               | kmnc wrote:
               | If the task is possible... then why not?
        
               | dymk wrote:
               | What if you don't know how to complete the task?
        
               | [deleted]
        
         | freediver wrote:
          | You are probably right, but if anyone can make a dent, Carmack
          | is the person.
        
           | vlunkr wrote:
           | Do game dev skills transfer to AGI? I know he's a smart guy,
           | but I don't think that's a given.
        
             | zaptrem wrote:
             | He's not just a game dev, he is one of the most legendary
             | graphics programmers (and just programmers) alive. Similar
             | to how GPUs transferred well from gaming to ML, it seems
             | like much of the math and parallel/efficiency-focused
                | thinking of graphics programming is useful in ML.
        
               | [deleted]
        
             | 5d8767c68926 wrote:
             | If he succeeds, his skillet becomes the Platonic ideal of
             | an AGI developer.
        
               | mda wrote:
               | skillet? well I for one welcome our new kitchen utensil
               | overlords.
        
             | gizajob wrote:
             | Worked for Demis Hassabis
        
         | gfodor wrote:
         | You have some catching up to do. Consensus is dropping to this
         | lifetime for sure, if not this decade.
        
           | efficax wrote:
           | what consensus? i think most researchers remain skeptical
        
             | semi-extrinsic wrote:
             | Yeah, I don't think there is even any agreement about what
             | criteria a "minimal AGI" would need to meet. If we can't
             | even define what the thing is, saying we'll have it within
             | ten years is pure hubris.
        
             | Isinlor wrote:
             | The survey [0], fielded in late 2019 (before GPT-3,
             | Chinchilla, Flamingo, PaLM, Codex, Dall-E, Minerva etc.),
             | elicited forecasts for near-term AI development milestones
             | and high- or human-level machine intelligence, defined as
             | when machines are able to accomplish every or almost every
              | task humans are able to do currently. They sampled 296
              | researchers who presented at two important AI/ML
              | conferences, ICML and NeurIPS. Results from their 2019
             | survey show that, in aggregate, AI/ML researchers surveyed
             | placed a 50% likelihood of human-level machine intelligence
             | being achieved by 2060. The results show researchers newly
             | contacted in 2019 expressed similar beliefs about the
             | progress of advanced AI as respondents in the Grace et al.
             | (2018) survey.
             | 
             | [0] https://arxiv.org/abs/2206.04132
        
             | bpodgursky wrote:
             | Uh... no. Most researchers have moved their timelines to
             | somewhere between 2030 and 2040.
             | 
             | You can argue they're wrong, but there is absolutely a
             | general consensus that AGI is going to be this generation.
        
               | xen2xen1 wrote:
               | And consensus is never wrong!
        
               | SketchySeaBeast wrote:
               | Especially assertions of consensus provided without
               | evidence of said consensus.
        
               | gizajob wrote:
               | AGI has been 20-30 years away for some 70 years now...
        
               | Isinlor wrote:
                | Kurzweil in 2002 made a $20,000 bet that a difficult,
                | well-defined 2-hour version of the Turing test will be
                | passed by 2029.
               | 
               | https://longbets.org/1/
               | 
               | Given development in language models in the last 2 years
               | he may have a decent chance at winning that bet.
               | 
               | People give him 65% chance [0] and by now there are only
               | 7 years left.
               | 
               | [0] https://www.metaculus.com/questions/3648/computer-
               | passes-tur...
        
               | _delirium wrote:
               | Who do you have in mind? In my corner of AI it's pretty
               | uncommon for researchers to even predict "timelines".
               | Predictions have a bad track record in the field and most
               | researchers know it, so don't like to go on record making
               | them. The only prominent AI researcher I know who has
               | made a bunch of predictions with dates is Rodney Brooks
               | [1], and he puts even dog-level general intelligence as
               | "not earlier than 2048". I imagine folks like LeCun or
               | Hinton are more optimistic, but as far as I'm aware they
               | haven't wanted to make specific predictions with dates
               | like that (and LeCun doesn't like the term "AGI", because
               | he doesn't think "general intelligence" exists even for
               | humans).
               | 
               | [1] https://rodneybrooks.com/my-dated-predictions/
        
               | TaupeRanger wrote:
               | Sure...just like there was during the last episode of AI
               | hype a generation ago.
        
         | diputsmonro wrote:
         | A society grows great when old men plant trees whose shade they
         | never expect to sit in. Not everything requires an immediate
         | profit incentive to be a good idea.
        
           | tengbretson wrote:
           | A society does not grow great when an old man collects $20
           | million dollars for the fruit of a tree that he has no
           | capability of planting in the first place.
        
             | icelancer wrote:
             | So sure are you that Carmack can't make inroads here, I
             | wonder where you get the confidence from?
        
         | ZephyrBlu wrote:
         | If I'm remembering right, Carmack believes AGI will be a thing
         | by 2030. He said this in his recent interview with Lex Fridman.
        
           | nomel wrote:
           | From what I remember, his definition of AGI didn't include an
           | average IQ, which it shouldn't.
        
           | fancy_pantser wrote:
           | It's a long interview, here's just the bit focused on AGI:
           | https://www.youtube.com/watch?v=xLi83prR5fg
        
           | mupuff1234 wrote:
           | But I think something of the level of a 6 year old, not so
           | much a super being.
        
       | O__________O wrote:
       | Recent Carmack YouTube interview with him saying the code for AGI
       | will be simple:
       | 
       | https://m.youtube.com/watch?v=xLi83prR5fg
        
         | nomel wrote:
         | > saying the code for AGI will be simple
         | 
          | To be fair, it will most likely be some Python imports for
          | most of it, with complex abstractions tied together in
          | relatively simple ways. Just look at most ML notebooks, where
         | "simple" code can easily mean "massive complexity, burning MW
         | of power, distributed across thousands of computers".
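          | 
          | A rough illustration of that point (a generic PyTorch sketch, not
          | anything Carmack has described): a handful of "simple" lines
          | already define a model with hundreds of millions of parameters,
          | with all the real complexity hidden in the library, the data, and
          | the hardware the job runs on.
          | 
          |   import torch.nn as nn
          | 
          |   # A few lines of "simple" code hiding a lot of machinery and compute.
          |   model = nn.Transformer(d_model=1024, nhead=16,
          |                          num_encoder_layers=24, num_decoder_layers=24,
          |                          dim_feedforward=4096)
          |   n_params = sum(p.numel() for p in model.parameters())
          |   print(f"{n_params / 1e6:.0f}M parameters")   # several hundred million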
        
           | O__________O wrote:
            | No, that's not what he means. He means the code will be
            | simple enough that a single person would be able to write
            | it, if they knew what to write, and it will bootstrap itself
            | into existence from that simple code plus vast amounts of
            | external resources available via humans, data, etc.
        
             | gwern wrote:
             | One interesting paper on estimating the complexity of code:
             | http://www.offconvex.org/2021/04/07/ripvanwinkle/
        
         | cgrealy wrote:
         | I tend to think Carmack is right in that the "seed" code that
         | generates an AGI will be relatively small, but I think the
         | "operating" code will be enormous.
        
         | raverbashing wrote:
          | I'm sure he's the one that could write it in only a few blocks
          | of x86 assembly and off you go.
        
           | O__________O wrote:
            | My understanding is that his point was that, if you knew
            | what to write, it is doable as a single person, and compared
            | to anything else at this point in time it would have an
            | impact on humanity like no other.
        
       | [deleted]
        
       | qbasic_forever wrote:
       | So is Meta starting to quietly wind down their focus on VR?
        | Carmack mentions he'll stay on as a consultant, spending 20% of
        | his time there on it.
        
         | kken wrote:
         | He stepped down from a full time role years ago. I believe the
         | 20% is no change.
        
       | 0xdeadbeefbabe wrote:
       | Commander Keen Technologies?
        
       | Trasmatta wrote:
       | Interesting to see how he's progressed with this. When he first
       | announced he was getting into AI it sounded almost like a semi
       | retirement thing: something that interested him that he could do
       | for fun and solo, without the expectation that it would go
       | anywhere. But now he seems truly serious about it. Wonder if he's
       | started hiring yet.
        
         | madrox wrote:
         | I got the same impression, and maybe it still is. You can still
         | raise money for a retirement project if the goal of the money
         | is to hire a staff. VC money isn't solely for young
         | 20-something founders who want to live their job.
        
           | solveit wrote:
           | I suppose if anyone could raise VC money for a retirement
           | project it would be Carmack...
        
           | rebelos wrote:
           | Carmack sounds like someone who lives his job, so I don't
           | think age/life stage is a factor here.
        
             | russtrotter wrote:
             | agreed, Carmack's work ethic, opinions on work and opinions
             | of how those around him work are legendary!
        
         | mhh__ wrote:
         | Does he have the expertise to pull it off as an individual?
        
         | tux1968 wrote:
          | He mentioned, in his Lex Fridman interview, that accepting
         | investor money was a way to keep himself serious and motivated.
         | He feels an obligation to those putting their money in.
        
           | mywittyname wrote:
           | Ah, I was thinking that $20MM doesn't seem like a lot of
            | money for someone like Carmack. Surely he could have funded
            | the business himself. This explains why he didn't.
        
         | [deleted]
        
       | yazzku wrote:
       | "I could write a $20M check myself"
       | 
       | Every day, all day. Same boat here.
       | 
       | I went to the bank to ask for a mortgage. They asked for my
       | financials. "Oh, well, knowing that other people's money is on
       | the line engenders a greater sense of discipline and
       | determination."
        
       | sytelus wrote:
       | Recession? What recession? Amazing to see these pre-revenue VC
       | fundings in 10s and 100s of millions (Flow!).
        
       | cgrealy wrote:
       | I don't understand why you would _want_ AGI. Even ignoring
       | Terminator-esque worst case scenarios, AGI means humans are no
       | longer the smartest entities on the planet.
       | 
       | The idea that we can control something like that is laughable.
        
         | stonemetal12 wrote:
         | Nothing about AGI implies awareness. Something like GPT3 or
         | DALL-E that can be trained for a new task without being purpose
         | built for that task is AGI.
        
         | dekhn wrote:
         | what if humanity's role is to create an intelligence that
         | exceeds it and cannot be controlled? Can humans not desire to
         | be all watched over by machines of loving grace?
         | 
         | More seriously, while I don't think it's a moral imperative to
         | develop AGI, I consider it a desirable research goal in the
         | same way we do genetic engineering - to understand more about
         | ourselves, and possibly engineer a future with less human
         | suffering.
        
           | Ekaros wrote:
           | One could argue that humanity's role this far has been to
           | create intelligences that exceed it. Namely reproducing
           | offspring and educating them.
        
           | therouwboat wrote:
           | Didn't we have this same talk when Elon thought AI is
           | suddenly going to become smart and kill us all?
           | 
            | Yet my industrial robot at work just gives up if the stock
            | material is a few millimeters longer than it should be.
        
             | fatherzine wrote:
             | The toy plane a kid throws in the air in the backyard is
             | completely harmless. Yet nuke armed strategic bombers also
             | exist, and the fact that they vaguely resemble a toy plane
             | doesn't make them as harmless as a toy plane.
        
         | stefs wrote:
          | The climate crisis might kill us all off unless some deus ex
          | machina (i.e. AGI) comes up with some good solutions fast.
        
           | viraptor wrote:
            | We've already got solutions. We'd only need an AGI to
            | convince the people in power to do something about it.
        
         | danbmil99 wrote:
         | Why is it so important to you that humans be the smartest
         | beings on the planet?
        
           | Guest9081239812 wrote:
           | Well, we have a track record of killing most other
           | intelligent species, destroying their habitat, eating them,
           | using them for experiments, and abusing them for
           | entertainment. Falling out of the top position could come
           | with some similar downsides.
        
           | spaceman_2020 wrote:
           | Because we're the smartest beings on the planet.
           | 
           | And we don't exactly treat creatures dumber than us with all
           | that much kindness.
        
           | HL33tibCe7 wrote:
           | Because if we aren't, it leaves us liable to be exterminated
           | or enslaved to suit the goals of the superior beings.
           | 
           | (and I fundamentally believe that the existence of the human
           | race is a good thing, and that slavery is bad).
        
           | trention wrote:
           | Because the history of the species on this planet clearly
           | indicates that the smartest one will brutalize and exploit
           | all the rest. There are good economic (and just plainly
           | logical) reasons why adding "artificial" to the equation will
           | not change that.
        
         | [deleted]
        
         | JoshTko wrote:
          | It's akin to nuclear weapons. If you do not develop them, then
          | you'd be subject to the will of the ones that develop them
          | first. So invariably you have to invest in AGI lest an unsavory
          | group develop it first.
        
           | HL33tibCe7 wrote:
           | Kind of, but the key difference between AGI and nuclear
           | weapons is that we can control our nuclear weapons. The
           | current state of AI safety is nowhere near the point where
           | controlling an AGI is possible. More disturbingly, to me it
           | seems likely that it will be easier to create an AGI than to
           | discover how to control it safely.
        
             | gambiting wrote:
             | >> The current state of AI safety is nowhere near the point
             | where controlling an AGI is possible.
             | 
             | I just don't understand this logic though. Just.....switch
             | it off. Unlike humans, computers have an extremely easy way
             | to disable - just pull the plug. Even if your AGI is self-
              | replicating, somehow (and you also somehow don't realize
             | this _long_ before it gets to that point) just....pull the
             | plug.
             | 
             | Even Carmack says this isn't going to be an instant process
             | - he expects to create an AGI with an intelligence of a
             | small animal first, then something that has the
             | intelligence of a toddler, then a small child, then maybe
             | many many years down the line an actual human person, but
             | it's far far away at this point.
             | 
             | I don't understand how you can look at the current or even
             | predicted state of the technology that we have and say "we
             | are nowhere near the point where controlling an AGI is
             | possible". Like....just pull the plug.
        
               | oefnak wrote:
               | On the off chance that you're serious: Even if you can
               | pull the plug before it is too late, less moral people
               | like Meta Mark will not unplug theirs. And as soon as it
               | has access to the internet, it can copy itself. Good luck
               | pulling the plug of the internet.
        
               | gambiting wrote:
               | I'm 100% serious. I literally don't understand your
               | concern at all.
               | 
               | >>And as soon as it has access to the internet, it can
               | copy itself.
               | 
               | So can viruses, including ones that can "intelligently"
               | modify themselves to avoid detection, and yet this isn't
                | a major problem. How is this any different?
               | 
               | >>Good luck pulling the plug of the internet.
               | 
               | I could reach down and pull my ethernet cable out but it
               | would make posting this reply a bit difficult.
        
               | gwern wrote:
               | Worth noting that current models like Google LaMDA appear
               | to _already_ have access to the live Internet. The LaMDA
               | paper says it was trained to request arbitrary URLs from
               | the live Internet to get text snippets to use in its chat
               | contexts. Then you have everyone else, like Adept
               | https://www.adept.ai/post/introducing-adept (Forget
               | anything about how secure 'boxes' will be - will there be
               | boxes at all?)
        
               | HL33tibCe7 wrote:
               | > Like....just pull the plug.
               | 
               | Watch this video https://youtu.be/3TYT1QfdfsM
        
               | gambiting wrote:
               | It's midnight, so I'm not super keen on watching the
                | whole thing (I'll get back to it this weekend) - but the
               | first 7 minutes sounds like his argument is that if you
               | build a humanoid robot with a stop button, the robot will
               | fight you to prevent you pressing its own stop button if
               | given an AGI? As if the very first instance of AGI is
               | going to be humanoid robots that have physical means of
               | preventing you from pressing their own stop button?
               | 
               | Let me get this straight - this is an actual, real,
               | serious argument that they are making?
        
           | fatherzine wrote:
           | OTOH if you & your foes develop them both, then there is a
           | probability asymptotically approaching 1 that the weapons
           | will be used over the next X years. Perhaps the only winning
           | move is indeed not to play?
        
             | ericlewis wrote:
             | problem is you don't know if they aren't playing - so you
             | must still work on it.
        
         | pgcj_poster wrote:
         | > AGI means humans are no longer the smartest entities on the
         | planet.
         | 
         | Superintelligence and AGI are not the same thing. An AI as
         | smart as an average 5 year old human is still an Artificial
         | General Intelligence.
        
         | legohead wrote:
         | It will be cute when some technology attains intelligence,
         | realizes there's no point to life, and self terminates.
        
       | dunefox wrote:
       | I admire and respect John Carmack. For me he's one of the greats,
       | along with people like Peter Norvig, for example.
        
         | throwaway11101 wrote:
         | I think he's a huge douche that held back Oculus enormously.
         | 
          | Remember VrScript? No one else does. He fucking hates
          | developers' guts. He used that talk to take a dump on people
         | who were making stuff for the Quest with Unity. Despite nearly
         | all the Quest games being made in Unity, including the #1 hit
         | Beatsaber.
         | 
         | Remember that Facebook post where he just went and shat on
         | someone's game? For no good reason?
         | 
         | Can we have an opinion about first person shooters? His fucking
         | suck. Doom Eternal sucked.
         | 
         | Who gives a fuck about the FPS engine anymore? FPS engines are
         | completely commoditized. Who even cares about that audience? 15
         | year old boys need to be reading, not fucking wasting hours of
         | their lives on Modern Warfare.
         | 
         | What does he know, really?
         | 
         | He speaks a certain douchebaguese that plays well with the fat
         | Elon Musks out there. With the effective altruists and bitcoin
         | HODLrs. With people who leave their wives to fuck their
         | assistants (Future Fund), or who hire women to fuck them
         | (OpenAI), or you know, whatever the fuck Elon Musk is into. You
         | know, stuff that has the intellectual and emotional rigor of
         | what a rich 15 year old boy would be into. So no wonder he's
         | doing, something something AGI.
        
           | MichaelCollins wrote:
           | Carmack had nothing to do with Doom Eternal.
           | 
           | For that matter, his contributions to the 90s FPSs he's most
           | known for were more on the technical side, not creative. He
           | was known for writing surprisingly performant FPS engines.
        
       | trention wrote:
        | AGI will be more dangerous than nuclear weapons.
       | 
       | People are not allowed to start a nuclear weapon company. At all.
       | 
       | Why are people allowed to casually start an AGI company?
        
       ___________________________________________________________________
       (page generated 2022-08-19 23:00 UTC)