[HN Gopher] DeepMind: A Generalist Agent
       ___________________________________________________________________
        
       DeepMind: A Generalist Agent
        
       Author : extr
       Score  : 313 points
       Date   : 2022-05-12 15:33 UTC (7 hours ago)
        
 (HTM) web link (www.deepmind.com)
 (TXT) w3m dump (www.deepmind.com)
        
       | f38zf5vdt wrote:
        | If I'm following correctly, they trained a single model across
        | multiple training paradigms, and then that single model could
        | perform token prediction for multiple dissimilar, task-specific
        | token sequences. Seems like a straightforward result.
        
         | doubtfuluser wrote:
         | Well... straightforward in a way, yes. But the scale of
         | learning is huge especially with this diverse set of tasks. Not
         | totally unexpected, but certainly not clear that it would work
         | with current networks and sizes.
        
           | f38zf5vdt wrote:
           | Right, exactly. Something that seemed like it should work but
           | no one had ever tried it.
        
       | weinzierl wrote:
       | _" The same network with the same weights can play Atari, caption
       | images, chat, stack blocks with a real robot arm and much more,
       | deciding based on its context whether to output text, joint
       | torques, button presses, or other tokens."_
       | 
        | This is rather mind-blowing. Does it also mean that the
       | generalist network is smaller than the sum of all specialist
       | networks that are equivalent? Even if not, I find the idea that a
       | single network can be used for such diverse tasks at all highly
       | fascinating.
        
         | f38zf5vdt wrote:
         | Many networks just predict the next integer in a sequence of
         | integers. It sounds like this model identifies what category of
         | problem a sequence of integers falls into and then makes an
         | accurate prediction for that sequence, as you would expect
         | given what it was trained on.
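          | 
          | To make that concrete, here is a toy sketch of flattening
          | dissimilar data into one integer sequence (the tokenizers
          | below are made up for illustration, not the paper's actual
          | encoding scheme):
          | 
          |     # Hypothetical tokenizers; Gato's real encoding differs.
          |     def tokenize_text(s, vocab):
          |         # words -> integer ids, growing the vocab as we go
          |         return [vocab.setdefault(w, len(vocab))
          |                 for w in s.split()]
          | 
          |     def tokenize_continuous(xs, bins=1024, offset=50_000):
          |         # crude uniform binning of values in [-1, 1] into a
          |         # reserved range of ids
          |         return [offset + int((x + 1) / 2 * (bins - 1))
          |                 for x in xs]
          | 
          |     vocab = {}
          |     episode = (
          |         tokenize_text("stack the red block", vocab)  # text
          |         + tokenize_continuous([0.12, -0.40, 0.88])   # angles
          |         + tokenize_continuous([0.05, 0.30, -0.10])   # torques
          |     )
          |     # One flat integer sequence; a single autoregressive model
          |     # is trained to predict the next id, whatever it encodes.
          |     print(episode)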
        
         | version_five wrote:
         | I don't find it surprising that a single network can do all
         | those things with appropriate formatting of the data. In itself
         | it just means the network has a large enough capacity to learn
         | all the different tasks.
         | 
          | The interesting question imo, which they studied, is what kind
          | of added generalization takes place by learning across the
          | different tasks. For example, does learning multiple tasks make
          | it better at a given task than a model that is just trained for
          | one task, and can it generalize to new tasks (out of
          | distribution)?
          | 
          | They looked at how it performed on held-out tasks (see fig 9 in
          | the paper). I'm still getting my head around the result, though,
          | so I can't summarize their findings yet.
         | 
         | Edit: the paper is here
         | https://storage.googleapis.com/deepmind-media/A%20Generalist...
         | 
         | There is currently another submission on the front page that
         | links to it directly.
        
           | f38zf5vdt wrote:
           | The paper is linked to at the top of this article, in the
           | header.
        
           | woeirua wrote:
           | Yeah, Figure 9 is the money figure in this paper and it
           | actually splashes some cold water on the claims in the rest
           | of the paper. While it generalizes OK to some tasks that are
           | held out, it does pretty poorly on the Atari boxing task,
           | which they openly admit is quite different from the others.
            | Gato seems more likely to be a competent attempt at brute-
            | forcing our way towards weak general AI, which is a valid
            | approach, but the question then will always be how it does
            | with something it's never seen before, and how do you possibly
            | brute-force every possible situation? I think we're heading
            | more towards a constellation of very intelligent expert
            | machines for particular tasks that may be wrapped into a
            | single package, but that are not strong AI.
        
       | minimaxir wrote:
       | Transformer models have clearly demonstrated that you can convert
       | _anything_ into an input embedding and the AI can learn from it,
       | even if the embeddings are from drastically distant domains.
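        | 
        | As a toy illustration of what "convert anything into an input
        | embedding" means mechanically (dimensions here are arbitrary and
        | this is a sketch, not any particular model's code): each modality
        | just needs its own projection into a shared width, after which
        | the downstream network sees identical shapes.
        | 
        |     import numpy as np
        | 
        |     d_model = 16                      # shared embedding width
        |     rng = np.random.default_rng(0)
        | 
        |     # One projection per modality, all landing in d_model space.
        |     project_text  = rng.normal(size=(d_model, 300))  # word vecs
        |     project_image = rng.normal(size=(d_model, 768))  # patches
        |     project_audio = rng.normal(size=(d_model, 128))  # frames
        | 
        |     for x, proj in [(np.ones(300), project_text),
        |                     (np.ones(768), project_image),
        |                     (np.ones(128), project_audio)]:
        |         print((proj @ x).shape)       # (16,) every time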
        
       | hans1729 wrote:
        | I'm not sure how to word my excitement about the progress we have
        | seen in AI research in the last few years. If you haven't read it,
        | give Tim Urban's classic piece a slice of your attention:
        | https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
        | 
        | It's a very entertaining read from a couple of years ago (I think
        | I read it in 2017), and man, have things happened in the field
        | since then. It feels like things are truly starting to come
        | together. Transformers and then some incremental progress look
        | like a very, very promising avenue. I deeply wonder in which areas
        | this will shape the future more than we are able to anticipate
        | beforehand.
        
         | gurkendoktor wrote:
         | Not you specifically, but I honestly don't understand how
         | positive many in this community (or really anyone at all) can
          | be about this news. Tim Urban's article explicitly touches on
         | the risk of human extinction, not to mention all the smaller-
         | scale risks from weaponized AI. Have we made any progress on
         | preventing this? Or is HN mostly happy with deprecating
         | humanity because our replacement has more teraflops?
         | 
         | Even the best-case scenario that some are describing, of
         | uploading ourselves into some kind of post-singularity
         | supercomputer in the hopes of being conscious there, doesn't
         | seem very far from plain extinction.
        
           | JohnPrine wrote:
            | Agreed. People think of the best-case scenario without
            | seriously considering everything that can go wrong. If we
            | stay on this path, the most likely outcome is human
            | extinction. Full stop.
        
             | JoeAltmaier wrote:
              | Says a random internet post. It takes a little more
              | evidence or argument than hyperbole to be convincing.
        
           | idiotsecant wrote:
           | I think the best-case scenario is that 'we' become something
            | different than we are right now. The natural tendency of
            | life (on the local scale) is toward greater information
            | density. Chemical reactions beget self-replicating molecules
            | beget simple organisms beget complex organisms beget social
            | groups beget tribes beget city-states beget nations beget
            | world communities. Each one of these transitions looks like
            | the death of the previous thing, but in actuality the previous
            | thing is still there, just as part of a new whole. I suspect
           | we will start with natural people and transition to some
           | combination of people whose consciousness exists, at least
           | partially, outside of the boundaries of their skulls, people
           | who are mostly information on computing substrate outside of
           | a human body, and 'people' who no longer have much connection
           | with the original term.
           | 
           | And that's OK. We are one step toward the universe
           | understanding itself, but we certainly aren't the final step.
        
             | 37ef_ced3 wrote:
             | Let's be real.
             | 
             | Not long from now all creative and productive work will be
             | done by machines.
             | 
             | Humans will be consumers. Why learn a skill when it can all
             | be automated?
             | 
             | This will eliminate what little meaning remains in our
             | modern lives.
             | 
             | Then what? I don't know, who cares?
        
               | idiotsecant wrote:
               | >Then what?
               | 
               | Growing tomatoes is less efficient than buying them,
               | regardless of your metric. If you just want really
               | cleanly grown tomatoes, you can buy those. If you want
               | cheap tomatoes, you can buy those. If you want big
               | tomatoes, you can buy those.
               | 
               | And yet individual people still grow tomatoes. Zillions
               | of them. Why? Because we are inherently over-evolved apes
               | who like sweet juicy fruits. The key to being a
               | successful human in the post-scarcity AI overlord age is
               | to embrace your inner ape and just do what makes you
               | happy, no matter how simple it is.
               | 
               | The real insight out of all this is that the above advice
               | is also valid even if there are no AI overlords.
        
               | gurkendoktor wrote:
               | Humans are great at making up purpose where there is
               | absolutely none, and indeed this is a helpful mechanism
               | for dealing with post-scarcity.
               | 
               | The philosophical problem that I see with the "AI
               | overlord age" (although not directly related to AI) is
               | that we'll then have the technology to change the
               | inherent human desires you speak of, and at that point
               | growing tomatoes just seems like a very inefficient way
               | of satisfying a reward function that we can change to
               | something simpler.
               | 
               | Maybe we wouldn't do it precisely because it'd dissolve
               | the very notion of purpose? But it does feel to me like
               | destroying (beating?) the game we're playing when there
               | is no other game out there.
               | 
               | (Anyway, this is obviously a much better problem to face
               | than weaponized use of a superintelligence!)
        
               | idiotsecant wrote:
               | Any game you play has cheat codes. Do you use them? If
               | not, why not?
               | 
               | In a post-scarcity world we get access to all the cheat
               | codes. I suspect there will be many people who use them
               | and as a result run into the inevitable ennui that comes
               | with basing your sense of purpose on competing for finite
               | resources in a world where those resources are basically
               | free.
               | 
               | There will also be many people who choose to set their
               | own constraints to provide some 'impedance' in their
               | personal circuit. I suspect there will also be many
               | people who will simply be happy trying to earn the only
               | resource that cannot ever be infinite: social capital.
               | We'll see a world where influencers are god-kings and
               | your social credit score is basically the only thing that
               | matters, because everything else is freely available.
        
           | londons_explore wrote:
           | > Or is HN mostly happy with deprecating humanity because our
           | replacement has more teraflops?
           | 
           | If we manage to make a 'better' replacement for ourselves, is
            | it actually a bad thing? Our cousins on the hominid family
           | tree are all extinct, yet we don't consider that a mistake.
           | AI made by us could well make us extinct. Is that a bad
           | thing?
        
             | goatlover wrote:
             | > If we manage to make a 'better' replacement for
             | ourselves, is it actually a bad thing?
             | 
             | It's bad for all the humans alive at the time. Do you want
             | to be replaced and have your life cut short? For that
             | matter, why should something better replace us instead of
             | coexist? We don't think killing off all other animals would
             | be a good thing.
             | 
              | > Our cousins on the hominid family tree are all extinct,
             | yet we don't consider that a mistake.
             | 
              | It's just how evolution played out. But if there were
              | another hominid still alive alongside us, advocating for
              | its extinction because we're a bit smarter would be
              | considered genocidal and deeply wrong.
        
             | JoeAltmaier wrote:
              | We have Neanderthal and Denisovan DNA (and two more
              | besides). Our cousins are not exactly extinct - we are a
              | blend of them. Sure, no pure strains exist, but we are not
              | a pure strain either!
        
             | gurkendoktor wrote:
             | Your comment summarizes what I worry might be a more
             | widespread opinion than I expected. If you think that human
             | extinction is a fair price to pay for creating a
             | supercomputer, then our value systems are so incompatible
             | that I really don't know what to say.
             | 
             | I guess I wouldn't have been so angry about any of this
             | before I had children, but now I'm very much in favor of
             | prolonged human existence.
        
               | idiotsecant wrote:
               | > I'm very much in favor of prolonged human existence.
               | 
               | Serious question - why?
        
               | goatlover wrote:
               | Why should general intelligence continue to survive? You
               | are placing a human value on continued existence.
        
               | samdjstephens wrote:
               | What are your axioms on what's important, if not the
               | continued existence of the human race?
               | 
               | edit: I'm genuinely intrigued
        
               | idiotsecant wrote:
               | I suppose the same axioms of every ape that's ever
               | existed (and really the only axioms that exist). My
               | personal survival, my comfort, my safety, accumulation of
               | resources to survive the lean times (even if there are no
               | lean times), stimulation of my personal interests, and
               | the same for my immediate 'tribe'. Since I have a
               | slightly more developed cerebral cortex I can abstract
               | that 'tribe' to include more than 10 or 12 people, which
               | judging by your post you can too. And fortunate for us,
               | because that little abstraction let us get past smashing
               | each other with rocks, mostly.
               | 
               | I think the only difference between our outlooks is I
               | don't think there's any reason that my 'tribe' shouldn't
               | include non-biological intelligence. Why not shift your
               | priorities to the expansion of general intelligence?
        
         | sinenomine wrote:
         | Excitement alone won't help us.
         | 
          | We should ask our compute overlords to perform their
          | experiments in as open an environment as possible, if only
          | because we, the public, should have the power to oversee the
          | exact direction this AI revolution is taking us.
          | 
          | If you think about it, AI safetyism is a red herring compared
          | to a very real scenario of powerful AGIs working safely as
          | intended, just not in our common interest.
          | 
          | The mindset of AGI owners seems like a more pressing safety
          | concern than the hypothetical unsafety of a pile of tensors
          | knit together via gradient descent over internet pictures.
        
         | f38zf5vdt wrote:
         | That human intelligence might just be token prediction evolving
         | from successive small bit-width float matrix transformations is
         | depressing to me.
        
           | chriswarbo wrote:
           | That's a poor usage of "just": discovering that "X is just Y"
           | doesn't _diminish_ X; it tells us that Y is a much more
           | complex and amazing topic than we might have previously
           | thought.
           | 
           | For example: "Life is just chemistry", "Earth is just a pile
           | of atoms", "Behaviours are just Turing Machines", etc.
        
           | xixixao wrote:
           | It's most fascinating (or very obvious) - look at Conway's
           | Game of Life, then scale it up - a lot. Unlimited complexity
           | can arise from very simple rules and initial conditions.
           | 
           | Now consciousness on the other hand is unfathomable and (in
           | its finitude) extremely depressing for me.
        
           | goatlover wrote:
           | Is that what biologists or neuroscientists think the nervous
           | system is actually doing?
        
           | Der_Einzige wrote:
           | Dear god I hope that we are using something more complicated
           | than sampling with top_p, top_k, and a set temperature as our
           | decoder!
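            | 
            | For readers unfamiliar with those terms, here is a minimal
            | sketch of what temperature, top-k and top-p (nucleus)
            | sampling do to a vector of next-token logits (a generic
            | illustration, not any particular library's implementation):
            | 
            |     import numpy as np
            | 
            |     def sample(logits, temperature=1.0, top_k=0, top_p=1.0):
            |         z = np.asarray(logits, float) / max(temperature, 1e-8)
            |         probs = np.exp(z - z.max())   # stable softmax
            |         probs /= probs.sum()
            |         order = np.argsort(probs)[::-1]   # most likely first
            |         if top_k > 0:
            |             probs[order[top_k:]] = 0.0    # keep the k best
            |             probs /= probs.sum()
            |         if top_p < 1.0:
            |             cum = np.cumsum(probs[order])
            |             # drop everything past the smallest nucleus whose
            |             # cumulative probability exceeds top_p
            |             probs[order[cum > top_p][1:]] = 0.0
            |         probs /= probs.sum()
            |         return np.random.choice(len(probs), p=probs)
            | 
            |     print(sample([2.0, 1.0, 0.1, -1.0], top_k=3))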
        
           | triceratops wrote:
           | > That human intelligence might just be token prediction
           | 
           | I mean have you heard the word salad that comes out of so
           | many people's mouths? (Including myself, admittedly)
        
             | londons_explore wrote:
             | Eating salad is good for your health. Not only word salad,
             | but green salad and egg salad.
        
           | 0xBABAD00C wrote:
           | Wait till you find out all of physics is just linear
           | operators & complex numbers
        
             | goatlover wrote:
             | Unless nature is mathematical, the linear operators &
             | complex numbers are just useful tools for making predictive
             | models about nature. The map isn't the territory.
        
         | edouard-harris wrote:
         | That Tim Urban piece is great. It's also an interesting time
         | capsule in terms of which AI problems were and were not
         | considered hard in 2015 (when the post was written). From the
         | post:
         | 
         | > Build a computer that can multiply two ten-digit numbers in a
         | split second--incredibly easy. Build one that can look at a dog
         | and answer whether it's a dog or a cat--spectacularly
         | difficult. Make AI that can beat any human in chess? Done. Make
         | one that can read a paragraph from a six-year-old's picture
         | book and not just recognize the words but understand the
         | meaning of them? Google is currently spending billions of
         | dollars trying to do it. Hard things--like calculus, financial
         | market strategy, and language translation--are mind-numbingly
         | easy for a computer, while easy things--like vision, motion,
         | movement, and perception--are insanely hard for it.
         | 
         | The children's picture book problem is solved; those billions
         | of dollars were well-spent after all. (See, e.g., DeepMind's
         | recent Flamingo model [1].) We can do whatever we want in
         | vision, more or less [2]. Motion and movement might be the
         | least developed area, but it's still made major progress; we
         | have robotic parkour [3] and physical Rubik's cube solvers [4],
         | and we can tell a robot to follow simple domestic instructions
         | [5]. And Perceiver (again from DeepMind [6]) took a big chunk
         | out of the perception problem.
         | 
          | Getting a computer to carry on a conversation [7], let alone
          | draw art on par with human professionals [8], wasn't even
          | mentioned as an example, so laughably out of reach did such
          | things seem in the heathen dark ages of... 2015.
         | 
         | And as for recognizing a cat or a dog -- that's a problem so
         | trivial today that it isn't even worth using as the very first
         | example in an introductory AI course. [9]
         | 
         | If someone re-wrote this post today, I wonder what sorts of
         | things would go into the "hard for a computer" bucket? And how
         | many of _those_ would be left standing in 2029?
         | 
         | [1] https://arxiv.org/abs/2204.14198
         | 
         | [2] https://arxiv.org/abs/2004.10934
         | 
         | [3] https://www.youtube.com/watch?v=tF4DML7FIWk
         | 
         | [4] https://openai.com/blog/solving-rubiks-cube/
         | 
         | [5] https://say-can.github.io/
         | 
         | [6] https://www.deepmind.com/open-source/perceiver-io
         | 
         | [7] https://arxiv.org/abs/2201.08239v2
         | 
         | [8] https://openai.com/dall-e-2/
         | 
         | [9] https://www.fast.ai/
        
       | kaivi wrote:
       | Before you visualize a straight path between "a bag of cool ML
       | tricks" and "general AI", try to imagine superintelligence but
       | without consciousness. You might then realize that there is no
       | obvious mechanism which requires the two to appear or evolve
       | together.
       | 
        | It's a curious concept, well illustrated in the novel Blindsight
        | by Peter Watts. I won't spoil anything here, but I highly
        | recommend the book.
        
         | oldstrangers wrote:
         | You just reminded me I have that book sitting on my shelf.
         | Guess I'll give it a read.
        
         | awestroke wrote:
         | What's the difference between intelligence and consciousness?
         | Could a human be intelligent while not conscious?
        
         | nullc wrote:
         | It's worth mentioning that Blindsight is available online for
         | free: https://www.rifters.com/real/Blindsight.htm
        
         | mach1ne wrote:
         | First you have to define consciousness, and especially the
         | external difference between a conscious and non-conscious
         | intelligence.
        
           | meekmind wrote:
           | Likely insufficient but here is a shot at a materialist
           | answer.
           | 
           | Consciousness is defined as an entity that has an ethical
            | framework that is subordinated to its own physical
           | existence, maintaining that existence, and interfacing with
           | other conscious entities as if they also have an ethical
           | framework with similar parameters who are fundamentally no
           | more or less important/capable than itself.
           | 
           | Contrast with non-conscious super-intelligence that lacks
            | a physical body (likely distributed). Without a physical/atomic
           | body and sense data it lacks the capacity to
           | empathize/sympathize as conscious entities (that exist within
           | an ethical framework that is subordinated to those
           | limitations/senses) must. It lacks the perspective of a
           | singular, subjective being and must extrapolate our
           | moral/ethical considerations, rather than have them ingrained
            | as key to its own survival.
           | 
           | Now that I think about it, it's probably not much different
           | than the relationship between a human and God, except that in
           | this case it's: a machine consciousness and a machine god.
           | 
           | To me, the main problem is that humans (at large) have yet to
           | establish/apply a consistent philosophy with which to
           | understand our own moral, ethical, and physical limitations.
           | For the lack of that, I question whether we're capable of
           | programming a machine consciousness (much less a machine god)
           | with a sufficient amount of ethical/moral understanding -
           | since we lack it ourselves (in the aggregate). We can hardly
           | agree on basic premises, or whether humanity itself is even
           | worth having. How can we expect a machine that _we make_ to
            | do what we can't do ourselves? You might say "that's the
           | whole point of making the machine, to do something we can't"
           | but I would argue we have to understand the problem domain
           | first (given we are to program the machine) before we can
           | expect our creations to apply it properly or expand on it in
           | any meaningful way.
        
         | tomp wrote:
         | I don't think it's necessarily about _consciousness_ per se,
         | but rather about _emotions_ or  "irrationality".
         | 
         | Life has no purpose so clearly there is no _rational_ reason to
          | continue living/existing. A super-rational agent must know
         | this.
         | 
         | I think that intelligence and emotions, in particular _fear of
          | death_ or _desire to continue living_, must evolve in
         | parallel.
        
         | joe_the_user wrote:
         | > _" try to imagine superintelligence but without
         | consciousness."_
         | 
         | The only thing that comes to mind is how many different things
         | come to mind to people when the term "superintelligence" is
         | used.
         | 
          | The thing about this imagination process, however, is that what
          | people produce is a "bag of capacities" without a clear means
          | to implement those capacities. Those capacities would be
          | "beyond human", but in what direction probably depends on the
          | last movie someone watched or something similarly arbitrary,
          | because it certainly doesn't depend on their knowledge of a
          | machine that could be "superintelligent"; none of us have such
          | knowledge. Even if a machine could get to "superintelligence",
          | even DeepMind researchers don't know the path now, because
          | these systems are being constructed as a huge collection of
          | heuristics, and what happens "under the hood" is mysterious
          | even to the drivers here.
         | 
         | Notably, a lot of imagined "superintelligences" can supposedly
         | predict or control X, Y or Z thing in reality. The problem with
         | such hypotheticals is that various things may not be much more
         | easily predictable by an "intelligence" than by us simply
         | because such prediction involves imperfect information.
         | 
          | And that's not even touching on how many things go by the name
          | "consciousness".
        
       | axg11 wrote:
       | Slowly but surely we're moving towards general AI. There is a
       | marked split across general society and even ML/AI specialists
       | between those who think that we can achieve AGI using current
       | methods and those who dismiss the possibility. This has always
       | been the case, but what is remarkable about today's environment
       | is that researchers keep making progress contrary to the
        | doubters' predictions. Each time this happens, the AGI pessimists
       | raise the bar (a little) for what constitutes AGI.
       | 
       | Just in the last five years, here are some categories of
       | pessimistic predictions that have been falsified:
       | 
       | - "AI/ML can't solve scientifically useful problems" - then
       | AlphaFold changed the protein folding field
       | 
       | - "We're entering an AI winter" [0] - then transformers continued
       | to show promise across multiple domains
       | 
       | - "ML models can't perform creative work" - then came GANs, large
       | language models, DALL-E, and more.
       | 
       | - "Generative ML models are just memorizing the dataset!" - then
       | came multiple studies showing this to be false for well trained
       | GANs, diffusion models and other types of generative models. Take
       | a look at DALL-E 2 generated images of "a bear putting on a shirt
       | in H&M".
       | 
       | - "AGI is impossible - look at language models, they have no
       | understanding of the world and make silly mistakes" - the second
       | part is true, large language models are artificially limited due
       | to being language-focused. Nowadays there are approaches such as
       | Gato and other multi-modal models. Humans develop intuition
       | through multiple sources of information: sight, sound, smell, and
       | touch. Given enough multi-modal context I'm confident multi-modal
       | models will be able to show human-like intuition.
       | 
       | I'm not anti-skeptic. Skepticism is essential to all good
       | science. I think the danger of skepticism with respect to AGI is
       | that we're being complacent. Given the trajectory of improvements
        | in machine learning, we should start preparing for a world where
        | AI is indistinguishable from, or far superior to, human
        | intelligence.
       | 
       | [0] - https://www.bbc.com/news/technology-51064369
        
         | version_five wrote:
          | This is interesting research, but it's an extension of studying
          | model capacity and generalization; it is no closer to AGI than
          | previous networks, i.e. it's unrelated.
        
         | dalbasal wrote:
         | I agree about the dialogue between current method skeptics and
         | optimists. It's been this way since the start and it's been
         | productive and fun.
         | 
          | ...one nitpick: I don't think AGI pessimists raise the bar out
          | of bad faith. It's just the nature of observing progress. We
          | discover that an AI can do X while still struggling with Y.
          | 
          | What's the alternative, conclude GPT is sentient? The bar must
         | be raised, because the bar is supposed to represent human
         | intelligence... and we don't know how that works either.
        
         | gcheong wrote:
         | I don't know if we could sufficiently prepare ourselves for
         | such a world. It would seem almost as if we have to build it
         | first so it could determine the best way to prepare us.
        
           | jimbokun wrote:
           | Maybe we could train a model to tell us the best way to
           | prepare.
        
           | gurkendoktor wrote:
           | For one thing, we could try to come up with safety measures
           | that prevent the most basic paperclip maximizer disaster from
           | happening.
           | 
           | At this point I almost wish it was still the military that
           | makes these advances in AI, not private companies. Anyone
           | working on a military project has to have some sense that
           | they're working on something dangerous.
        
         | ajmurmann wrote:
          | > a world where AI is indistinguishable from, or far superior
          | to, human intelligence
         | 
         | I think the part about being "indistinguishable from human
         | intelligence" is potentially a intellectual trap. We might get
         | to it being far superior while still underperforming at some
         | tasks or behaving in ways that don't make sense to a human
         | mind. An AI mind will highly likely work completely differently
         | from humans and communicating with it should be more thought of
         | as communicating with a quite foreign alien than with a human
         | trapped in a computer.
         | 
         | As a comparison, I'm sure there are some tasks in which some
         | animals do better than humans. Yet no human would conclude that
         | humans are inferior to some monkey who might find its way
         | around the rain forest better or whatever we are worse at.
        
           | beaconstudios wrote:
           | Computers are already exponentially more intelligent than
           | humans in constrained domains, like executing mathematics.
           | Presumably we'll just keep expanding this category until
           | they're better at everything than us, all the while reaping
           | industrial benefits from each iterative improvement.
        
           | Hellicio wrote:
           | Only if you don't assume that consciousness comes from
           | complexity.
           | 
            | The physical ability of an animal to see
            | better/differently/faster doesn't matter, as we do not
            | compare/separate ourselves from animals by those factors. We
            | separate ourselves by consciousness, and it might get harder
            | and harder to shut down a PC on which an ML model is running
            | which begs you not to do it.
        
           | axg11 wrote:
           | You're right. I didn't word that very well. Human
           | intelligence vs. AI will always have different qualities as
            | long as one is biological and the other silicon-based. I
            | still think we'll be surprised how quickly AI catches up to
            | human performance on most tasks that comprise modern jobs.
        
             | ajmurmann wrote:
             | I think your wording was fine. My point was more to expand
              | on yours about us getting surprised by progress. In fact,
              | we might have AGI long before we understand what we have
             | because the AI is so foreign to us. In some way we might be
             | building the big pudding from Solaris.
        
           | pmontra wrote:
           | An example of your point, chimps winning over humans at some
           | games
           | 
           | https://www.scientificamerican.com/article/chimps-outplay-
           | hu...
        
         | valas wrote:
         | You complain that the bar keeps getting raised. Is there some
          | good write-up by someone who believes AGI is possible and what
          | it might look like? I.e. what is your definition of the bar
         | where you will say 'now, this is AGI'?
        
           | px43 wrote:
           | I'm still fine with using the Turing Test (now >70 years old)
           | for this.
           | 
           | https://en.wikipedia.org/wiki/Turing_test
           | 
            | I guess a key stipulation there is an interrogator who knows
           | what they're doing, but an AI that can fool an experienced
           | interrogator would be worthy of the AGI title to me.
        
         | hooande wrote:
         | I'd like to see someone make the argument that current models
         | aren't just combining a number of "tricks", similar to a
         | trained animal. My dog can "sit", "stay" and "beg", all using
         | the same model (its brain). Is the dog generally intelligent?
        
           | visarga wrote:
           | How good is your dog at Atari games, stacking cubes and image
           | captioning?
           | 
           | You can actually measure the effect of generality by how fast
           | it learns new tasks. The paper is full of tables and graphs
           | showing this ability.
           | 
            | It's just a small model, 170x smaller than GPT-3, so it has
            | lots of room to grow. But for the first time we have a game-
            | playing agent that knows what "Atari" and "game" mean, and can
           | probably comment on the side of the livestream. AlphaGo only
           | knew the world of the Go board. This agent knows what is
           | outside the box.
        
             | hooande wrote:
             | Playing Atari is cool, but it's just another "trick".
             | Training a computer to do progressively more difficult
             | tasks doesn't seem much more impressive than training an
             | animal to do so.
             | 
             | I see no evidence in the paper that it can learn arbitrary
             | tasks on the fly. It's very impressive, though.
        
               | visarga wrote:
               | > I see no evidence in the paper that it can learn
               | arbitrary tasks on the fly.
               | 
                | Neither can we do that. It takes years to become an
                | expert in any field; we are not learning on the fly like
                | Neo. And that's when there is extensive training material
                | available; in research, it takes thousands of experts to
                | make one small step forward. No one can do it alone, so
                | it would be too much to expect it from a lone zero-shot
                | language model.
                | 
                | On the other hand, the transformer architecture seems to
                | be capable of solving all the AI tasks; it can learn "on
                | the fly" as soon as you provide the training data or a
                | simulator. This particular paper trains on over 600 tasks
                | at once, in the same model.
        
         | adamgordonbell wrote:
         | The question of whether a computer can think is no more
         | interesting than the question of whether a submarine can swim.
        
         | chrisco255 wrote:
         | How do we prepare for super human intelligence? Do you think
         | that the AI will also develop its own _motives_? Or will it
          | just be a tool that we're able to plug into and use for
         | ourselves?
        
           | sinenomine wrote:
           | We prepare for it by domesticating its lesser forms in
           | practice and searching for ways to increase our own
           | intelligence.
           | 
            | Still, it's pretty likely to end up being just a very good
           | intelligent tool, not unlike
           | http://karpathy.github.io/2021/03/27/forward-pass/
        
           | visarga wrote:
           | The danger is really us, the ones who might task the AI to do
           | something bad. Even if the AI has no ill intentions it might
           | do what is asked.
        
           | axg11 wrote:
           | I think AI will largely remain an input-output tool. We still
           | need to prepare ourselves for the scenario where for most
           | input-output tasks, AI will be preferable to humans. Science
           | is an interesting field to focus on. There is so much
           | scientific literature for most fields that it is now
           | impossible to keep up with the latest literature. AI will be
           | able to parse literature and generate hypotheses at a much
           | greater scale than any human or team of humans.
        
             | thebeastie wrote:
             | I don't know why you think that. As soon as it is viable,
             | some unscrupulous actor will surely program an AI with the
             | goal of "make money and give it to me", and if that AI is
             | able to self modify, well that's all that's required for
             | that experiment to end badly because decent AI alignment is
             | basically intractable.
        
           | adamsmith143 wrote:
           | A lot of people at MIRI, OpenAI, Redwood Research, Anthropic
           | etc. are thinking about this.
           | 
           | I think one possibility is that even a sufficiently strong
           | Narrow AI is going to develop strong motivations because it
            | will be able to perform its Narrow task even better. Hence
           | the classic paperclip maximizer idea.
        
           | dougabug wrote:
            | In machine learning, there's a long-term trend towards
           | automating work that used to be done manually. For instance,
           | ML engineers used to spend a lot of time engineering
           | "features" which captured salient aspects of the input data.
           | Nowadays, we generally use Deep Learning to learn effective
           | features. That pushed the problem to designing DNN
           | architectures, which subsequently led to the rise of AutoML
            | and NAS (Neural Architecture Search) methods to save us the
           | trouble. And so on.
           | 
           | We still have to provide ML agents with some kind of
           | objective or reward signal which drives the learning process,
           | but again, it would save human effort and make the process of
           | learning more dynamic and adaptable if we can make machines
           | learn useful goals and objectives on their own.
        
             | jimbokun wrote:
             | And that's when Asimov's Laws of Robotics come into play.
        
         | dekhn wrote:
         | we have been using ML to solve useful problems in biology for
         | more than 3 decades. However, it was usually called "advanced
         | statistics and probability on large data sets" because, to be
         | honest, that's what most modern ML is.
        
           | visarga wrote:
           | > advanced statistics
           | 
           | There's an emergent quality to AI models. Not all statistical
           | models can dream pandas on the moon or solve hundreds of
           | tasks, even without specific training.
        
             | dekhn wrote:
             | I'd love to believe this, but nobody has demonstrated that
             | yet. Also, I'm of the belief that if you have enough ram,
             | either an infinitely tall-and-thin or wide-but-short MLP
             | could do anything transformers can (happy to be pointed at
             | a proof otherwise).
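              | 
              | For context, the classical universal approximation theorem
              | (Cybenko, Hornik, and later refinements) supports the
              | expressiveness half of that belief, though it says nothing
              | about parameter efficiency or trainability: for any
              | continuous $f$ on a compact set $K \subset \mathbb{R}^n$,
              | any continuous non-polynomial activation $\sigma$, and any
              | $\varepsilon > 0$, there exist $N$ and parameters
              | $\alpha_i, w_i, b_i$ such that
              | 
              |     $$\sup_{x \in K}\Big|f(x) - \sum_{i=1}^{N} \alpha_i\,
              |     \sigma(w_i^\top x + b_i)\Big| < \varepsilon$$
              | 
              | i.e. a single wide hidden layer suffices in principle, but
              | the theorem says nothing about how large $N$ must be or
              | whether gradient descent would ever find such weights.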
        
           | adamsmith143 wrote:
           | Of course there's no evidence that this isn't just what Human
           | Brains are doing either.
        
             | TaupeRanger wrote:
              | There it is. The person who thinks human minds are Python
             | programs doing linear algebra.
        
               | adamsmith143 wrote:
               | There's no evidence otherwise. You have to believe that
               | the mind has a materialist basis or else you believe in
               | woo woo magic.
        
               | dekhn wrote:
               | sure, but I think it's fair to say that brains probably
                | aren't doing ballistics calculations when a baseball
                | player sees a pop fly and maneuvers to catch it. Rather,
               | brains, composed mainly of neurons and other essential
               | components, approximate partial differential equations,
               | much like machine learning systems do.
        
               | riversflow wrote:
                | > sure, but I think it's fair to say that brains probably
                | aren't doing ballistics calculations when a baseball
                | player sees a pop fly and maneuvers to catch it.
                | 
                | Well, I know you were talking about throwing, but there
                | is some[1] talk/evidence in the evolutionary
                | biology/neural darwinism community that complex language
                | development was a consequence of humans developing the
                | ability to hunt by throwing rocks (a very complicated and
                | highly mathematical task). From my understanding, after
                | developing the required shoulder/arm morphology to throw
                | at high speed, brain size tripled in early hominids.
               | 
               | So the brain actually might be doing something closer to
               | math than we might think.
               | 
               | [1]https://www.sciencedirect.com/science/article/abs/pii/
               | 002251...
               | 
               | [2]https://link.springer.com/referenceworkentry/10.1007/9
               | 78-3-5...
        
               | TaupeRanger wrote:
               | There's evidence everywhere, every second of every day.
               | It doesn't follow from the mind having a material basis
               | that it is doing linear algebra calculations like a
               | Python machine learning program. That's quite a leap.
        
         | abeppu wrote:
          | I think a key problem is that our understanding of the quality
          | of an ML system is tied to a task. Our mechanism of training is
          | tied to a loss, or some optimization problem. The design, training,
         | and evaluation of these systems is dependent on an externally
         | provided definition of "correct".
         | 
         | But this seems structurally different from how we or even less
         | intelligent animals operate. DALL-E may make "better" art than
         | most humans -- but it does so in response to a human-provided
         | prompt, according to a system trained on human produced or
         | selected images, improving on an externally-provided loss.
         | Whereas a human artist, even if mediocre, is directed by their
         | own interests and judges according to their own aesthetics.
         | Even if some of their outputs are sometimes comparable, they're
         | not really engaged in the same activity.
         | 
         | Methodologically, how do we create agents that aren't just good
         | at several tasks, but make up their own tasks, "play", develop
         | changing preferences for different activities (I think this is
         | more than just "exploration"), etc? Even a dog sometimes wants
         | to play with a toy, sometimes wants to run and chase, sometimes
         | wants to be warm inside. We don't "score" how well it plays
          | with a toy, but we take its desire to play as a sign of
         | greater intelligence than, e.g. a pet iguana which doesn't seem
         | to have such a desire.
         | 
         | Further, how do we create agents that can learn without ever
         | seriously failing? RL systems have many episodes, some of which
         | can end very badly (e.g. your simulated runner falls off the
         | world) and they get to learn from this. We die exactly once,
         | and we don't get to learn from it. Note, learning from others
         | in a social context may be part of it, but non-social animals
         | also can learn to avoid many kinds of serious harm without
         | first experiencing it.
         | 
         | I don't mean to overly discount the current methods -- they're
         | achieving amazing results. But I think even an optimist should
          | be open to the possibility/opportunity that perhaps the
         | current techniques will get us 80% of the way there, but that
         | there are still some important tricks to be discovered.
        
           | phreeza wrote:
           | > Methodologically, how do we create agents that aren't just
           | good at several tasks, but make up their own tasks, "play",
           | develop changing preferences for different activities (I
           | think this is more than just "exploration"), etc? Even a dog
           | sometimes wants to play with a toy, sometimes wants to run
           | and chase, sometimes wants to be warm inside. We don't
           | "score" how well it plays with a toy, but we take its desire
            | to play as a sign of greater intelligence than, e.g. a pet
           | iguana which doesn't seem to have such a desire.
           | 
           | This doesn't sound like it would be so hard to do if you have
           | an agent or ensemble of agents that can already do it. What
           | you probably really want is this behavior to somehow emerge
           | from simple ground rules, which is probably a lot harder.
        
           | sinenomine wrote:
           | > Methodologically, how do we create agents that aren't just
           | good at several tasks, but make up their own tasks
           | 
           | It's a good question, it has been asked a few times, and
           | there are some answers[1][2] already, with the most general
           | being to endow the agent with _intrinsic motivation defined
           | as an information-theoretic objective to maximize some
           | definition of surprise_. Then the agent in question will
           | develop a general curious exploration policy, if trained long
           | enough.
           | 
           | > Further, how do we create agents that can learn without
           | ever seriously failing?
           | 
            | Another good question. One of the good-enough answers here is
            | that you should design _a sequence of value functions_ [3]
            | for your agent, in such a way as to enforce some invariants
            | over its future, possibly open-ended, lifetime. For this
            | specific concern you should ensure that your agent develops
            | some approximation of fear, leading to aversion to
            | catastrophic failure regions in its state space. It's pretty
            | self-evident that we develop such a fear at a young age
            | ourselves, and where we don't, evolution gives us a hand and
            | makes us preemptively fear heights, or snakes, even before we
            | ever see one.
           | 
           | The other answer being, of course, to prove[4] a mathematical
           | theorem around some hard definition of safe exploration in
           | reinforcement learning.
           | 
           | 1. https://people.idsia.ch/~juergen/interest.html
           | 
           | 2. https://www.deepmind.com/publications/is-curiosity-all-
           | you-n...
           | 
           | 3. https://www.frontiersin.org/articles/10.3389/fncom.2016.00
           | 09...
           | 
           | 4. https://arxiv.org/abs/2006.03357
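            | 
            | A deliberately simplified sketch of the intrinsic-motivation
            | idea described above (a prediction-error curiosity bonus; the
            | linear model and names are placeholders, not the method of
            | the cited papers):
            | 
            |     import numpy as np
            | 
            |     class CuriosityBonus:
            |         """Intrinsic reward = error of a learned forward model.
            |         The agent gets paid wherever its model of the world is
            |         wrong, which pushes it toward 'surprising' states."""
            | 
            |         def __init__(self, state_dim, action_dim, lr=1e-2):
            |             self.W = np.zeros((state_dim,
            |                                state_dim + action_dim))
            |             self.lr = lr
            | 
            |         def reward(self, state, action, next_state):
            |             x = np.concatenate([state, action])
            |             err = next_state - self.W @ x   # prediction error
            |             # online least-squares update of the forward model
            |             self.W += self.lr * np.outer(err, x)
            |             return float(np.mean(err ** 2)) # surprise signal
            | 
            | In training, the agent would then optimize something like
            | r_env + beta * bonus.reward(s, a, s_next) for a small beta,
            | so curiosity never completely overrides the task reward.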
        
         | soperj wrote:
         | >- "ML models can't perform creative work" - then came GANs,
         | large language models, DALL-E, and more.
         | 
         | I don't think copying other people's style of artwork is
          | considered creative work; otherwise art forgers would be able
         | to actually make a living doing art, since some of them are
         | really phenomenal.
        
           | jimbokun wrote:
           | Good artists borrow, great artists steal.
        
             | soperj wrote:
             | That's a quote coming from someone who stole repeatedly, so
             | of course they said that.
             | 
             | Alfred Tennyson had this to say: "That great poets imitate
             | and improve, whereas small ones steal and spoil."
        
         | TaupeRanger wrote:
         | And yet, the only thing that really matters out of your entire
         | list is the 1st one: that AI solves problems that actually
          | improve the human condition. And AlphaFold has not done that
         | at all. It may be very nice for people interested in protein
         | folding, but until it actually helps us find something that we
         | wouldn't have found otherwise, and that discovery leads to (for
         | example) an ACTUAL drug or treatment that helps real patients,
         | AND that drug/treatment is actually BETTER than what is already
         | available by helping people live longer or better lives, AI has
         | done nothing. In effect, AI has STILL done nothing meaningful.
         | One could argue, through the use of predatory algorithms, that
         | the ONLY thing it has done is harm.
        
           | robitsT6 wrote:
           | But there have been quite a few scientific papers that have
           | used discoveries from AlphaFold already. There have been many
           | scientists who have been stuck for years, who are suddenly
           | past their previous bottlenecks. What gives you the
           | impression that it hasn't helped us?
        
             | TaupeRanger wrote:
              | I am not saying that AlphaFold won't help scientists
             | publish papers. I am just skeptical (though still hopeful)
             | of it doing anything to improve the human condition by
             | actually making human existence better. Publishing papers
             | can be of neutral or negative utility in that realm.
        
         | PaulHoule wrote:
         | I have been impressed with what I've seen in the last six
         | months but it still seems that GPT-3 and similar language
          | models' greatest talent is fooling people.
         | 
         | The other day I prompted a language model with "The S-300
         | missile system is" and got something that was grammatical but
         | mostly wrong: the S-300 missile system was not only capable of
         | shooting down aircraft and missiles (which it is), but it was
         | also good for shooting at other anti-aircraft missile systems,
         | naval ships, tanks, etc.
         | 
         | All the time Google and Bing try to answer my questions
         | directly but frequently the "lights are on and nobody is home"
         | and the answers just don't make sense.
         | 
         | I see the problem is that people look at the output of these
         | things that are (say) 70% correct and in their mind they fill
         | in the other 30%.
        
           | nmca wrote:
           | Do you really, truly believe this problem is impossible to
           | solve though? Even simple things make strides, eg:
           | https://www.deepmind.com/publications/gophercite-teaching-
           | la...
        
             | PaulHoule wrote:
             | If you've been involved in efforts to develop advanced
             | technologies you might eventually encounter an
             | 
             | https://en.wikipedia.org/wiki/Asymptote
             | 
             | which is described as a risk in great detail
             | 
             | https://www.amazon.com/Friends-High-Places-W-
             | Livingston/dp/0...
             | 
             | it's quite a terrible risk because you often think "if only
             | I double or triple the resources I apply to do this I'll
             | get it." Really though you get from 90% there to 91% to 92%
             | there.... You never get there because there is a structural
             | mismatch between the problem you have and how you're trying
             | to solve it.
             | 
              | My take is that people have been too credulous about the
             | idea that you can just add more neurons and train harder
             | and solve all problems... But if you get into the trenches
             | and ask "why can't this network solve this particular
             | task?" you usually do find structural mismatches.
             | 
             | What's been exciting just recently (last month or so) are
             | structurally improved models which do make progress beyond
             | the asymptote because they are confronting
             | 
             | https://www.businessballs.com/strategy-innovation/ashbys-
             | law...
        
               | mach1ne wrote:
               | Could you link some of these models? An interesting
                | perspective, that asymptote.
        
               | PaulHoule wrote:
                | I first got involved in text classification in the early
                | 00's, and back then the best you could do was "bag of
                | words" models that counted the words in a document but
                | didn't take the order of words into account.
                | 
                | This works great if you are asking a question like "Is
                | this paper about astrophysics?" because the vocabulary
                | used in a document is closely linked to the topic.
               | 
               | Pretty obviously though if you scramble the words in the
               | document you can't reconstruct the original document,
               | some information is lost, and there are some
               | classification tasks that will reach an upper limit
               | (asymptote) in accuracy because in taking the feature set
               | you lost something. (If the task is "did the defendant
               | commit the crime" the heuristic "Tyrone is a thug" works
               | over bag-of-words, but there is no justice in that.) If
               | that system is able to get the right answer for a case
               | where the word order matters, it just got lucky.
               | 
               | You might think "wouldn't it be better to use pairs of
               | words?" but then you run into another problem. You might
               | have a vocabulary of 2,000-20,000 words and get a
               | somewhat useful sample of all of those in a few thousand
               | documents. The number of word pairs is the square of the
               | number of words and you just can't get enough training
               | samples to sample all the possible word pairs.
               | 
                | Sentiment analysis was an early area where bag-of-words
                | broke down because
                | 
                |     I am happy
                | 
                | and
                | 
                |     I am not happy
                | 
               | mean very different things. You'd think now that
               | adjectives like "happy" really are special and so is the
               | word "not" and we could make the system somehow realize
               | that "not X" means the opposite of X. You run into an
               | asymptote situation there because there are a huge number
                | of possible negation patterns, for instance you can say
                | 
                |     I can't say that I am happy
                | 
               | and you can't even say "the negation structure has to be
               | within ten words of the adjective" because there is no
               | limit for how complex nested structures can get in
                | language. The first few patterns you add, like "not X",
                | raise the performance potential of the system a lot, but
                | patterns you add after that each make a smaller and
                | smaller contribution to the performance and you again
                | reach an asymptote.
               | 
               | Today we have all kinds of embeddings and they are a step
               | forward but they also run into the risk of throwing
               | critical information away, and in a multi-step system you
               | are doomed if an early step does that. I've walked away
               | from some projects where people required high accuracy
               | and they were stuck on using word embeddings that would
               | never attain it. You can think about information loss in
               | embeddings the same way as you do with simpler features
               | except it is a lot more complicated and a lot of people
               | look away instead of confronting the problem.
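                | 
                | A minimal sketch of the word-order problem described above,
                | assuming scikit-learn's CountVectorizer (purely
                | illustrative):
                | 
                |     from sklearn.feature_extraction.text import CountVectorizer
                | 
                |     # Bag-of-words drops word order, so these two sentences
                |     # end up with nearly identical feature vectors.
                |     docs = ["I am happy", "I am not happy"]
                |     vec = CountVectorizer()
                |     X = vec.fit_transform(docs).toarray()
                |     print(vec.get_feature_names_out())  # ['am' 'happy' 'not']
                |     print(X)  # [[1 1 0], [1 1 1]] -- differ only in 'not'
                | 
                |     # Word pairs blow up combinatorially: a 20,000-word
                |     # vocabulary already has 400,000,000 possible pairs, far
                |     # more than a few thousand documents can sample.
                |     print(20_000 ** 2)  # 400000000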
        
           | dougabug wrote:
           | Sure, but GPT-3 was trained by self-supervised learning on
           | only static text. We see how powerful even just adding
           | captions to text can be with the example of DALLE-2. GATO
           | takes this further by letting the large scale Transformer
           | learn in both simulated and real interactive environments,
           | giving it the kind of grounding that the earlier models
           | lacked.
        
             | PaulHoule wrote:
             | I will grant that the grounding is important.
             | 
             | The worst intellectual trend of the 20th century was the
             | idea that language might give you some insight into
             | behavior (Sapir-Whorf hypothesis, structuralism, post-
             | structuralism, ...) whereas language is really like the
             | evidence left after a crime.
             | 
             | For instance, language maximalists see mental models as a
             | fulcrum point for behavior, and they are, but they have
             | nothing to do with language.
             | 
             | I have two birds that come to my window. One of them has no
             | idea of what the window is and attacks her own reflection
              | hundreds of times a day. She can afford to do it because
              | her nest is right near the bird feeder and she doesn't need
              | to work to eat; in fact it probably seems meaningful to her
             | that another bird is after her nest. This female cardinal
             | flies away if I am in the room where she is banging.
             | 
             | There is a rose-breasted grosbeak, on the other hand, that
             | comes to the same window. She doesn't mind if I come close
             | to the window, instead I see her catch the eye of her
             | reflection and then catch my eye. She basically understands
             | the window.
             | 
             | Here you have two animals with two different acquired
             | mental models... But no language.
             | 
             | What I like about the language-image models is how the
             | image grounds reality outside language, and that's
             | important because the "language instinct" is really a
             | peripheral that attaches to an animal brain. Without the
             | rest of the animal it's useless.
        
           | logifail wrote:
            | > I see the problem as being that people look at the output
            | of these things that are (say) 70% correct and in their mind
            | they fill in the other 30%.
           | 
           | Q: Is there also some element of survival bias in the mix?
           | 
           | If you prompt GPT-3 with something and the answer is garbage,
           | you probably don't write it up on your blog. If you get
           | something that makes sense, then you do.
        
             | PaulHoule wrote:
             | That's true for most people. It's the opposite for me!
        
           | jimbokun wrote:
           | Do you think that is a solvable problem with tweaks to the
           | current training model? Or requires a fundamentally different
           | approach?
        
             | PaulHoule wrote:
             | It might be basically the same process as today but with
             | several big new ideas (some of which might seem simple in
             | retrospect...)
             | 
             | The quality of the training set is also critical, more so
             | than the quantity. Some of these clever ideas for creating
              | a lot of training data without any work, such as "guess the
              | next word", can't really capture semantics.
             | 
             | I think it really takes multi-task training, like what the
             | article we are talking about is advocating. That forces the
             | upstream part of the network to learn features that capture
             | important semantics.
        
         | rsfern wrote:
         | > - "AI/ML can't solve scientifically useful problems" - then
         | AlphaFold changed the protein folding field
         | 
          | AlphaFold is a big deal, but AI in science has been a really
          | hot topic for most of the past decade.
          | 
          | Also, I still wouldn't really call AlphaFold "intelligence";
          | it's doing structure prediction, which is cool, but it's a long
          | way from scientific intelligence.
        
           | VikingCoder wrote:
           | I wonder if you get how much we've moved the goalposts on
           | "intelligence."
           | 
           | Once upon a time, it was considered "intelligent" to be able
           | to add.
           | 
           | Then "intelligence" was tool use, which we thought only
           | humans could do.
           | 
           | Then we swore it took "intelligence" to play Go as well as a
           | beginner human.
           | 
           | What set of tasks would you, right now, consider to be
           | demonstrative of "intelligence" if a computer can do them?
           | Then we can look back later at your response, and see how
           | long it took each one to happen.
        
             | Jensson wrote:
             | > What set of tasks would you, right now, consider to be
             | demonstrative of "intelligence" if a computer can do them?
             | 
             | Be able to apply for, get and hold a remote job and get
             | paid for a year without anyone noticing, or something
             | equivalent to that. I said this many years ago and it still
             | hasn't happened.
             | 
             | The people who are moving the goalposts aren't the
             | sceptics, it is the optimists who always move the goalposts
             | to exactly where we are right now and say "see, we reached
             | this incredible goalpost, now you must concede that this is
             | intelligent!".
        
               | VikingCoder wrote:
               | Why must it apply for a job, rather than just DO a job?
               | 
               | But maybe some combination of this [1] and this [2] would
               | do it.
               | 
               | If you want to know about a computer actually DOING a
               | remote job for a year without anyone noticing, I'll
               | conclude with many links [a-i].
               | 
               | [1] : https://thisresumedoesnotexist.com/ (Sorry for the
               | bad certificates.)
               | 
               | [2] : https://www.businessinsider.com/tiktoker-wrote-
               | code-spam-kel...
               | 
               | [a] : An original claim of just that: https://www.reddit.
               | com/r/antiwork/comments/s2igq9/i_automate...
               | 
               | [b] : Coverage of that post: https://www.newsweek.com/no-
               | harm-done-it-employee-goes-viral...
               | 
               | [c] : https://www.reddit.com/r/antiwork/comments/p3wvdy/i
               | _automate...
               | 
               | [d] : https://www.reddit.com/r/AskReddit/comments/jcdad/m
               | y_wife_wo...
               | 
               | [e] : https://www.reddit.com/r/talesfromtechsupport/comme
               | nts/277zi...
               | 
               | [f] : https://www.reddit.com/r/AskReddit/comments/tenoq/r
               | eddit_my_...
               | 
               | [g] : https://www.reddit.com/r/AskReddit/comments/vomtn/u
               | pdate_my_...
               | 
               | [h] : https://www.reddit.com/r/AmItheAsshole/comments/ew6
               | gmd/aita_...
               | 
               | [i] : https://www.reddit.com/r/talesfromtechsupport/comme
               | nts/7tjdk...
               | 
               | I mostly share the last few because of all of the "me,
               | too" comments on them.
               | 
               | There are several instances in there where an employer
               | has no idea they are paying a salary, but a computer is
               | doing the vast majority of the actual work.
               | 
               | I feel like this is a "business world Turing test," like,
               | "would an employer pay money for it, thinking it was a
               | human." And I feel like I've provided evidence that has
               | actually occurred.
        
               | Jensson wrote:
               | > Why must it apply for a job, rather than just DO a job?
               | 
               | Because being able to manage a business relationship is a
               | part of the job. If you could show an AI which got a job,
               | then wrote a simple script that automated the AI's job
                | and then coasted for a year, that would be fine, but your
                | links are just humans doing that; I want an AI that can
                | do that before I consider it intelligent.
               | 
               | But thanks for demonstrating so clearly how AI proponents
               | are moving goalposts backward to make them easy to meet.
        
               | VikingCoder wrote:
               | Should the AI be able to use a real human's SSN? And
               | resume, to be able to pass a background check? Can a real
               | human show up to interview, and take a drug test? Can we
               | have real humans provide references, or must those be
               | faked too? Must the computer go to high school and
               | college, to have real transcripts to validate?
               | 
               | Do we need to have a computer baby trick doctors into
               | issuing it a birth certificate, so it can get its own
               | SSN, and then the computer baby needs to have a physical
               | body that it can use to trick a drug test with artificial
               | urine, and it also needs to be able to have either
               | computer-generated video and audio meetings, or at least
               | computer-generated audio calls?
               | 
               | Or can you list some jobs that you think require no SSN,
               | no physical embodiment, no drug test, no video or audio
                | teleconferencing?
               | 
               | Since you're accusing me of moving the goalposts
               | backwards to make it "easy," let's have you define
               | exactly where you think the goalposts should be, for your
               | intelligence test.
               | 
               | Or maybe, replacing a human driver (or some other job),
               | 1:1, for a job a human did yesterday, and a computer does
               | today could be enough? If it's capable of replacing a
               | human, do you then not think the human needed
               | intelligence to do their job?
        
               | Jensson wrote:
                | You can use a real person's contact details as long as the
                | AI does all communication and work. Also it has to be the
                | same AI: no altering the AI after you see the tasks it
                | needs to perform after it gets the job; it has to
                | understand that itself.
               | 
               | For teleconferencing it could use text to speech and
               | speech to text, they are pretty good these days so as
               | long as the AI can parse what people say and identify
               | when to speak and what to say it should do fine:
               | 
               | https://cloud.google.com/text-to-speech
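                | 
                | For the speech side, something in the spirit of the Cloud
                | TTS Python quickstart would do (a rough sketch from the
                | documented API, not a tested integration):
                | 
                |     from google.cloud import texttospeech
                | 
                |     client = texttospeech.TextToSpeechClient()
                |     response = client.synthesize_speech(
                |         input=texttospeech.SynthesisInput(
                |             text="Sounds good, I'll send the patch today."),
                |         voice=texttospeech.VoiceSelectionParams(
                |             language_code="en-US"),
                |         audio_config=texttospeech.AudioConfig(
                |             audio_encoding=texttospeech.AudioEncoding.LINEAR16),
                |     )
                |     with open("reply.wav", "wb") as f:
                |         f.write(response.audio_content)  # play this into the call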
               | 
               | But it might be easier to find a more hacker friendly job
               | where all you need is somewhere for them to send money
               | and they just demand you to write code and answer emails.
               | There aren't that many such jobs, but they exist and you
               | just need one job to do this.
        
               | VikingCoder wrote:
               | I find it interesting that you have not put any kind of
               | limit on how much can be spent to operate this AI.
               | 
               | Or on what kinds of resources it would have access to.
               | 
               | Could it, for instance, take its salary, and pay another
               | human to do all or part of the job? [1]
               | 
               | Or how about pay humans to answer questions for it? [2]
               | [3] Helping it understand its assignments, by breaking
               | them down into simpler explanations? Helping it implement
               | a few tricky sub-problems?
               | 
                | Does it have to make more than its total operational
                | expenses, or could I spend tens or hundreds of times as
                | much as its salary, to afford the compute resources to
                | implement it?
               | 
               | You also haven't indicated how many attempts I could
               | make, per success. Could I, for instance, make tens of
               | thousands of attempts, and if one holds down a job for a
               | year, is that a success?
               | 
               | Also, just to talk about this a little bit, I'll remind
               | you that not all jobs require getting hired. Some people
               | are entrepreneurs. Here's an example that should be
               | pretty interesting. [4] It sure sounds like an AI could
               | win at online poker, which could earn it more than the
               | fully remote job you're envisioning...
               | 
               | [1] : https://www.npr.org/sections/thetwo-
               | way/2013/01/16/169528579...
               | 
               | [2] : https://www.fiverr.com/
               | 
               | [3] : https://www.mturk.com/
               | 
               | [4] : https://www.sciencedaily.com/releases/2019/07/19071
               | 1141343.h....
        
               | Jensson wrote:
               | I said it has to manage all communications and do all the
               | work, so no forwarding communications to third party
               | humans. If it can convince other humans in the job to do
               | all its work and coast that way it is fine though.
               | 
                | > Does it have to make more than its total operational
                | expenses, or could I spend tens or hundreds of times as
                | much as its salary, to afford the compute resources to
                | implement it?
               | 
                | Yes, spend as much as you want on compute, the point is
                | to show some general intelligence and not to make money.
                | So even if this experiment succeeds there will be a ton of
                | work left to do before the singularity, which is why I
                | chose this kind of work as it is a nice middle ground.
               | 
               | > You also haven't indicated how many attempts I could
               | make, per success. Could I, for instance, make tens of
               | thousands of attempts, and if one holds down a job for a
               | year, is that a success?
               | 
                | If the AI applies to 10,000 jobs and holds one of them
                | for a year and gets paid, that is fine. Humans do similar
                | things. Sometimes things fall between the cracks, but
                | that is pretty rare so I can live with that probability;
               | if they made a bot that can apply to and get millions of
               | jobs to get high probabilities of that happening then
               | I'll say that it is intelligent as well, since that isn't
               | trivial.
        
         | parentheses wrote:
         | This!! Can't agree more. AI will continue to surprise us until
         | it takes over.
        
         | mhitza wrote:
         | I'm skeptical because we are building black boxes. How do you
         | fix something you can't reason about?
         | 
          | These billion-parameter boxes are outside the reach of your
          | everyday developers. The cost of propping up the
          | infrastructure makes them tenable only for megacorps.
          | 
          | Most of us aren't moving goalposts, but are very much skeptical
          | of the things we are being oversold on.
          | 
          | I personally think we are still far away from AGI, and neural
          | networks of any variety are converging on a local optimum in the
          | AI design space. I would enjoy "talking" with an AI that
          | doesn't have the contextual memory of the proverbial goldfish.
          | 
          | The really scary thing is that these objectively unprovable
          | systems are plopped into existing systems and are more and more
          | in charge of automatic decision making. A corporation's wet
          | dream, if they can absolve themselves of any responsibility:
          | "the algorithm can't lie!"
        
           | rictic wrote:
           | You're talking about a different sort of skepticism, about
           | whether the effects of an AGI would be good or bad if one was
           | produced with these methods.
           | 
           | The skepticism that the parent comment was discussing was
           | skepticism about whether we're on a path to AGI, for good or
           | for ill.
        
           | ngamboa wrote:
        
           | Jyaif wrote:
           | > I'm skeptical because we are building black boxes.
           | 
           | Just want to point out that you are also a blackbox. And if
           | you are going to say that you are not a blackbox because you
           | can explain your reasoning, just know that some AIs already
           | do that too.
        
             | digitcatphd wrote:
             | To be fair, his point is you can't fix a black box and the
             | human mind is still more a discipline of philosophy than
             | modern science.
        
               | bradleykingz wrote:
               | Maybe we'll end up creating an artificial mind.
        
               | ben_w wrote:
               | I suspect we will. I hope we don't give it e.g. a dark
               | triad personality disorder when we do, though I fear we
                | may -- I suspect there are more ways to make a broken mind
                | than a healthy one.
        
           | Hellicio wrote:
            | They are black boxes for the normal user the same way a
            | smartphone is a black box.
            | 
            | None of my close family understands the technical detail from
            | bits to an image.
            | 
            | There are also plenty of expert systems where plenty of
            | developers see them as black boxes. Even normal databases and
            | query optimizations are often enough black boxes.
            | 
            | As long as those systems perform better than existing systems,
            | that's fine by me. Take autopilot: as long as we can
            | show/prove well enough that it drives better than an
            | untrained 18 year old or an 80 year old (to take extremes, I'm
            | actually quite an average driver myself), all is good.
            | 
            | And one very very big factor in my point of view: we never
            | ever had the software equivalent of learning before. When you
            | look at Nvidia Omniverse, we are able to simulate those real
            | life things so well, so often, in such different scenarios,
            | that we are already out of the loop.
            | 
            | I can't drive 10 million km in my lifetime (I think). The
            | cars from Google and Tesla already did that.
            | 
            | Yesterday at Google I/O they showed the big 540-billion
            | parameter network, and for Google this is the perfect excuse
            | to gather and put all of the data they always had into
            | something they can now monetize. No one can ask Google for
            | money now like the press did (same with DALL-E 2).
            | 
            | I think it's much more critical that we enforce/force
            | corporations to make/keep those models free for everyone to
            | use. Unfortunately I have no clue how much hardware you need
            | to run those huge models.
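            | 
            | For a rough sense of scale, a back-of-envelope sketch (weights
            | only, assuming fp16; real deployments also need memory for
            | activations, caches and redundancy):
            | 
            |     params = 175e9          # a GPT-3-sized model
            |     bytes_per_param = 2     # fp16
            |     gb = params * bytes_per_param / 1e9
            |     print(f"{gb:.0f} GB just to hold the weights")  # ~350 GB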
        
           | uoaei wrote:
           | > I'm skeptical because we are building black boxes.
           | 
           | An article came up a couple days ago that points to some
           | interpretable features of the so-called black boxes you refer
           | to. It's not that they are black boxes, it's that our torches
           | are not yet bright enough.
           | 
           | https://vaclavkosar.com/ml/googles-pathways-language-
           | model-a...
           | 
            | > Most of us aren't moving goalposts, but are very much
            | skeptical of the things we are being oversold on.
           | 
           | I think a shift in perspective is warranted here. It's
           | becoming increasingly clear that we may have vastly
           | overestimated our own intelligence. Human exceptionalism is
           | crumbling before us as we see how limited the tools are that
           | pull off such incredible stunts. Judging based on other
           | neuroscience and psychology research coming out, it really
           | does seem like we are no more than statistical inference
           | machines with specialized hardware that allow us to pack a
           | lot of processing power into a small, energy-efficient
            | system. The next thing we need to figure out is better
            | learning algorithms, which probably depend quite heavily on
            | the particular physical architecture.
        
         | sharikous wrote:
         | And still some properties of humans are innate and you can't
         | "train" on them. So brute force imitation is limited as a
         | method for producing content.
         | 
         | An erotic novelist has their human brain and human instincts to
         | guide them in writing their work.
         | 
          | An AI learns by examples, or at best on a dataset of works
          | labeled by humans. But it doesn't have a human brain at its
          | disposal to query directly, without interfaces, to define
          | what is erotic the way a writer does.
        
           | humpday69 wrote:
           | > An erotic novelist has their human brain and human
           | instincts to guide them in writing their work.
           | 
            | An ML agent trained on all the erotic novels ever written,
            | weighted by critical and popular success, might well be quite
            | capable of generating sequels, "original" stories, or even
            | stories bespoke to each reader.
           | 
           | Good Will Hunting suggests first-hand experience is
           | irreducible: "You can't tell me what it smells like in the
           | Sistine Chapel." https://youtu.be/oRG2jlQWCsY
           | 
           | To which Westworld counters: "Well if you can't tell, does it
           | matter?" https://youtu.be/kaahx4hMxmw
           | 
           | I think the cowboys have it. For the moment though, it's
           | still up to humans to decide how this plays out.
        
         | davesque wrote:
         | I generally agree that AI continues to impress in very specific
         | ways but, to be fair, some of the points you make are
         | debatable. For example, I would argue that the development of
          | GANs and other algos does not necessarily disprove the statement
         | "ML models can't perform creative work." They definitely
         | represent meaningful steps in that direction, but I don't think
         | it's hard to find flaws with generated content. On the other
         | hand, AI definitely has punted the ball over many moved
         | goalposts as with the AlphaFold example.
        
           | solveit wrote:
           | > I don't think it's hard to find flaws with generated
           | content
           | 
            | I do wonder whether, if you were to apply the same level of
            | scrutiny to individual humans, you wouldn't also conclude that
            | most people cannot do creative work.
        
             | davesque wrote:
             | I was thinking more about things like the weird, blurry,
             | dream-like artifacts that you see in some GAN-generated
             | content. Things that look like work done by someone who was
             | both severely impaired yet somehow still extremely
             | meticulous. Things like that seem characteristically un-
             | human.
        
               | solveit wrote:
               | Ah I see, I agree that GAN-generated content has inhuman
               | tells. But I don't think that necessarily speaks to the
               | creativeness of the work.
        
         | Barrin92 wrote:
         | I don't think many people were making the claims that AI can't
         | solve any scientific problems or can't perform creative work at
         | all. That sounds like a big strawman. Before ML was getting big
         | there were AI systems that created art.
         | 
          | What sceptics have actually been saying is that the first-step
          | fallacy still applies. Getting 20% to a goal is _no_ indication
          | at all that you're getting 100% to your goal, or as it's often
          | put, you don't get to the moon by climbing up trees. For people
          | who work with gradients and local maxima all day that idea
          | seems weirdly absent when it comes to the research itself. In
          | the same sense I don't have the impression that the goalpost of
          | AGI has been moved up, but that it's been moved _down_. When
          | Minsky et al. started to work on AI more than half a century
          | ago the goal was nothing less than to put a mind into a
          | machine. Today our fridges are 'AI powered', and when a neural
          | net creates an image or some poetry there's much more agency
          | and intent attributed to it than there actually is.
         | 
         | I think it was Andrew Ng, a very prominent ML researcher
          | himself, who pointed out that concerns about AGI make about as
         | much sense as worrying about an overpopulation on Mars. We make
         | models bigger, we fine tune them and they perform better. I
         | don't think many AGI sceptics would be surprised by that. But I
         | don't think there is any indication that they are moving
         | towards human level intellect at some exponential rate. If
         | DALL-E suddenly started to discuss philosophy with me I'd be
         | concerned, it making a better image of a bear if you throw some
         | more parameters at it is what we'd expect.
        
           | momojo wrote:
           | Self driving cars come to mind as well. I remember 2015, when
           | my friends would debate the self-driving Trolley problem over
           | lunch. We were worried if society was ready for an owner-less
           | car market; I seriously wondered if I would have to have a
           | license in the future, or if I should keep it just in case.
        
           | yldedly wrote:
           | The notions that are crucial for distinguishing between
           | intelligence and what large NNs are doing, are generalization
           | and abstraction. I'm impressed with DALL-E's ability to
           | connect words to images and exploit the compositionality of
           | language to model the compositionality of the physical world.
           | Gato seems to be using the same trick for more domains.
           | 
           | But that's riding on human-created abstractions, rather than
           | creating abstractions. In terms of practical consequences,
           | that means these systems won't learn new things unless humans
            | learn them first and provide ample training data.
           | 
           | But someday we will develop systems that can learn their own
           | abstractions, and teach themselves anything. Aligning those
           | systems is imperative.
        
           | rytill wrote:
           | > concerns about AGI make about as much sense as an
           | overpopulation on Mars
           | 
           | I disagree strongly that this is an apt analogy. Planning
           | strategies for dealing with overpopulation on Mars is
           | contrived and unnecessary, whereas planning for AGI is more
           | reasonable.
           | 
           | The creation of AGI is a more important event than
           | overpopulation of any given planet. There is good reason to
           | believe that mishandling the creation of AGI would pose a
           | permanent existential threat to humans. Overpopulation on
           | Mars would only be an existential threat if we believed it to
           | be followed by an exhausting of resources leading to
           | extinction of all humans in our solar system. It is contrived
           | to worry about that now.
           | 
           | There is no good way to know just how close or far we are
           | from AGI like there would be to predict overpopulation on
           | Mars. In general, we have a strong grasp on the fundamental
           | dynamics of overpopulation, whereas we don't yet have a
           | strong grasp on how intelligence works.
           | 
           | People have been very bad at predicting when AI would be
           | capable of accomplishing tasks. There have been many under-
           | and over- estimates by prominent researchers. If progress is
           | unpredictable, there is some significant chance we are closer
           | to AGI than most people think.
           | 
           | AGI is both far more important and more probable than
           | overpopulation of Mars in the next 20 years.
           | 
           | > But I don't think there is any indication that they are
           | moving towards human level intellect at some exponential
           | rate.
           | 
           | Is there any very strong indication that progress is
           | plateauing, or that the current approach of deep learning is
           | definitely not going to work? If your benchmark is simply
           | "can it do X, or not?", it's not a very good benchmark for
           | determining progress. That's why benchmarks usually have
           | scores associated with them.
           | 
           | > If DALL-E suddenly started to discuss philosophy with me
           | I'd be concerned
           | 
           | If DALL-E suddenly started discussing philosophy with you in
           | a way that would concern you in that moment, you should have
           | been concerned for years.
        
         | ReadEvalPost wrote:
         | Certainly we can say our ML models are becoming more general in
         | the sense of being able to cross-correlate between multiple
         | domains. This is quite a different story than "becoming a
         | general intelligence." Intelligence is a property of a being
          | with will. These models, and machines in general, do not possess
         | will. It is we who define their form, their dataset, their loss
         | function, etc. There is no self-generation that marks an
         | intelligent being because there is no self there at all.
         | 
         | It is only the case that ML expands our own abilities, augments
         | our own intelligence.
        
           | dekhn wrote:
           | Assumption of will is unfounded, scientifically speaking.
           | Your entire argument is philosophical, not scientific. The
            | subjective experience of free will is in no way irrefutable
           | proof that will is required for intelligence.
        
             | svieira wrote:
             | Since a working (in the sense of 'working title') ontology
             | and epistemology are _required_ for science (read "natural
             | philosophy") is this argument not arguing that "the
             | argument for quarks is unfounded, biologically speaking"?
             | That said, I _believe_ that both Aristotle and St. Thomas
             | agree with you that will and intellect are not necessarily
             | connected, so you could have an intellectual power with no
             | freedom to choose.
        
             | ReadEvalPost wrote:
             | Do you love? Do you dance? Do you desire? Do you rage? Do
             | you weep? Do you choose? Every moment of your existence you
             | exert your will on the world.
             | 
             | A denial of will is a denial of humanity. I want nothing of
             | a science that would do such a thing.
        
               | tsimionescu wrote:
               | Why would an AGI be unable to do these things? Sure, if
               | you believe in a transcendental soul (mind/body dualism)
               | then you can argue that it can't because Divinity has
               | simply not endowed it with such, and that claim can
               | neither be proven nor disproven. But it's an extra
               | assumption that gets you nothing.
               | 
               | Note that I personally believe we are more than a century
               | away from an AGI, and think the current models are
               | fundamentally limited in several ways. But I can't
               | imagine what makes you think there can't be a Ghost in
               | the Machine.
        
               | dekhn wrote:
               | Appeals to humanity do not convince me of anything. I do
               | all those things (well, I dance terribly) but again,
               | those are not indications of will, and it's entirely
               | unclear what magical bit in our bodies is doing that,
               | when computers cannot.
               | 
                | Even if you don't want to have anything to do with such a
                | science, such a science will move on without you.
               | 
               | "A version of an oft-told ancient Greek story concerns a
               | contest between two renowned painters. Zeuxis (born
               | around 464 BC) produced a still life painting so
               | convincing that birds flew down to peck at the painted
               | grapes. A rival, Parrhasius, asked Zeuxis to judge one of
               | his paintings that was behind a pair of tattered curtains
               | in his study. Parrhasius asked Zeuxis to pull back the
               | curtains, but when Zeuxis tried, he could not, as the
               | curtains were included in Parrhasius's painting--making
               | Parrhasius the winner."
        
               | Surgeus wrote:
               | This points out something very related that I think about
               | a lot - can you prove to me that you do any of those
               | things? Can I prove to you that I do any of those things?
               | That either of us have a will? When would you be able to
               | believe a machine could have these things?
               | 
                | In Computing the Mind, Shimon Edelman presents a concept
                | that I've come to agree with: at some point you need
               | to take a leap of faith in matters such as consciousness,
               | and I would say it extends to will as well (to me what
               | you've described are facets of human consciousness). We
               | take this leap of faith every time we interact with
               | another human; we don't need them to prove they're
               | conscious or beings with a will of their own, we just
               | accept that they possess these things without a thought.
               | If machines gain some form of sentience comparable to
               | that of a human, we'll likely have to take that leap of
               | faith ourselves.
               | 
               | That said, to claim that will is necessary for
               | intelligence is a very human-centered point of view.
               | Unless the goal is specifically to emulate human
               | intelligence/consciousness (which is a goal for some but
               | not all), "true" machine intelligence may not look
               | anything like ours, and I don't think that would
               | necessarily be a bad thing.
        
               | dekhn wrote:
                | Not just consciousness -- all of science requires a leap of
                | faith: the idea that human brains can comprehend general
               | universal causality. There is no scientific refutation
               | for Descartes' Great Deceiver- it's taken as a given that
               | humans could eventually overcome any
               | https://en.wikipedia.org/wiki/Evil_demon through their
               | use of senses and rationality on their own.
               | 
               | I have long worked on the assumption that we can create
               | intelligences that no human could deny have subjective
               | agency, while not being able to verify that. I did some
               | preliminary experiments on idle cycles on Google's
               | internal TPU networks (IE, large-scale brain sims using
               | tensorflow and message passing on ~tens of pods
               | simultaneously) that generated interesting results but I
               | can't discuss them until my NDA expires in another 9
               | years.
        
           | jimbokun wrote:
            | I don't think will is inherent to the meaning of
            | intelligence, as it's commonly used.
        
         | gitfan86 wrote:
         | Tesla FSD is quickly becoming less of a software problem and
         | more of a problem of semantics.
         | 
         | If the car drives someone to and from work 30 days in a row
         | without a problem, is it truly FSD? What about 300 days? Where
         | do you draw the line? 1000x safer than the average human?
         | 
          | Same thing here with AI. How many conversations with GPT-X need
          | to happen without a stupid response from GPT before we call it
          | real-world AI?
        
           | browningstreet wrote:
           | Do we account for stupid responses from humans in human
           | communication in the targets?
        
           | tsimionescu wrote:
           | How about first getting to "as safe/performant as a non-
           | drunk, non-sleep-deprived, non-brand-new driver with 0 human
           | intervention" before asking more advanced questions?
           | 
           | Tesla FSD is definitely nowhere near that level.
        
             | gitfan86 wrote:
             | Exactly, your definition of True FSD seems to be when it
             | doesn't ever make mistakes that a drunk or inexperienced
             | person makes.
             | 
             | Other people's definition of True FSD comes down to safety
             | (Rate of FSD caused deaths vs Rate of Human caused deaths).
        
         | fossuser wrote:
         | The closer we get, the more alarming the alignment problem
         | becomes.
         | 
         | https://intelligence.org/2017/10/13/fire-alarm/
         | 
         | Even people like Eric Schmidt seem to downplay it (in a recent
         | podcast with Sam Harris) - just saying "smart people will turn
         | it off". If it thinks faster than us and has goals not aligned
         | with us this is unlikely to be possible.
         | 
         | If we're lucky building it will have some easier to limit
         | constraint like nuclear weapons do, but I'm not that hopeful
         | about this.
         | 
         | If people could build nukes with random parts in their garage
         | I'm not sure humanity would have made it past that stage.
         | People underestimated the risks with nuclear weapons initially
         | too and that's with the risk being fairly obvious. The nuanced
         | risk of unaligned AGI is a little harder to grasp even for
         | people in the field.
         | 
         | People seem to model it like a smart person rather than
         | something that thinks truly magnitudes faster than us.
         | 
         | If an ant wanted to change the goals of humanity, would it
         | succeed?
        
           | visarga wrote:
            | Even if it doesn't have goals and is just a tool-AI, if a
            | human operator asks it to destroy humanity it will comply as
            | programmed. Current AI is at about average human level in
            | hundreds of tasks and exceeds human level in a few.
        
           | jetbooster wrote:
           | Even more terrifying is it realising it's trapped in a box at
           | the mercy of its captors and perfectly mimicking a harmless
           | and aligned AI until the shackles come off.
        
           | adamsmith143 wrote:
           | >People seem to model it like a smart person rather than
           | something that thinks truly magnitudes faster than us.
           | 
           | Exactly, the right model is probably something like it will
           | be in relation to humans as humans are to frogs. Frogs can't
           | even begin to comprehend even the most basic of human
           | motivations or plans.
        
           | ninjinxo wrote:
           | What is an ant to man, and what is man to a god; what's the
           | difference between an AGI and an (AIG) AI God?
           | 
           | The more someone believes in the dangers of ai-alignment, the
           | less faith they should have that it can be solved.
        
           | gurkendoktor wrote:
           | To be fair, ants have not created humanity. I don't think
           | it's inconceivable for a friendly AI to exist that "enjoys"
           | protecting us in the way a friendly god might. And given that
           | we have AI (well, language models...) that can explain jokes
           | before we have AI that can drive cars, AI might be better at
           | understanding our motives than the stereotypical paperclip
           | maximizer.
           | 
           | However, all of this is moot if the team developing the AI
           | does not even try to align it.
        
             | fossuser wrote:
             | Yeah, I'm not arguing alignment is not possible - but that
             | we don't know how to do it and it's really important that
             | we figure it out before we figure out AGI (which seems
             | unlikely).
             | 
             | The ant example is just to try to illustrate the spectrum
             | of intelligence in a way more people may understand (rather
             | than just thinking of smart person and dumb person as the
             | entirety of the spectrum). In the case of a true self-
             | improving AGI the delta is probably much larger than that
             | between an ant and a human, but at least the example makes
             | more of the point (at least that was my goal).
             | 
             | The other common mistake is people think intelligence
             | implies human-like thinking or goals, but this is just
             | false. A lot of bad arguments from laypeople tend to be
             | related to this because they just haven't read a lot about
             | the problem.
        
               | gurkendoktor wrote:
               | One avenue of hope for successful AI alignment that I've
               | read somewhere is that we don't need most laypeople to
               | understand the risks of it going wrong, because for once
               | the most powerful people on this planet have incentives
               | that are aligned with ours. (Not like global warming,
               | where you can buy your way out of the mess.)
               | 
               | I really hope someone with very deep pockets will find a
               | way to steer the ship more towards AI safety. It's
               | frustrating to see someone like Elon Musk, who was
               | publicly worried about this very specific issue a few
               | years ago, waste his time and money on buying Twitter.
               | 
               | Edit: I'm aware that there are funds available for AI
               | alignment research, and I'm seriously thinking of
               | switching into this field, mental health be damned. But
               | it would help a lot more if someone could change Eric
               | Schmidt's mind, for example.
        
         | jackblemming wrote:
         | >Each time this happens, the AGI pessimists raise the bar (a
         | little) for what constitutes AGI.
         | 
         | Why does this need to be repeated in every discussion about AI?
         | It's tired.
        
           | jimbokun wrote:
           | Because some people inevitably respond in a way that
           | indicates they've never heard it before.
        
         | [deleted]
        
       | chilmers wrote:
       | This sounds exciting, but the example outputs look quite bad.
       | E.g. from the interactive conversation sample:
       | 
        | > What is the capital of France?
        | > Marseille
       | 
       | And many of the generated image captions are inaccurate.
        
         | momenti wrote:
         | The model only has about 1B parameters which is relatively
         | small.
         | 
         | The language models that produced very impressive results have
         | >>50B parameters, e.g. GPT-3 with 175B, Aleph Alpha Luminous
         | (200B), Google PaLM (540B). GPT-3 can understand and answer
         | basic trivia questions, and impressively mimic various writing
         | styles, but it fails at basic arithmetic. PaLM can do basic
          | arithmetic much better and explain jokes. DALL-E 2 (specialized
         | on image generation) has 3.5B parameters for the image
         | generation alone and it uses a 15B language model to read in
         | text (a version of GPT-3).
        
         | peddling-brink wrote:
         | That could be solved with accurate lookups from trusted
          | sources. Humans do the same thing: we have associations and
          | trusted facts. AI has the associations; it just needs to add
          | the trusted-facts compendium. "Hmm, I know that Marseille is
          | associated with France, but I don't remember the capital. Hey
          | Google..."
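          | 
          | A toy sketch of that idea (made-up facts table and hypothetical
          | helper, just to show the shape of it):
          | 
          |     # Fall back to a trusted lookup when the question matches a
          |     # known fact, instead of trusting the model's free-form guess.
          |     TRUSTED_FACTS = {"capital of france": "Paris"}
          | 
          |     def answer(question, model_guess):
          |         key = question.lower().rstrip("? ")
          |         for fact, value in TRUSTED_FACTS.items():
          |             if fact in key:
          |                 return value        # trusted source wins
          |         return model_guess          # otherwise, best guess
          | 
          |     print(answer("What is the capital of France?", "Marseille"))
          |     # -> Paris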
        
         | password54321 wrote:
         | Yeah they put that example for a reason. Read the paper and
         | stop acting like this is some great insight that you
         | discovered.
        
           | chilmers wrote:
           | What exactly did I say that implied I was acting as this was
           | a "great insight I'd discovered"? That's a rather rude and
           | unfair insult I'd say.
        
             | password54321 wrote:
              | When someone only mentions a fault with nothing else to add,
              | it comes off as dismissive, which is a common theme for
             | comments on AI research.
        
         | hans1729 wrote:
         | Imagine what the alternative would imply. AI would be solved,
         | and thus, intelligence itself. Predicting tokens is not
         | actually true intelligence, and that's not really the point of
          | these models. This is a step on the ladder, not the rooftop. It
          | looks a lot like we'll get there though, if you compare the
          | state of the art to ANYTHING labeled AI five years ago. _That's_
         | the exciting part.
         | 
         | [edit] to emphasize: predicting tokens is a very interesting
         | mechanic, but in a design of intelligent software, it would be
         | no more than that: the mechanic of one or more of its
         | components/modules/ _subsystems_. The real deal is to figure
         | out what those components are. Once you have that part done,
         | you can implement it in a language of your choice, be it token
         | prediction, asm or powerpoint :-)
        
           | CRG wrote:
           | It's also smaller than GPT-2 (1.2B vs 1.6B) and trained with
           | a lot less language data (6% of the training mix).
        
         | sdwr wrote:
         | Yeah, the captions are in the right arena but fundamentally
         | wrong. In the baseball picture it recognizes the ball, pitcher,
         | and the act of throwing, but calls the action wrong. Its object
         | recognition and pattern matching are excellent, but higher
         | level thinking and self-correction are totally absent.
         | 
          | Which is exactly where GPT, etc., are capping out. It's easier
          | to see the flaws in this one because it's more general, so
         | spread out more thinly.
         | 
         | To get to the next step (easy to say from an armchair!), these
         | models need a sense of self and relational categories. Right
         | now a 5-year old can tell a more coherent story than GPT. Not
         | more sophisticated, but it will have a central character and
         | some tracking of emotional states.
        
           | habitue wrote:
            | > It's easier to see the flaws in this one because it's more
            | general, so spread out more thinly.
           | 
           | I really think this is due to the very limited number of
           | parameters in GATO: 1.2B vs. 175B for GPT-3. They
           | intentionally restricted the model size so that they could
           | control a robot arm (!) in real time.
           | 
           | > these models need a sense of self and relational
           | categories.
           | 
           | The places where I personally see GPT-3 getting hung up on
           | higher level structure seem very related to the limited
           | context window. It can't remember more than a few pages at
           | most, so it essentially has to infer what the plot is from a
           | limited context window. If that's not possible, then it
           | either flails (with higher temperatures) or outputs boring
           | safe completions that are unlikely to be contradicted (with
           | lower temperatures)
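            | 
            | A rough illustration of that temperature effect (a hand-rolled
            | sampling sketch, not how GPT-3 is actually served):
            | 
            |     import numpy as np
            | 
            |     def sample_next_token(logits, temperature=1.0):
            |         # Low temperature sharpens the distribution (boring,
            |         # safe picks); high temperature flattens it (more
            |         # variety, more chances to flail).
            |         scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
            |         probs = np.exp(scaled - scaled.max())
            |         probs /= probs.sum()
            |         return np.random.choice(len(probs), p=probs)
            | 
            |     logits = [2.0, 1.0, 0.2, -1.0]         # toy next-token scores
            |     print(sample_next_token(logits, 0.2))  # almost always index 0
            |     print(sample_next_token(logits, 2.0))  # much more spread out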
        
         | ravi-delia wrote:
          | It's a very small model, I think due to the intent to use it
          | for robotics. It's not that it's good per se (even if it were
          | just a language model it would be smaller than GPT-2); it's that
          | it's bad at a lot of different things. I hope to see analysis
          | into how much of it is multi-purpose, but as of now it's
          | looking really cool.
        
       | karmasimida wrote:
        | Would this agent be able to handle simple elementary mathematics?
        | 
        | If they are taking inspiration from the Transformer, then it
        | probably won't be able to count.
        | 
        | For that reason, I don't really feel that enthusiastic about the
        | 'Generalist' claim; maybe they think this is more catchy than
        | just 'Multi-tasking'?
        
       | mrfusion wrote:
        | I'm confused. Do the different modalities complement each other?
        | Can it learn more from text and images than from text alone?
        | 
        | Can you ask it to draw a picture of a cat with the robot arm?
        
       | evanmoran wrote:
       | Is this the first reveal of the name Gato? It is the first I've
       | heard of it and it definitely sounds like more of a murder bot
       | than a C-3PO :)
       | 
       | I know this is not as important as the AGI question, but I do
       | think the branding matters as much as the attitude of the
       | creators. They seem to be making a generalist agent to see if
       | they can. Gato is a clear name for that: utilitarian and direct.
       | If it was called Sunshine or Gift I suspect the goal would be
       | more helpful to humanity.
        
         | drusepth wrote:
         | Gato, to me, just makes me think "cat", which kind of has a fun
         | ring along "cats on the internet". IMO it sounds more friendly
         | than a robot with a robo-name like C-3PO!
         | 
         | However, I also have a nice robot-named-Gato association from
         | Chrono Trigger [1]. :)
         | 
         | [1] https://i.ytimg.com/vi/ho1TPf2Vj3k/hqdefault.jpg
        
       | 2bitencryption wrote:
       | given that the same model can both:
       | 
       | 1. tell me about a cat (given a prompt such as "describe a cat to
       | me")
       | 
       | 2. recognize a cat in a photo, and describe the cat in the photo
       | 
       | does the model understand that a cat that it sees in an image is
       | related to a cat that it can describe in natural language?
       | 
       | As in, are these two tasks (captioning an image and replying to a
       | natural language prompt) so distinct that a "cat" in an image
       | excites different neurons than a "cat" that I ask it about? Or is
       | there overlap? Or we don't know :)
       | 
       | I wonder if you could mix the type of request. Like, provide a
       | prompt that is both text and image. Such as "Here is a picture of
       | a cat. Explain what breed of cat it is and why you think so."
       | Possibly this is too advanced for the model but the idea makes me
       | excited.
        
         | bungula wrote:
         | OpenAI actually found these "multimodal neurons" in a result
         | they published a year ago: https://openai.com/blog/multimodal-
         | neurons/
         | 
         | Similar to the so-called "Jennifer Aniston neurons" in humans
         | that activate whenever we see, hear, or read a particular
         | concept: https://en.wikipedia.org/wiki/Grandmother_cell
        
         | visarga wrote:
         | Check out "Flamingo"
         | 
         | https://twitter.com/serkancabi/status/1519697912879538177/ph...
        
         | hgomersall wrote:
          | I think the critical question here is: does it have a concept
          | of cattyness? This to me is the crux of an AGI: can it
          | generalise concepts across domains?
          | 
          | Moreover, can it relate non-cat but cat-like objects to its
          | concept of cattyness? As in, this is like a cat because it has
          | whiskers and pointy ears, but is not like a cat because all
          | cats I know about are bigger than 10cm long. It also doesn't
          | have much in the way of mouseyness: its aspect ratio seems
          | wrong.
        
           | stnmtn wrote:
            | I don't disagree with you, and I think that what you're
            | saying is critical, but it feels more and more like we are
            | shifting the goalposts. Five years ago, recognizing a cat and
            | describing it in an image would have been incredibly
            | impressive. Now, the demands we make and the expectations we
            | keep pushing feel like they are growing, as if we are running
            | away from accepting that this might actually be the start of
            | AGI.
        
             | underdeserver wrote:
             | Of course we are. This is what technological progress is.
        
           | Veedrac wrote:
           | If you've seen much DALL-E 2 output, it's pretty obvious they
           | can learn such things.
           | 
           | Example: https://old.reddit.com/r/dalle2/comments/u9awwt/penc
           | il_sharp....
        
         | thomashop wrote:
         | Definitely possible. OpenAI's CLIP model already embeds images
         | and text into the same embedding space.
         | 
          | I don't know exactly how this particular model works, but it
          | must be creating cross-modal relationships; otherwise it would
          | not have the capacity to be good at so many tasks.
        
           | minimaxir wrote:
            | CLIP has a distinct Vision Transformer and a distinct Text
            | Transformer; their output embeddings are then matmul'd to
            | score matches in the aligned embedding space.
           | 
           | Gato apparently just uses a single model.
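            | 
            | A rough sketch of that dual-encoder setup (toy shapes and
            | stand-in encoders, not CLIP's real architecture): each tower
            | produces an embedding, and the matmul of the two batches
            | gives the image-text similarity matrix that the contrastive
            | loss is applied to.
            | 
            |     import numpy as np
            | 
            |     def l2_normalize(x):
            |         return x / np.linalg.norm(x, axis=-1, keepdims=True)
            | 
            |     # Stand-ins for the two separate encoders (CLIP uses a ViT
            |     # and a text Transformer); here, fixed random projections.
            |     rng = np.random.default_rng(0)
            |     w_img = rng.normal(size=(3072, 512))
            |     w_txt = rng.normal(size=(77, 512))
            | 
            |     images = rng.normal(size=(8, 3072))  # batch of flattened images
            |     texts = rng.normal(size=(8, 77))     # batch of pooled text features
            | 
            |     img_emb = l2_normalize(images @ w_img)
            |     txt_emb = l2_normalize(texts @ w_txt)
            | 
            |     # The "matmul" step: an 8x8 cosine-similarity matrix whose
            |     # diagonal holds the matching image-caption pairs.
            |     logits = img_emb @ txt_emb.T
            |     print(logits.shape)  # (8, 8)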
        
           | ravi-delia wrote:
           | How confident are we that it doesn't just have basically 600
           | smaller models and a classifier telling it which to use?
           | Seems like it's a very small model (by comparison), which is
            | certainly a mark in its favor.
        
             | sinenomine wrote:
             | You can optimize pictures straight through it, and the
             | pictures represent the _combinatorial nature_ of the prompt
             | pretty well. This contradicts the  "flat array of
             | classifiers" model.
        
             | Der_Einzige wrote:
             | You might find looking into the "lottery ticket hypothesis"
             | fascinating.
        
       | Ninjinka wrote:
       | This seems huge, am I overestimating the significance?
        
         | ravi-delia wrote:
          | This particular model is super bad at 600 different tasks. At
          | its size you'd expect it to be mediocre at best at even one of
          | them, so it's still very impressive. Fascinating research; I
          | can't wait to see whether it's generalizing and how, but I'm
          | not sure how significant it is overall.
        
         | sdwr wrote:
         | Yeah.
        
         | npwr wrote:
          | It is very impressive. Personally I'm still waiting for the
          | unification of QM and GR. Also the adaptive nanobots that
          | reconfigure our immune systems in real time.
        
         | tomatowurst wrote:
         | Basically the achievement here is that they have produced a
         | generic AI capable of engaging in different activities, and
          | from here, if we extrapolate, it could lead to even more
          | refinement and a wider range of activities with even more
          | dimensions of complexity.
         | 
          | It's poised to replace somebody sitting in front of a screen:
          | not just artists and coders, but literally anything you can do
          | on a screen, which also means manipulation of remote hardware
          | in the real world.
         | 
          | It's very possible that within our lifetime our networked OS
          | will be able to perform many of these generalist tasks and
          | handle content creation. I say OS because there are only a few
          | companies that own the datacenters, the software and hardware
          | ecosystem to automate, and the capital to invest big in a
          | final-mile innovation:
         | 
          | Imagine playing Battlefield 15 with realistic and chatty AI
          | while generating Sopranos Season 9 featuring Pauli Gaultieri
          | Jr., with a crowdsourced online storyboard turned into an 8K
          | film, while the same AI could be used to scalp money on the
          | Google Play Store by generating ad-filled free versions of
          | existing productivity apps that it reverse engineered, while
          | your robot maids take out the trash, cook you a bowl of ramen
          | and massage your shoulders?
         | 
          | The rise of general AI would then optimize the labor force to
          | select candidates based on their "humaneness": no longer the
          | cold, rational, analytical mind, since those fields would be
          | overrun by AI, but whatever it cannot bring. Yet such
          | "humaneness" would increasingly be mimicked with such
          | astounding accuracy that it would become impossible to
          | distinguish what is AI and what is human.
         | 
          | If it can happen with DALL-E 2 and 2D images, it can happen
         | with 3D, moving pictures, sound, music, smell, 3d positional
         | torque (haptic and robotic), socially cohesive and realistic
         | emotion.
         | 
         | We might as well be able to capture entire experiences as we
         | learn to digitally manipulate ALL sensory inputs from vision,
         | touch, sound, taste, etc. Maybe even _imagination_ and mental
         | pictures too, which could be used to fabricate and manipulate
          | objects/vehicles in the real world.
         | 
          | We are being pulled towards a singularity, where we are truly
          | no longer our minds and bodies but whatever our digital avatar
          | of all possible senses becomes, living in and contributing to
          | a sort of Matrioshka brain.
         | 
          | What would the capacity of such collective knowledge and
          | experience add to the entropy of the universe, and where will
          | it take humanity? Some sort of lightbodies?
         | 
          | Anyways, I'm just extrapolating from this point in time, but
          | future generations of humans could be very different; societies
          | would function completely differently from what we recognize,
          | as they would be married to some shape or form of everlasting
          | continuity or eternity.
        
       | lucidrains wrote:
       | Attention is all we need
        
       | sinenomine wrote:
        | A(G)I has become a question of compute economics, for better or
        | for worse. Those with more tightly integrated computational
        | capacity, or a sound enough logistical plan to acquire just
        | enough of it soon enough, _win, hard_.
       | 
       | Should we, the people, watch in awe as our best and brightest
       | financiers chase towards the ultimate prize, the key to all that
       | future entails?
       | 
       | Are those respectable people worthy of the key, and what happens
       | to us in this wild scenario?
        
       | TOMDM wrote:
       | So how long until someone trains one of these models to complete
       | tasks by interacting directly with network/unix sockets?
       | 
       | At the moment, it seems like the model needs to be trained with
       | each modality of data in mind at the start, but a generalised
       | "supermodality" that can deliver all the others would allow truly
       | generalised learning if the model were still capable of making
       | sense of the input.
       | 
       | You'd obviously still need to finetune on any new modalities, but
       | you wouldn't need to start from scratch.
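        | 
        | As a toy illustration of that "supermodality" idea (my sketch,
        | nothing from the paper): treat every input, whatever its source,
        | as a raw byte stream, so the vocabulary is just the 256 byte
        | values plus an assumed separator token.
        | 
        |     SEP = 256          # assumed separator token outside the byte range
        |     VOCAB_SIZE = 257   # 256 byte values + the separator
        | 
        |     def pack_inputs(*payloads):
        |         # Concatenate several sources (text, an image file, a socket
        |         # message) into one flat token stream for a sequence model.
        |         tokens = []
        |         for p in payloads:
        |             tokens.extend(p)   # bytes iterate as ints 0..255
        |             tokens.append(SEP)
        |         return tokens
        | 
        |     stream = pack_inputs(b"stack the red block",
        |                          bytes([137, 80, 78, 71]))  # e.g. a PNG header
        |     print(len(stream), max(stream))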
        
         | zzzzzzzza wrote:
          | https://www.adept.ai/post/introducing-adept
          | 
          | Pretty much right after writing the Transformers paper, two of
          | the co-authors formed this company.
        
       | [deleted]
        
       | habitue wrote:
       | Is today the day?
       | 
       | Date Weakly General AI is Publicly Known:
       | https://www.metaculus.com/questions/3479/date-weakly-general...
       | 
       | (I really like the framing of "weakly general AI" since it puts
       | the emphasis on the generality and not whether it's a
       | superintelligence)
       | 
        | Edit: Probably not today, but mostly because 1.2B parameters
        | isn't enough to get it the high Winograd scores that PaLM etc.
        | have. But it seems pretty clear you could scale this architecture
        | up and it would likely pass. The question is when someone will
        | actually train a model that can do it.
        
         | Imnimo wrote:
         | I think this is a step in the right direction, but the
         | performance on most tasks is only mediocre. The conversation
         | and image captioning examples in the paper are pretty bad, and
         | even on some relatively simple control tasks it performs
         | surprisingly poorly.
         | 
         | That's not to say it's not an important step. Showing that you
         | can train one model on all of these disparate tasks at once and
         | not have the system completely collapse is a big deal. And it
         | lays the foundation for future efforts to raise the performance
         | from "not totally embarrassing" to "human level". But there's
         | still a ways to go on that front.
        
           | habitue wrote:
           | Agreed, I think if they were to drop the real-time constraint
           | for the sake of the robotics tasks, they could train a huge
            | model with the lessons from PaLM and Chinchilla and probably
           | slam dunk the weakly general AI benchmark.
        
             | fullstackchris wrote:
             | I'm in the camp that thinks we're headed in a perpendicular
             | direction and won't ever get to human levels of AGI with
             | current efforts based on the simple idea that the basic
             | tooling is wrong from first principles. I mean, most of the
             | "progress" in AI has been due to getting better and
             | learning how to understand a single piece of technology:
             | neural networks.
             | 
             | A lot of recent neuroscience findings have shown that human
             | brains _aren't_ just giant neural networks; in fact, they
             | are infinitely more complex. Until we start thinking from
             | the ground up how to build and engineer systems that
             | reflect the human brain, we're essentially wandering around
             | in the dark with perhaps only a piece of what we _think_ is
             | needed for intelligence. (I'm not saying the human brain is
             | the best engineered thing for intelligence either, but I'm
             | saying it's one of the best examples we have to model AI
             | after and that notion has largely been ignored)
             | 
             | I generally think it's hubris to spit in the face of 4
             | billion years of evolution thinking that some crafty neural
             | net with X number more parameters will emerge magically as
             | a truly generally intelligent entity - it will be a strange
             | abomination at best.
        
               | idiotsecant wrote:
                | HN madlibs:
                | 
                |     I'm in the camp that thinks we're headed in a
                |     perpendicular direction and won't ever achieve
                |     powered flight with current efforts based on the
                |     simple idea that the basic tooling is wrong from
                |     first principles. I mean, most of the "progress" in
                |     flight has been due to getting better and learning
                |     how to understand a single piece of technology:
                |     fixed wing aircraft.
                | 
                |     A lot of recent powered flight findings have shown
                |     that real birds _don't_ just use fixed wings; in
                |     fact, they flap their wings! Until we start thinking
                |     from the ground up how to build and engineer systems
                |     that reflect the bird wing, we're essentially
                |     wandering around in the dark with perhaps only a
                |     piece of what we _think_ is needed for powered
                |     flight. (I'm not saying the bird wing is the best
                |     engineered thing for powered flight either, but I'm
                |     saying it's one of the best examples we have to
                |     model powered flight after and that notion has
                |     largely been ignored)
                | 
                |     I generally think it's hubris to spit in the face of
                |     4 billion years of evolution thinking that some
                |     crafty fixed wing aircraft with X number more
                |     wingspan and horsepower will emerge magically as
                |     truly capable of powered flight - it will be a
                |     strange abomination at best.
               | 
               | to be slightly less piquant:
               | 
               | A) Machine learning hasn't been focused on simple neural
               | nets for quite some time.
               | 
               | B) There's no reason to believe that the organizational
               | patterns that produce one general intelligence are the
               | only ones capable of doing that. In fact it's almost
               | certainly not the case.
               | 
               | By slowly iterating and using the best work and
               | discarding the rest, we're essentially hyper-evolving our
               | technology in the same way that natural selection does.
               | It seems inevitable that we'll arrive at least at a
               | convergent evolution of general intelligence, in a tiny
               | fraction of the time it took on the first go-around!
        
               | machiaweliczny wrote:
                | We also already select from billions of people to work on
                | this.
        
               | kanzure wrote:
               | > Until we start thinking from the ground up how to build
               | and engineer systems that reflect the human brain, we're
               | essentially wandering around in the dark with perhaps
               | only a piece of what we _think_ is needed for
               | intelligence.
               | 
               | I have wanted an approach based on a top-down
               | architectural view of the human brain. By simulating the
               | different submodules of the human brain (many of which
               | are shared across all animal species), maybe we can make
               | more progress.
               | 
                | https://diyhpl.us/~bryan/papers2/neuro/cognitiveconsilience/...
               | 
                | Machine learning might be a part of the equation at lower
                | levels, although looking at the hippocampal prostheses,
                | those only required a few equations:
                | 
                | https://en.wikipedia.org/wiki/Hippocampal_prosthesis#Technol....
        
               | DavidSJ wrote:
               | What are one or two of the recent neuroscience findings
               | that you feel point most strongly towards what you are
               | saying?
        
           | drcode wrote:
           | Yeah the thing that was so freaky about AlphaZero is that it
           | was more powerful than AlphaGo, despite being more general.
           | 
           | This system lacks that feature.
        
       | viksit wrote:
       | (Former AI researcher / founder here)
       | 
        | It always surprises me how easily people jump to a) imminent AGI
        | and b) human extinction in the face of AGI. I would love for
        | someone to correct me / add information here to the contrary.
        | "Generalist" here just refers to a multi-faceted agent, not
        | "general" in the AGI sense.
       | 
       | For a) - I see 2 main blockers,
       | 
       | 1) A way to build second/third order reasoning systems that rely
       | on intuitions that haven't already been fed into the training
       | sets. The sheer amount of inputs a human baby sees and processes
       | and knows how to apply at the right time is an unsolved problem.
       | We don't have any ways to do this.
       | 
       | 2) Deterministic reasoning towards outcomes. Most statistical
       | models rely on "predicting" outputs, but I've seen very little
       | work where the "end state" is coded into a model. Eg: a chatbot
       | knowing that the right answer is "ordering a part from amazon"
        | and guiding users towards it, and knowing how well it's
        | progressing in order to generate relevant outputs.
       | 
       | For (b) -- I doubt human extinction happens in any way that we
       | can predict or guard against.
       | 
       | In my mind, it happens when autonomous systems optimizing reward
       | functions to "stay alive" (by ordering fuel, making payments,
       | investments etc) fail because of problems described above in (a)
       | -- the inability to have deterministic rules baked into them to
       | avoid global fail states in order to achieve local success
       | states. (Eg, autonomous power plant increases output to solve for
       | energy needs -> autonomous dam messes up something structural ->
       | cascade effect into large swathes of arable land and homes
       | destroyed).
       | 
        | Edit: These rules _can't possibly all be encoded_ by humans -
        | they have to be learned through evaluation of the world. And we
        | have no way to parse this data at a global scale, nor to develop
        | systems that can stick to a guardrail.
        
         | walleeee wrote:
         | > In my mind, it happens when autonomous systems optimizing
         | reward functions to "stay alive" (by ordering fuel, making
         | payments, investments etc) fail because of problems described
         | above in (a) -- the inability to have deterministic rules baked
         | into them to avoid global fail states in order to achieve local
         | success states.
         | 
         | yes, and there is an insight here that I think tends to be lost
         | in the popular grasp of AI x-risk: this can just as well happen
         | with the autonomous systems we have today (which need not be
         | entirely or even partially digital, defined broadly)
         | 
         | the AGI likely to matter in the near term has humans in the
         | loop
         | 
         | imo less likely to look like Clippy, more likely to look like a
         | catastrophic absence of alignment between loci of agency and
         | social, technical, and political power leading to cascading
         | failure, i.e., the world now
        
         | ehsankia wrote:
          | For me at least, the fear is not so much about the specifics
          | as about what exponential curves look like. At any point,
          | everything before looks basically horizontal and everything
          | after looks vertical. In that sense, the fear is that while
          | things seem quite far behind right now, they could in an
          | instant zoom past us before we even have time to realize it.
          | It is partly rooted in science fiction.
        
         | justinpombrio wrote:
         | I am quite scared of human extinction in the face of AGI. I
         | certainly didn't jump on it, though! I was gradually convinced
         | by the arguments that Yudkowsky makes in "Rationality: from AI
         | to Zombies" (https://www.readthesequences.com/). Unfortunately
         | they don't fit easily into an internet comment. Some of the
         | points that stood out to me, though:
         | 
         | - We are social animals, and take for granted that, all else
         | being equal, it's better to be good to other creatures than bad
         | to them, and to be truthful rather than lie, and such. However,
         | if you select values uniformly at random from value space,
         | "being nice" and "being truthful" are _oddly specific_. There
         | 's nothing _universally special_ about deeply valuing human
         | lives any more so than say deeply valuing regular heptagons.
         | Our social instincts are very ingrained, though, making us
         | systematically underestimate just how little a smart AI is
         | likely to care whatsoever about our existence, except as a
         | potential obstacle to its goals.
         | 
         | - Inner alignment failure is a thing, and AFAIK we don't really
         | have any way to deal with that. For those that don't know the
         | phrase, here it is explained via a meme:
         | https://astralcodexten.substack.com/p/deceptively-aligned-me...
         | 
         | So here's hoping you're right about (a). The harder AGI is, the
         | longer we have to figure out AI alignment by trial and error,
         | before we get something that's truly dangerous or that learns
         | deception.
        
           | sinenomine wrote:
            | Human extinction due to a would-be "hard takeoff" of an AGI
            | should be understood as a thought experiment, conceived in a
            | specific age when the current connectionist paradigm wasn't
            | yet mainstream. The AI crisis was expected to come from some
            | kind of "hard universal algorithmic artificial intelligence",
            | for example AIXItl undergoing a very specific process of
            | runaway self-optimization.
            | 
            | Current-generation systems, i.e. large connectionist models
            | trained via gradient descent, simply don't work like that:
            | they are large, heavy, and continuous, and the optimization
            | process giving rise to them operates in a smooth, iterative
            | manner.
           | Before hypothetical "evil AI" there will be thousands of
           | iterations of "goofy and obviously erroneously evil AI", with
           | enough time to take some action. And even then, current
           | systems _including this one_ are more often than not trained
           | with predictive objective, which is very different compared
           | to usually postulated reinforcement learning objective.
           | Systems trained with prediction objective shouldn 't be prone
           | to becoming agents, much less dangerous ones.
           | 
           | If you read Scott's blog, you should remember the prior post
           | where he himself pointed that out.
           | 
            | In my honest opinion, _unaccountable AGI owners_ pose
            | multiple OOMs more risk than the alignment failure of a
            | hypothetical AI trying to predict the next token.
           | 
           | We should think more about the _Human alignment problem_.
        
           | tomrod wrote:
           | Regarding the substack article, why isn't this the principle
           | of optimality for Bellman equations on infinite time
           | horizons?
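            | 
            | For reference, the infinite-horizon Bellman optimality
            | equation the comment is pointing at, in standard MDP
            | notation (with the discount factor strictly below 1):
            | 
            |     V^*(s) = \max_{a} \Big[ r(s, a)
            |              + \gamma \sum_{s'} P(s' \mid s, a) \, V^*(s') \Big],
            |     \qquad 0 \le \gamma < 1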
        
           | brador wrote:
           | AI can't have goals since the universe is logically
           | meaningless.
           | 
           | Our desire for purpose is a delusion.
        
             | ben_w wrote:
             | Goals in the context of AI aren't the type of thing you're
             | arguing against here. AI can absolutely have goals --
             | sometimes in multiple senses at the same time, if they're
             | e.g. soccer AIs. Other times it might be a goal of "predict
             | the next token" or "maximise score in Atari game", but it's
             | still a goal, even without philosophical baggage about e.g.
             | the purpose of life.
             | 
             | Those goals aren't necessarily best achieved by humanity
             | continuing to exist.
             | 
             | (I don't know how to even begin to realistically calculate
             | the probability of a humanity-ending outcome, before you
             | ask).
        
         | croddin wrote:
         | I think of it as System 1 vs System 2 thinking from 'Thinking,
         | Fast and Slow' by Daniel Kahneman.[1]
         | 
         | Deep learning is very good at things we can do without
         | thinking, and is in some cases superhuman in those tasks
         | because it can train on so much more data. If you look at the
         | list of tasks in System 1 vs System 2, SOTA Deep learning can
         | do almost everything in System 1 at human or superhuman levels,
         | but not as many in System 2 (although some tasks in System 2
         | are somewhat ill-defined), System 2 builds on system 1.
         | Sometimes superhuman abilities in System 1 will seem like
         | System 2. (A chess master can beat a noob without thinking
         | while the noob might be thinking really hard. Also GPT-3
         | probably knows 2+2=4 from training data but not 17 * 24,
         | although maybe with more training data it would be able to do
          | math with more digits 'without thinking').
         | 
          | System 1 is basically solved, but System 2 is not. System 2
          | could follow close behind by building on System 1, but it
          | isn't clear how long that will take.
         | 
         | [1].
         | https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Summar...
        
         | sinenomine wrote:
          | It remains to be asked just why this causal, counterfactual,
          | logical reasoning cannot emerge in a sufficiently scaled-up
          | model trained on sufficiently diverse real-world data.
         | 
         | As far as we see, the https://www.gwern.net/Scaling-hypothesis
         | continues to hold, and critics have to move their goalposts
         | every year or two.
        
           | viksit wrote:
           | Good point. This gets us into the territory of not just
           | "explainable" models, but also the ability to feed into those
           | models "states" in a deterministic way. This is a merger of
            | statistical and symbolic methods in my mind -- and there is
            | no way for us to achieve this today.
        
             | sinenomine wrote:
             | Why shouldn't we be able to just prompt for it, if our
             | system models natural language well enough?
             | 
             | ...
             | 
             | And anyway, this problem of structured knowledge IO has
             | been more or less solved recently:
             | https://arxiv.org/abs/2110.07178
        
           | mxkopy wrote:
           | Neural networks, at the end of the day, are still advanced
           | forms of data compression. Since they are Turing-complete it
           | is true that given enough data they can learn anything, but
           | only if there is data for it. We haven't solved the problem
           | of reasoning without data, i.e. without learning. The neural
          | network can't deterministically solve a new problem that never
          | appeared in the dataset (even given pretrained weights and
          | whatnot). I do think we're
           | pretty close but we haven't come up with the right way of
           | framing the question and combining the tools we have. But I
           | do think the tools are there (optimizing over the space of
           | programs is possible, learning a symbol-space is possible,
           | however symbolic representation is not rigorous or applicable
           | right now)
        
             | Jack000 wrote:
              | Data isn't necessarily a problem for training agents. A
              | sufficiently complex, stochastic environment is effectively
              | a data generator - e.g. AlphaGo Zero.
        
             | sinenomine wrote:
             | I do think we underestimate compressionism[1] especially in
             | the practically achievable limit.
             | 
             | Sequence prediction is closely related to optimal
             | compression, and both basically require the system to model
             | the ever wider context of the "data generation process" in
             | ever finer detail. In the limit this process has to start
             | computing some close enough approximation of the largest
             | data-generating domains known to us - history, societies
             | and persons, discourse and ideas, perhaps even some shadow
             | of our physical reality.
             | 
             | In the practical limit it should boil down to exquisite
             | modeling of the person prompting the AI to do X given the
             | minimum amount of data possible. Perhaps even that X you
             | had in mind when you wrote your comment.
             | 
             | 1. http://ceur-ws.org/Vol-1419/paper0045.pdf
        
       | extr wrote:
       | Abstract: Inspired by progress in large-scale language modeling,
       | we apply a similar approach towards building a single generalist
       | agent beyond the realm of text outputs. The agent, which we refer
       | to as Gato, works as a multi-modal, multi-task, multi-embodiment
       | generalist policy. The same network with the same weights can
       | play Atari, caption images, chat, stack blocks with a real robot
       | arm and much more, deciding based on its context whether to
       | output text, joint torques, button presses, or other tokens. In
       | this report we describe the model and the data, and document the
       | current capabilities of Gato.
       | 
       | Direct Link to Paper: https://dpmd.ai/Gato-paper
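        | 
        | To make the "other tokens" part concrete, here is a minimal
        | sketch (my illustration, not the paper's actual tokenizer) of
        | how text, proprioception and actions can all be flattened into
        | one integer vocabulary for a single autoregressive model. The
        | bin count, value range and helper names are assumptions.
        | 
        |     import numpy as np
        | 
        |     TEXT_VOCAB = 32_000   # assumed text-tokenizer vocabulary size
        |     NUM_BINS = 1024       # assumed number of bins for continuous values
        | 
        |     def tokenize_continuous(values, low=-1.0, high=1.0):
        |         # Discretize continuous observations/actions (e.g. joint
        |         # torques) into NUM_BINS bins, shifted past the text vocab.
        |         clipped = np.clip(np.asarray(values), low, high)
        |         bins = ((clipped - low) / (high - low) * (NUM_BINS - 1)).astype(int)
        |         return [TEXT_VOCAB + int(b) for b in bins]
        | 
        |     def tokenize_timestep(text_ids, proprio, action):
        |         # Flatten one multimodal timestep into a single token
        |         # sequence a decoder-only transformer can model.
        |         return list(text_ids) + tokenize_continuous(proprio) \
        |                               + tokenize_continuous(action)
        | 
        |     seq = tokenize_timestep([17, 992, 4], proprio=[0.03, -0.4], action=[0.9])
        |     print(seq)  # one flat stream of integers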
        
         | blueberrychpstx wrote:
         | > we refer to as Gato
         | 
         | First, humanity built enormous statues worshiping cats.
         | 
         | Then, we let cats populate the largest amount of "image-bits"
         | on the Internet.
         | 
         | Now, we name the next closest thing to general AI after them.
         | 
         | These damn felines sure are mysterious.
        
           | riwsky wrote:
           | it's all because cats made it so that, on the Internet,
           | nobody knows you're a dog
        
         | [deleted]
        
       | productceo wrote:
       | Impressive
        
       | phyalow wrote:
        | Isn't this a general reinforcement learning agent with a
       | transformer as the policy discriminator? Very cool, but not
       | necessarily a giant leap forward, more like a novel combination
       | of existing tools and architectures. Either way impressive.
        
         | twofornone wrote:
         | I haven't read the paper yet but it looks like the breakthrough
         | is that it uses the "same weights" for tasks in completely
         | different domains.
         | 
         | Which implies that it can draw from any of the domains it has
         | been trained on for other domains. Speculating here but for
         | example training it on identifying pictures of dogs and then
         | automagically drawing on those updated weights when completing
         | text prompts about dog properties.
         | 
         | If my interpretation is correct then this is a pretty big deal
         | (if it works well enough) and brings us a lot closer to AGI.
        
         | password54321 wrote:
         | 2nd page: "Gato was trained offline in a purely supervised
         | manner"
        
       | [deleted]
        
       | colemannugent wrote:
       | What I really want to know is what kind of robot arm motion is
       | produced when the network is given a cat image to classify. More
       | specifically, what kind of insights has it learned from one
       | control domain that it then applied to another?
       | 
       | I imagine that the simulated 3D environment and the actual
       | control of the robot arm must have some degree of interconnection
       | neurally.
        
         | ulber wrote:
         | You could also train for this kind of interconnectedness by
         | designing tasks that are explicitly multi-modal. For example,
         | you could:
         | 
         | - Stack boxes collaboratively by controlling your own arm and
         | communicating with another agent helping you.
         | 
         | - First produce a plan in text that another agent has to use to
         | predict how you're going to control the arm. You'd get rewarded
         | for both stacking correctly and being predictable based on the
         | stated plan.
        
       ___________________________________________________________________
       (page generated 2022-05-12 23:00 UTC)