[HN Gopher] Google DeepMind
       ___________________________________________________________________
        
       Google DeepMind
        
       Author : random_moonwalk
       Score  : 495 points
       Date   : 2023-04-20 17:11 UTC (5 hours ago)
        
 (HTM) web link (www.deepmind.com)
 (TXT) w3m dump (www.deepmind.com)
        
       | alecco wrote:
        | Microsoft was smart in letting OpenAI keep doing their thing.
       | Pichai seems to have chosen to micromanage DeepMind. The board
       | should find an actual CEO ASAP.
        
       | woeirua wrote:
       | This move makes sense from the perspective that DeepMind has some
       | street cred in their ability to produce novel models that solve
       | interesting problems. The only issue is that DeepMind has also
       | suffered from the same problems that the mothership has: an
       | inability to execute. Are there any documented success stories of
       | DeepMind making serious money off their models? They've been
       | great at producing interesting and valuable research, but all of
       | their partnerships have failed as far as I know.
       | 
        | Google's screwed because LLMs offer a fundamentally different
        | business model for search, and I'm not convinced that you can
        | actually make a company out of LLMs that is as wildly
        | profitable as Google was during its heyday. If that's true, then
       | I just don't see how any CEO could go to the shareholders and
       | say: "in order for us to survive, we have to accept that we're
       | going to be a much smaller company in 5 years, both in terms of
       | head count and profit." Sundar would be overthrown in a matter of
       | days.
        
         | rcme wrote:
          | Was making money a goal for DeepMind?
        
           | woeirua wrote:
           | I have to assume so, if not why else did Google acquire them?
        
       | xg15 wrote:
       | I guess Google is dancing alright.
        
       | FrustratedMonky wrote:
       | OpenAI, Musk's whatever the name, Google Mind, the other dozen
       | projects that spring up every single day. -- I just read Scott
        | Alexander's Meditations on Moloch for the first time, and this mad
       | rush to monetize AI seems to be right on track.
        
       | mason96 wrote:
       | [dead]
        
       | SilverBirch wrote:
       | I feel like this is a bad sign. What this announcement reads like
        | is "Hey! I won this internal political struggle!". OK, sure, but
        | it's not clear why anyone outside the company should take this
        | as good news. This announcement either means the AI effort
        | outside of Demis' team has been neutered, or they're lining
        | Demis up to be the scapegoat for missing AI. Remember - what
        | this announcement
       | means is that Demis now has a load of people reporting to him who
       | previously were rooting for his failure. Trying to synthesize
       | those two separate teams (half of which wanted you to fail) into
       | one productive and world-leading team is a hell of an ask.
        
         | [deleted]
        
         | jutrewag wrote:
         | This sounds like poor, unsubstantiated gossip and drama from
         | TMZ. Sounds like you have an axe to grind.
        
         | bookofjoe wrote:
         | File under: A. Lincoln, team of rivals
        
         | dougmwne wrote:
         | Yeah, absolutely lining him up to be the scapegoat. His chance
         | for success seems severely compromised and his mission was
         | always to invent AGI, not create some kind of lowly ad-search
         | product. The party sounds over for him and I bet he will be out
         | and off to the next research think-tank soon.
        
         | hervature wrote:
         | My outsider perspective is that DeepMind was the research arm
         | and Brain was specifically tasked with making the company money
          | through AI/ML applications. It appears to me that Google is
          | combining the two to make sure that DeepMind starts turning a
          | profit by adopting Brain's mission. Of course, DeepMind's
          | brand is orders of magnitude more valuable, so it makes sense
          | to keep the name around. Would be happy to hear more
          | knowledgeable takes on whether this is an incorrect reading
          | of the tea leaves.
        
           | williamcotton wrote:
           | Will we ever live in a world where people once again use
           | first-person pronouns?
        
             | sebzim4500 wrote:
             | I asked GPT-4 to do it and it scattered first-person
             | pronouns everywhere.
             | 
             | https://pastebin.pl/view/77273c05
             | 
             | Happy now?
        
               | williamcotton wrote:
               | In all seriousness, can we not agree that the version
               | with the correct grammar is a better aesthetic reading
               | experience?
        
             | JLCarveth wrote:
             | How is this relevant to the comment you were replying to?
        
               | williamcotton wrote:
               | What's the point of earning magic internet beans if not
               | to spend them pointlessly policing bad grammar?
        
           | loudmax wrote:
           | Google has (or until very recently had) some of the best
           | researchers in the industry. Google's problem isn't
           | developing new stuff, it's turning stuff they come up with
           | into a viable product and then developing a market around it.
           | All of their most successful products (search, gmail, maps,
           | youtube) were developed at least a decade ago. They've come
           | up with decent technology since then, but they seem to have
           | developed a catastrophic inability to actually build a
           | business around any of it. The failures of Google+, Duo/Allo,
           | Inbox, Stadia have nothing to do with technology and
           | everything to do with managerial incompetence.
           | 
           | Google could be sitting on the most advanced AI on the
           | planet, but none of that matters as long as they're under the
           | current leadership.
        
             | Workaccount2 wrote:
              | They made the enormous mistake of releasing Bard with a
              | lightweight model. It instantly made Google look like they
              | were way behind, and made Bard largely irrelevant.
              | 
              | Bard should have been limited access and used the absolute
              | most powerful model they had.
              | 
              | Now everyone is questioning whether Google can actually
              | compete with OpenAI at all, despite a decades-long head
              | start and far more research and funding.
        
           | spokesbeing wrote:
           | This is not correct. Both DeepMind and Brain had/have
           | separate applied groups. A lot of Brain research was/is not
            | product focused at all. Transformers, I'd say, are more
            | impactful than any other research innovation in the current
            | AI boom, and they came from Brain, not DM. DM does do great
            | PR.
        
         | manux wrote:
         | > Demis now has a load of people reporting to him who
         | previously _were rooting for his failure_
         | 
         | Having been in both places (Brain and DM), this feels so far
         | from what I experienced that I must ask, what are you basing
         | this on?
        
           | nr2x wrote:
            | Good managers insulate reports from the politics. If you
            | weren't plugged into it, either your manager did a good job
            | or it's the only part of Google that isn't 90% politics.
           | 
           | Signed, "didn't work at brain or dm but was involved in a lot
           | of alphabet level decision making".
        
           | uptownfunk wrote:
           | This reads pretty normal for big tech corporate politics.
        
             | khazhoux wrote:
             | I never like the word "politics." It carries the
             | association of a bunch of people just playing backstabbing
             | games to further themselves.
             | 
             | While this does occur, in general what I see is that with
             | any large-enough group of people, there will be strong
             | differences of opinions on how to steer the project to
             | success.
             | 
             | In fact, I don't think I can remember a single "political
             | battle" that didn't stem from a legitimate concern in how
             | some project was being run and what they had decided to
             | focus on.
        
               | uptownfunk wrote:
               | Like it or not, politics is pretty much your day to day
               | life at vp+ level at these companies.
               | 
               | But we can all pretend to live in idealism la la land
               | where everything is operating on someone's best
               | intention.
        
           | caminante wrote:
           | You worked there and didn't know of any of the intra-company
           | autonomy infighting that leaked into the news? [0] [1]
           | 
           | [0] https://techcrunch.com/2023/04/20/google-consolidates-ai-
           | res...
           | 
            | [1] https://www.wsj.com/articles/google-unit-deepmind-
           | triedand-f...
        
             | sangnoir wrote:
             | If you've worked at a large organization, you'll know the
             | news can paint a cartoonishly distorted picture largely
             | informed by the perspective of the anonymous sources,
             | journalist and news organization.
        
               | caminante wrote:
               | The WSJ article expressly considers that factor and goes
               | into detail on what's under the surface.
               | 
               |  _> The end of the long-running negotiations, which
               | hasn't previously been reported, is the latest example of
               | how Google and other tech giants are trying to strengthen
               | their control over the study and advancement of
               | artificial intelligence._
        
             | f5e4 wrote:
             | Could you provide a quote from either of these articles
             | that supports the statement being questioned:
             | 
             | > Demis now has a load of people reporting to him who
             | previously were rooting for his failure
        
               | caminante wrote:
               | Yes.
               | 
                | Further, I don't understand how explicit examples of
                | company infighting over autonomy don't already address
                | your point.
        
               | f5e4 wrote:
               | Fighting between Deepmind and Google leadership over
               | autonomy doesn't really directly support that Google
               | Brain employees and Deepmind had infighting. They seem to
               | me to be quite different things.
               | 
               | It seems like a big leap to take these articles as
                | supporting the statement:
               | 
               | > Demis now has a load of people reporting to him who
               | previously were rooting for his failure
               | 
               | It certainly might be true, but I'm missing the
               | connection between these articles and the statement.
        
               | caminante wrote:
               | _> They seem to me to be quite different things._
               | 
               | Only if you use vague standards like
               | 
               | -"doesn't really directly support"
               | 
               | -"Google Brain employees"
               | 
               | How are "Google Brain employees" distinct from "Google
               | leadership with Google Brain personnel in their
                | respective reporting line"? What are the criteria for
                | that distinction?
        
           | sangnoir wrote:
            | One of HN's failure modes is that inaccuracies get voted to
            | the top if they _seem_ correct to the majority of voters,
            | whose biases resonate with the poster's.
           | 
           | Aside: thank you for asking. When I previously encountered
           | incorrect top-level comments that I knew to be wrong (insider
           | information), I'd simply ignore and move on. You've inspired
           | me to push back more often.
        
             | bookofjoe wrote:
             | But not always! There are those among us who like nothing
             | better than to double down in a flame war. One nice thing
             | about having visited often over the past 7 years is that I
             | know whom to avoid responding to (for the most part).
        
             | hazn wrote:
              | This is an accurate reflection of the biases of humans in
             | general. A good story trumps truth.
        
               | nostrademons wrote:
               | And also why AI and LLMs are hot right now. A good story
               | trumps truth.
        
             | return_to_monke wrote:
             | this is actually what an AI would do!
        
         | victor106 wrote:
         | As much as I would like Google to compete strongly with
          | OpenAI (i.e., ClosedAI), I somehow have this feeling that they
         | are going to end up like IBM Watson.
        
           | 10xDev wrote:
            | ChatGPT wouldn't even exist without Google, e.g.
            | Transformers and deep RL.
        
             | nr2x wrote:
              | So sick of this line of corporate worship. "Google" didn't
              | invent anything, the employees did, and the top talent is
              | leaving. Ilya Sutskever was a very key person at Google
              | before he left to co-found OpenAI, and at most one person
              | from the "Attention Is All You Need" paper still even
              | works there.
              | 
              | The view outside Google of how great they are has zero
              | bearing on the realities inside the company. The company
              | is its people - and Google is no longer the place to be
              | if you have talent. Simple as that.
        
               | 10xDev wrote:
               | > "Google" didn't invent anything, the employees did
               | 
               | No shit, way to be pedantic over a common simple
               | abstraction. Do you want a list of every author for every
               | thing that is ever invented when someone refers to
               | something?
        
             | shmoogy wrote:
             | And that's the last thing they probably contribute with how
             | that turned out for Google
        
             | ErneX wrote:
             | Remember Xerox PARC and the GUI.
        
         | WastingMyTime89 wrote:
         | > This announcement either means the AI outside of Demis' team
         | has been neutered, or they're lining Demis up to be the scape
         | goat for missing AI.
         | 
          | I read that to mean: the party is over, we are treating this
          | as a strategic subject, and we are streamlining our
          | organisation. As you rightfully pointed out, Google basically
          | had two competing organisations, with all the complexity
          | associated with that. That's now over. From now on, there is
          | only one captain steering the ship.
        
         | matrix_overload wrote:
         | Don't worry, they will sunset it in 3 years, like every other
         | project.
        
         | deelowe wrote:
         | Do you have any prior knowledge of these teams? They weren't
         | working against each other. One group focused on research and
         | the other focused on products.
        
           | ChuckMcM wrote:
           | Not to be snarky but do you realize that what you have stated
           | is the _definition_ of working against each other? Research
           | teams are about getting to the paper and a deeper
           | understanding, product teams are about getting something out
           | the door that helps you capture value whether you understand
           | it or not. Engineering research teams are notorious for being
           | both ungovernable and spending so much time  "understanding"
           | their ideas that they miss the market window. The canonical
           | book on the subject for me was "Fumbling the Future" which
           | talked about Xerox PARC, I worked in Sun Labs ("where good
           | ideas go to die"), hired people out of Microsoft's BARC (Bay
           | Area Research Center), and worked in IBM's Watson group which
           | pulled a bunch of people out of research to "make a product
           | out of AI".
           | 
           | It is a really hard problem to "commercialize" imagination or
           | innovation. Two very different mindsets between "doing
           | product" and "doing research." DOW Chemical did a pretty good
           | job of it, but they have always been more "components of the
           | solution" rather than the full solution.
        
             | deelowe wrote:
             | It wasn't engineering research, it was pure computer
             | science. They published papers, attended conferences, etc.
              | The other team, whom I personally interacted with more,
              | were engaged in solution design. They would have a goal
              | (e.g. AlphaGo) and architect a solution for that specific
              | problem. The two teams were somewhat orthogonal from what I
             | recall.
        
           | jstx1 wrote:
           | Is it bad that I can't tell which one was focused on
           | products? It seems like neither of them was.
        
             | deelowe wrote:
             | Why should you? Neither were public facing.
        
       | [deleted]
        
       | karmasimida wrote:
       | Summarized by ChatGPT:
       | 
       | > DeepMind and Google Research's Brain team are merging to form a
       | new unit called Google DeepMind, which will combine their talents
       | and resources to accelerate progress towards building ever more
       | capable and general AI, safely and responsibly. This will create
       | the next wave of world-changing breakthroughs and AI products
       | across Google and Alphabet, while transforming industries,
       | advancing science, and serving diverse communities. The new unit
       | will be led by DeepMind CEO Demis Hassabis, with Eli Collins
       | joining the leads team as VP of Product, and Zoubin Ghahramani
       | joining the research leadership team reporting to Koray
       | Kavukcuoglu. A new Scientific Board for Google DeepMind will also
       | be created to oversee research progress and direction.
        
       | walnutclosefarm wrote:
       | Google has to be freaked out at the rapidity with which OpenAI
       | and Microsoft are taking their generative language models into
       | various markets. Look at the way Microsoft is (fairly
       | successfully) grabbing attention-share through the efforts of
       | Peter Lee and others in healthcare with GPT-4, e.g. - Google is
       | floundering in comparison (despite having a huge head start,
        | particularly through DeepMind). I don't know that I'm convinced
        | Microsoft can actually make good on the promises they're
        | making, but it'd be a daft bet on Google's part to assume
        | they can't.
        
       | aix1 wrote:
       | A lot of folks here seem to be jumping to the conclusion that
       | this means that DeepMind is losing its independence.
       | 
       | Other than the addition of the word "Google" - which could simply
       | be a rebranding exercise - I am yet to see any evidence in
       | support of that.
       | 
       | P.S. In particular, there haven't been any indications that
       | Demis's reporting line is changing.
        
       | Mandatum wrote:
       | We're still a long, long, long way from AGI.
       | 
       | Releases like this are more about stock price and investment than
       | anything else.
       | 
       | I'm glad we've put more investment into this area as ultimately
       | AGI will be able to uplift a large sector of the population that
       | historically went underserved, or at least level the playing
       | field.
       | 
       | But statements like this are meaningless wank.
        
       | krn wrote:
       | I am a big fan of Alphabet as a company, but this is how I read
       | the first two paragraphs...
       | 
       | > When Shane Legg and I launched DeepMind back in 2010, many
       | people thought general AI was a farfetched science fiction
       | technology that was decades away from being a reality.
       | 
       | Translation: "We were not able to see what the founders of OpenAI
       | saw back in 2015".
       | 
       | > Now, we live in a time in which AI research and technology is
       | advancing exponentially. In the coming years, AI - and ultimately
       | AGI - has the potential to drive one of the greatest social,
       | economic and scientific transformations in history.
       | 
       | Translation: "Now we live in a time in which AI research and
       | technology has advanced exponentially thanks to the great
       | achievements by our competitors - and we clearly feel left
       | behind."
        
         | auggierose wrote:
         | Right. You know that DeepMind did AlphaGo, right? It made its
         | entrance in 2015.
        
           | krn wrote:
           | And how does that compare with what OpenAI has accomplished
           | since 2015?
           | 
           | I'm not blaming DeepMind here.
           | 
            | It was Google's job not to start losing ground to Microsoft
           | in the age of AI.
        
             | scottyah wrote:
             | What has OpenAI accomplished other than a lot more
             | publicity for AI? If I've been following the story right,
             | Google and a few others have created all the tech
                | breakthroughs, and OpenAI just created a way for common
                | folk to play with it and then sold out to Microsoft.
        
               | krn wrote:
               | Microsoft has been an investor in OpenAI since 2019[1].
               | 
               | Now Samsung is considering replacing Google with Bing as
               | the default search engine on all Galaxy phones[2].
               | 
               | I think that's a big accomplishment for OpenAI. And it's
                | still an independent company.
               | 
               | [1] https://openai.com/blog/openai-and-microsoft-extend-
               | partners...
               | 
               | [2] https://www.sammobile.com/news/samsung-galaxy-phones-
               | tablets...
        
             | auggierose wrote:
             | Personally, I feel AlphaGo was the biggest deal ever. It
             | put AI truly on the map. OpenAI is just a corollary to
             | that, and would not exist without DeepMind in the first
             | place.
        
               | krn wrote:
               | That's probably true, because OpenAI was formed just two
               | months after AlphaGo defeated the European Go champion
               | Fan Hui.
               | 
               | But Google has missed a lot of opportunities since then,
               | and is now trying to catch up.
        
               | nicetryguy wrote:
               | > Personally, I feel AlphaGo was the biggest deal ever.
               | 
               | Right, but the cashier at McDonalds is using ChatGPT for
               | night school.
        
       | earthboundkid wrote:
       | > Now, we live in a time in which AI research and technology is
       | advancing exponentially. In the coming years, AI - and ultimately
       | AGI - has the potential to drive one of the greatest social,
       | economic and scientific transformations in history.
       | 
       | I'm not an AI Doomer, but is there some kind of scenario where
       | the coming of AGI doesn't trigger a communist revolution and a
       | lot of death and destruction along the way? I dunno, maybe it
       | could be a Fabian revolution, but seems pretty unlikely. Seems
       | more like AGI - everyone is pissed off that they still have to
       | work for a living - a lot of rich people with heads on pikes. Is
       | there some other scenario that's more likely? Doesn't feel that
       | way to me. Then again, I'm the creator of
       | https://bellriots.netlify.app/, so maybe I'm a Revolution Doomer.
        
         | scottyah wrote:
         | It'll get a lot of people off the internet, and wreck the lives
         | of those who stay addicted.
        
       | rvba wrote:
       | Reorganizing two teams that did their own thing will not reap
       | immediate benefits. It will take time.
       | 
       | Sounds like a PR move.
        
       | waselighis wrote:
        | There's an interesting history behind the RCA CED (Capacitance
        | Electronic Disc), an attempt to put video on vinyl. While the
        | full history behind its failure is complicated, a large factor
        | was the differing priorities between research and other
        | departments, which delayed the product by several years.
       | 
       | https://en.wikipedia.org/wiki/Capacitance_Electronic_Disc
       | 
       | https://www.youtube.com/watch?v=PnpX8d8zRIA
       | 
       | Considering some of the other comments about merging two AI
       | departments together (DeepMind and Brain) and injecting more
       | bureaucracy into DeepMind, it seems to have some parallels with
       | the story of the RCA CED. You can't just let researchers do
       | research. There needs to be a clear goal/priority that this
       | research can eventually be converted into a profitable product or
       | service. Otherwise, the researchers will continue to work on
       | "cool projects" and publishing papers with their name on them,
       | with little consideration given to how to monetize this research.
       | 
       | Personally, I'm not a fan of this AI gold rush trying to inject
       | AI into everything. It's just interesting to ponder.
        
       | tgtweak wrote:
       | I remain confident that it is impossible for a "startup" or
       | properly competitive standalone org to exist under the roof of
       | Google.
       | 
       | From reading these comments, it looks like this is at best
       | mitigating some internal conflict.
        
       | zmmmmm wrote:
       | Two specifics here that seem problematic for me and I am curious
       | about
       | 
        | 1) DeepMind was given very significant autonomy from day one of
        | being acquired. I find it very hard to believe that any attempt
        | to take that away won't result in huge internal problems and/or
        | attrition.
       | 
        | 2) Sundar Pichai has been coming in for a lot of criticism in
        | general because he seems to be constantly out-maneuvered by
        | Microsoft, and we have seen very little new emerge from Google
        | under his watch. Putting himself at the helm of this effort is
        | going to really accentuate that criticism and actually seems
        | high risk - if he is the reason Google is struggling to deliver
        | elsewhere, then positioning himself at the apex of an
        | existentially important effort could be lethal.
       | 
       | Added together, there seems like a high risk this could go
       | catastrophically wrong for Google, and Pichai in particular.
       | Maybe it will work, but the downside is enormous.
        
       | astrange wrote:
       | Do you think they'll remember they own Waze now?
        
       | schappim wrote:
        | TL;DR:
        | 
        | * DeepMind and Google Research's Brain team are merging into a
        |   single unit: Google DeepMind
        | * Goal: accelerate progress in AI and AGI development safely
        |   and responsibly
        | * Demis Hassabis is leading the new unit
        | * Close collaboration with Google Product Areas
        | * Aim: improve the lives of billions, transform industries,
        |   advance science, serve diverse communities
        | * Greater speed, collaboration, and execution needed for the
        |   biggest impact
        | * Combining world-class AI talent with resources and
        |   infrastructure
        | * DeepMind and Brain teams' research laid the foundations for
        |   the current AI industry
        | * A new Scientific Board for Google DeepMind will oversee
        |   research progress and direction
        | * Upcoming town hall meeting for further information and
        |   clarity
        
       | dahwolf wrote:
       | It all sounds so fancy, such grand vision.
       | 
        | In reality, this is just Sundar looking through the org chart
        | and saying: wow, these things seem related. Let's combine them,
        | because surely that will mean it starts working. Just so that
        | he can announce "something" as a growing army of sharks snaps
        | at his feet.
        
       | sdfghswe wrote:
       | I hear that Demis had been fighting this for a while. I guess he
       | lost.
       | 
       | Which..... of course he did. They don't make any money. That's
       | ultimately how these decisions are made.
       | 
       | I talked to one of their in-house recruiters (or HR or whatever)
       | some 5-6(?) years ago. I asked them how they make money, they
       | gave me a really muddled answer. It had the word "clients" in
       | there. I didn't understand, so I tried to clarify, I said "oh,
       | you make revenue from consulting for your clients?". Then they
       | gave me a crystal clear answer, they said: "No, we're a lab". I
       | noped outta there really fast.
       | 
        | In retrospect, I was right that I wouldn't have made any money,
        | but it might've been a good boost for my CV to have done it for
        | a couple of years.
        
       | paxys wrote:
       | I have commented this many times on such articles, and will say
       | it again:
       | 
       | Google still thinks of AI as a research project, or at best a way
       | to produce better search results. They essentially created the
       | entire current generation of the AI space and then... gave it
       | away, because no one on the product side understood what they had
        | actually built. Handing the reins to the DeepMind team - who
        | have never launched a single product in their history - seems
        | to be doubling down on that same failed strategy.
       | 
       | Google doesn't need more smart AI researchers, academics or
       | ethicists. They need product managers who understand the
       | underlying technology and can commercialize it. They need
       | pragmatic engineers who can execute, launch and maintain
       | services. That has always been their problem as a company.
        
         | momojo wrote:
         | I don't doubt your analysis of Google, but what was OpenAI able
         | to do differently to get them here? Aren't they just another
         | research hub?
        
         | bartwr wrote:
          | As someone who's been at Google Research for ~5 years, this
          | nails it 100%.
         | 
          | I was at the non-Brain part of Research, where Google Brain
          | was seen as the "cool", pure-research one, dealing with some
          | future abstract AI and not caring about products,
          | feasibility, or even whether the research "could" be made
          | practical any day.
         | 
         | Deepmind was an "extreme" version of it, with some animosities
         | and politics between the two, which I didn't follow too
         | closely. There were attempts at making Deepmind useful, called
         | "Deepmind for Google", but the people there were... clueless.
         | Though one really cool thing came out of it (XManager).
         | 
         | (I was at a closer to the product part, "Perception", which I
         | loved. And still got to publish, explore, pursue my own
         | research goals, etc.)
        
           | dmix wrote:
            | That's what's great about competition. It kicks you in the
            | pants and reminds you that you need to try.
           | 
           | iPhone scared the shit out of the phone market and today we
           | have great phones from Samsung and Google which dominate the
            | market. If everyone had tried to predict the smartphone
            | market in 2007, they'd have talked about how Nokia missed
            | the boat but been excited to see its response (or
            | Motorola's/Sony's/BlackBerry's, etc.). The market today
            | won't necessarily be the market 10 years from now. It might
            | be Google - they have a solid head start to be #2 and a
            | future #1 - but who knows what will happen and whether that
            | talent/advantage stays at Google.
           | 
           | It could just as easily be other companies we don't even
           | consider serious players today.
        
           | xiphias2 wrote:
           | Ilya _was_ at Google Brain, so something doesn't add up
           | there. I believe people wanted to launch things, but higher
           | management stopped it.
           | 
            | I was next to the team that created Allo's chat bot; they
            | said they had to take out most of the cool stuff because
            | legal didn't allow it to launch, so they had to dumb it
            | down completely.
           | 
           | I believe the main problem was all the ethics/safety teams
           | that just hired a lot of non-programmers, while OpenAI
           | management treated safety as an engineering problem that has
           | to be solved with a technical solution.
        
             | fnbr wrote:
             | Yeah, but Ilya left. Doesn't that prove your point?
        
         | safog wrote:
          | The cat's out of the bag now; it doesn't take a genius PM to
          | work it out. Maybe a genius PM could've worked out how
          | revolutionary generative AI was going to be, pre-ChatGPT
          | release, but I really doubt that a random MBA who knows nothing
         | about AI can do that. Every single day there's a cool new AI
         | application. The problem space is fairly fleshed out. It's a
         | matter of executing.
         | 
         | How do you make enterprise tools better? (Photoshop + AI, Code
         | + AI etc.) How do you make consumer tools better? (YT tools +
         | AI) How do you make search better?
         | 
         | etc. etc.
        
         | medler wrote:
         | Is there anyone who has successfully productized AI? ChatGPT
         | isn't a profitable product, at least not yet. Google Photos and
         | Spotify recommendations are the best AI products I can think of
         | with clear revenue, and in these examples AI is just a cherry
         | on top of a product people would use anyway.
        
           | paxys wrote:
           | OpenAI is growing revenues from ~0 in 2022 to $300M in 2023
           | to $1B in 2024. That sounds like a product to me.
        
             | okdood64 wrote:
             | It's not 2024 yet, and 2023 just started.
        
             | tensor wrote:
             | "Hopes to grow" revenues. Current estimates put hardware
             | costs alone at $700k/day (roughly $255M/year), so even if
             | they hit $300M in 2023 that won't make them profitable.
             | This isn't even counting the people costs and other
             | operating costs required to run a company.
             | 
             | edit: order of magnitude was wrong on costs per day.
        
               | vasco wrote:
               | Do you have the order of magnitude right at $700/day?
               | That's not much at all.
        
               | generalizations wrote:
               | And yet, $300M is only 1.25M subscribers at the current
               | $20/mo rate. If we say they need $1B/year to be
               | comfortably profitable, that's ~4.2M subscribers. A good
               | rule of thumb is that you can hope to convert about 10%
               | of your free user base to paid; one random source says
               | they have 100M monthly active users - which at 10%
               | conversion, is $2.4B / year. I think they'll be fine.
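               | That arithmetic can be checked with a quick script (all
               | inputs are the commenter's rough estimates, not OpenAI's
               | actual numbers):

```python
# Sanity check of the subscriber math above (rough estimates only).
annual_price = 20 * 12  # $20/mo ChatGPT Plus -> $240/year

subs_for_300m = 300e6 / annual_price  # subscribers needed for $300M/yr
subs_for_1b = 1e9 / annual_price      # subscribers needed for $1B/yr

mau = 100e6        # claimed monthly active users
conversion = 0.10  # the (optimistic) 10% free-to-paid rule of thumb
implied_revenue = mau * conversion * annual_price

print(f"{subs_for_300m:,.0f}")     # 1,250,000
print(f"{subs_for_1b:,.0f}")       # 4,166,667
print(f"${implied_revenue:,.0f}")  # $2,400,000,000
```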
        
               | ZephyrBlu wrote:
               | 10% is insanely high conversion for B2C freemium. It's
               | closer to 1% for most products.
        
               | illiarian wrote:
               | How much money will they spend servicing requests for
               | those 4.2M subscribers?
        
             | medler wrote:
             | It is currently April 2023. "~0 in 2022" is the only part
             | of that that seems credible. I'm not convinced by OpenAI's
             | rosy predictions of future explosive growth.
        
               | brokencode wrote:
               | The fact that Microsoft is baking GPT into all of their
               | products guarantees explosive growth.
               | 
               | ChatGPT is also one of the fastest growing consumer
               | products in history by number of users. At $20 a month
               | for plus, it could be a significant revenue stream.
               | 
               | Then add all the companies like Duolingo and Snapchat
               | that are using GPT as well.
               | 
               | If you don't see this as explosive growth, then I don't
               | know what to tell you.
        
             | illiarian wrote:
             | Revenues mean nothing if your expenses outpace them. What's
             | the net profit?
        
           | dougmwne wrote:
           | Github Copilot seems like a pretty clear example. They charge
           | a subscription that's in excess of the marginal cost of
           | inference.
        
             | medler wrote:
             | That's a good point. I forgot about Copilot.
        
             | deanc wrote:
             | I'd be astonished if they're even close to breaking even on
             | copilot. In its current incarnation it wouldn't even lace
             | the boots of what's coming out of OpenAI.
             | 
             | CopilotX with its OpenAI collab will be the real winner -
             | if it ever gets released to those on the waitlist. I'm not
             | aware of anyone who got in yet, which leads me to believe
             | it doesn't yet exist.
        
               | [deleted]
        
               | bugglebeetle wrote:
               | I got access to the Copilot CLI, which is supposed to be
               | part of the full package eventually. Dunno anyone who has
               | gotten access to Copilot Chat yet, which I expect is what
               | everyone really wants.
        
             | andygeorge wrote:
             | curious if/when MS will get desirable returns on the
             | significant investment needed to run/train copilot
        
               | dougmwne wrote:
               | It's a good question, but also helpful to point out that
               | one of the beauties of these models is that you can train
               | them once and deploy to many use cases. The same model
               | can be used by Github, Bing, Office 365, Azure and so on.
               | 
               | And as for the big multi-billion investment in OpenAI,
               | they may have more than made that back on their
               | valuation already. Plus the deal was structured so that
               | OpenAI would pay its revenues to Microsoft until the
               | investment was paid back, and MS would still end up with
               | a 49% stake.
               | 
               | All in all, it sounds like a smart investment from MS
               | and, cherry on top, it managed to majorly embarrass a
               | main rival.
        
               | andygeorge wrote:
               | agree on it being a good play by MS. will be interesting
               | to see if they spin it out to their other realms
        
           | Takennickname wrote:
           | The GPT API is a successful product. All those startups
           | that are just a thin layer over GPT, funded by Y Combinator,
           | are paying for API use, and that's profitable for OpenAI.
        
             | nomel wrote:
             | > and that's profitable for OpenAI.
             | 
             | Reference?
             | 
             | "Profitable" means they're making more money than they're
             | using, at this moment.
        
               | Takennickname wrote:
               | Are you implying openai is selling access to their API at
               | a loss?
        
               | nomel wrote:
               | No, I would like facts, not assumptions. It's definitely
               | not safe to assume they are making a profit, as a whole,
               | or per transaction. It's more complicated than that.
               | 
               | Profit has a strict definition of $revenue - $cost, for a
               | business operation as a whole, which leaves money in the
               | bank at the end of the month.
               | 
               | They could be making more money for a single query than
               | the cost of compute time for that single query, but that
               | may not cover the engineering and idle servers. They
               | could be running at a loss with the assumption that they
               | can improve efficiency per transaction soon. They could
               | be running at a "loss" because they're giving some of the
               | compute away for free right now, to improve the training
               | with the user responses. Or maybe they are making
               | fistfuls of money. "Profitable" has a strict meaning,
               | shouldn't be assumed, and definitely isn't required, at
               | this point in their operation.
               | 
               | I'm very interested to know if they are profitable, at
               | the moment, but I don't think that's been publicly
               | disclosed yet, and I can't find anything. A reference is
               | required.
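               | To illustrate with entirely made-up numbers - a positive
               | per-query margin does not imply overall profit:

```python
# Toy illustration (every number is made up): a positive margin on
# each query does not imply the operation as a whole is profitable,
# because fixed costs still have to be covered.
revenue_per_query = 0.010   # $ earned per API call (hypothetical)
compute_per_query = 0.008   # $ of compute per call (hypothetical)
queries_per_month = 500_000_000

per_query_margin = revenue_per_query - compute_per_query  # positive
gross_margin = per_query_margin * queries_per_month       # ~$1M/month
fixed_costs = 5_000_000   # engineers, idle servers, etc. (hypothetical)

profit = gross_margin - fixed_costs  # negative: a loss overall
print(per_query_margin > 0, profit < 0)  # True True
```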
        
           | riffraff wrote:
           | If you consider Spotify recommendations AI, then you should
           | also count YouTube and every social network built on a
           | non-chronological timeline and ads, no?
        
           | gilbetron wrote:
           | I've seen MidJourney's estimated revenue at about
           | $750k/month. Not bad.
        
             | yunwal wrote:
             | This seems likely to be lower than their costs. Is there a
             | breakdown somewhere?
        
         | walnutclosefarm wrote:
         | I completely agree. I was involved as a tech executive at a
         | large medical center trying to get collaborative work with both
         | Google per se, and DeepMind, to a usable or product stage, and
         | it was essentially impossible. DeepMind in particular was more
         | interested in pushing the research envelope, and getting more
         | papers in Nature, than in building products.
         | 
         | I wouldn't underestimate the degree to which this is by design,
         | from the very top of Google. Different Google and other
         | Alphabet companies' executives more than once told me they just
         | weren't interested in products that didn't have an obvious path
         | to more than 1 billion users. The companies don't have a clue
         | how to make money retail. If they can't print money with an
         | idea, they don't have the tools and skills to bring it to
         | market.
        
         | lallysingh wrote:
         | They're the new Xerox, looking at PARC's output and asking,
         | "How will this make our copiers better?"
         | 
         | Internally, everyone's asking "How will this help my promo
         | packet?"
        
           | wmeredith wrote:
           | Absolutely. You could also sub in Microsoft, looking at the
           | internet in 2002 and asking, "how could this make Windows
           | desktop better"?
        
             | crop_rotation wrote:
             | IE was already dominant by 2002. Microsoft didn't ignore
             | the Internet. They went hell bent on it during Gates era
             | and won decisevely. It's only when they had no competition
             | IE got stuck and then surpassed.
        
               | paxys wrote:
               | If we're talking about bundling a web browser inside
               | their OS, sure. It was more that Microsoft missed the
               | entire
               | potential of the internet as a whole and how
               | fundamentally transformational it could be. They had no
               | presence in online services, e-commerce, search and more
               | until they saw competitors eating their lunch, and have
               | been lagging behind ever since.
        
               | crop_rotation wrote:
               | They had a presence in online services, MSN is older than
               | even IE.
        
               | paxys wrote:
               | AOL was already dominant by the time MSN launched. Yahoo
               | was close behind. Even back then Microsoft was playing
               | catch up in the space.
        
         | attractivechaos wrote:
         | > _they need product managers who understand the underlying
         | technology and can commercialize it._
         | 
          | I would say what they need more is engineers who care about
          | and can make good products. In my limited experience, it
          | takes time to turn a research-focused group into a product-
          | oriented team. Research and production require different
          | skill sets.
        
       | pcj-github wrote:
       | Will Google stop sharing progress in foundational AI research
       | with its competitors now?
        
         | jeffbee wrote:
         | Why? The foundational research papers have all come from Google
         | Brain/Google Research and not from DeepMind.
        
       | ugh123 wrote:
       | I'm curious what this will mean for DeepMind's work in medical
       | and bioscience applications, versus what may now be more aligned
       | with Google products and Anthropic, which seem to prioritize
       | commercializing consumer applications over science.
        
         | theGnuMe wrote:
         | There's always Isomorphic Labs.
        
       | xnx wrote:
       | I'm surprised to see so many comments in this thread criticizing
       | Google for not milking more money out of their AI research
       | sooner. Not being a shareholder, I'm pretty happy with how they
       | catalyzed the modern AI revolution and have worked on very hard
       | and meaningful problems like protein folding.
        
         | dougmwne wrote:
         | People can be negative and critical for no reason. In this
          | case, I think criticism is due because Google's failure to
          | productize has led them to a potential existential disaster.
         | Most of their revenue depends on search and there being an
         | ecosystem of websites to link to and display even more of their
         | ads. Generative AI is an existential risk to their current
         | search interface, the ability to insert ads into that
         | experience and there even being any ad-supported websites with
         | free content to link to.
         | 
         | With the reports that Samsung may switch to Bing, you could
         | quickly see an exodus in users over to chat search. It wouldn't
         | take much lost revenue to implode Google's business model and
         | the business model of every ad-supported site on the internet.
        
           | xnx wrote:
           | Fair. ChatGPT is definitely the most serious event in
           | Google's long and utterly dominant history of web search.
           | Samsung is definitely using ChatGPT as a negotiating tactic
           | with Google. I assume that ad revenue share is part of the
           | arrangement. There's no way that Bing Ads could match the
           | amount that Google Ads pays out to Samsung. This whole space
           | is moving excitingly fast, but it's still too early to claim
           | that anyone has "won" the space. If I had to bet who had the
           | most advanced and most used AI service 5 years from now, I
           | would definitely bet on Google.
        
       | fudged71 wrote:
       | I hear there's office space they could use in Edmonton ;)
        
       | gojomo wrote:
       | Sure, a reorg'll fix things.
        
         | disgruntledphd2 wrote:
         | Refactoring for directors.
        
       | felixfurtak wrote:
       | Good name. I guess 'Deep Thought' was already taken.
        
       | javier_e06 wrote:
       | [flagged]
        
       | nwoli wrote:
       | Sounds like a smart decision short term, questionable long term
       | (more focus on product instead of fundamental research).
        
         | luma wrote:
         | Their existing "all research/no product" approach is becoming
         | increasingly untenable. This had to happen if Google wants to
         | remain a going concern.
        
           | nwoli wrote:
           | Transformers were fundamental research that ended up having
           | huge side benefits. (Google has plenty of money to keep
           | spending on fundamental research, especially focused on smart
           | things like ML.)
        
             | dougmwne wrote:
             | Inventing transformers may prove to be Google's undoing.
             | Being a research paper factory doesn't seem to be good for
             | shareholders.
        
               | politician wrote:
               | Still, inventing XMLHttpRequest wasn't Microsoft's
               | undoing, but for many years we suspected it might be.
               | This might be a similar situation.
        
       | dflock wrote:
       | If only Sundar Pichai were as good at executing on product &
       | strategy as he is at winning internal fights at Google.
        
       | mnd999 wrote:
       | Expecting all their products to get cancelled in ~6 months.
        
       | HopenHeyHi wrote:
       | If Google were to go on a startup acquisition spree in this hot
       | new competitive space in a further attempt to catch up - how
       | would they locate and assess potential companies?
       | 
       | Asking for a friend.
        
         | ipaddr wrote:
          | Most likely they would buy someone they have already started
          | a relationship with.
          | 
          | Your friend needs an introduction.
        
       | mebazaa wrote:
       | For context: this is pretty surprising, given the significant
       | amount of independence Deepmind had within Google. So much so, in
       | fact, that they tried for a long time to be spun off from Google:
       | https://www.wsj.com/articles/google-unit-deepmind-triedand-f...
        
         | cubefox wrote:
         | Yeah. DeepMind has sought more independence than they had, but
         | now they have lost it completely. It seems there was an
         | internal power struggle and Google won.
        
           | pneumonic wrote:
           | Sounds like some key people will be leaving "Google Deepmind"
           | in the next few months.
        
         | mFixman wrote:
          | One of the biggest liberties DeepMind had was a hiring
          | process and pipeline completely separate from Google's.
          | 
          | Their open positions mysteriously disappeared in November
          | last year and are still closed, outside of specific senior
          | roles and a very open-ended "register your interest if you
          | have a PhD".
          | 
          | It would be a big loss for DeepMind if the separate pipeline
          | is lost. Being able to hire for their own priorities instead
          | of whatever Google is hiring for is one of the reasons it was
          | so successful.
        
       | bgirard wrote:
       | > When Shane Legg and I launched DeepMind back in 2010, many
       | people thought general AI was a farfetched science fiction
       | technology that was decades away from being a reality.
       | 
       | Well it was at least a decade away.
        
         | skilled wrote:
         | I apologize for the confusion.
        
       | [deleted]
        
       | [deleted]
        
       | hintymad wrote:
       | > Sundar, Jeff Dean, James Manyika, and I have built a fantastic
       | partnership as we've worked to coordinate our efforts over
       | recent months. [...] We're also creating a new Scientific Board
       | for Google DeepMind to oversee research progress and direction
       | of the unit, which will be led by Koray and will have
       | representatives from across the orgs. Jeff, Koray, Zoubin, Shane
       | and myself will be finalising the composition of this board
       | together in the coming days.
       | 
       | How is it different from Google's structure of having reviewing
       | committees over everything? I hope that this is not yet another
       | layer of gatekeepers. In a large enough organization, the high-
       | level leads have such fragmented attention and such an ingrained
       | tendency towards avoiding political mistakes that they mainly
       | contribute concerns instead of ideas, especially product ideas.
       | As a result, they become gatekeepers and projects slow down. The
       | larger an oversight committee is, the more concerns a project
       | will receive, and the more mediocre the project will be because
       | the team will focus on making the committee happy instead of
       | making hard trade-offs with fast iterations. Of course, the
       | Scientific Board consists of people well above my caliber, so they
       | may well do a fantastic job for Google.
        
       | rvz wrote:
       | About time - finally some very _serious_ competition for
       | OpenAI.com. Unsurprising, though, that DeepMind would be
       | directly involved [0] and merged with Brain.
       | 
       | Now let's get on with accelerating the real AI race to zero and
       | the big fight against OpenAI.com, X.AI and the other stragglers.
       | 
       | Stay very tuned to this.
       | 
       | [0] https://news.ycombinator.com/item?id=35508997
        
         | open592 wrote:
          | Funnily enough, your comment made me wonder who owns "ai.com"
          | - so I tried it out and got redirected to
          | "https://chat.openai.com/".
        
           | cpeterso wrote:
           | ai.com's domain registration doesn't seem to be owned by
           | OpenAI. And, ironically, ai.com's registrar (not registrant)
           | is Google LLC.
           | 
           | https://whois.domaintools.com/openai.com
           | 
           | https://whois.domaintools.com/ai.com
        
             | endorphine wrote:
             | They're probably renting it from the original owner.
        
           | SXX wrote:
           | I guess we can just call them that without "open" part.
        
       | vessenes wrote:
       | Sundar's email mentions something critical - Jeff Dean is going
       | to be the Chief Scientist in DeepMind, and coordinate back to
       | Sundar. This is a big deal; that move tells you that Google is
       | taking being behind on public-facing AI seriously; Dean is an
       | incredibly valuable, incredibly scarce resource.
       | 
       | If we wind way back to Google Docs, Gmail and Android strategy,
       | they took market share from leaders by giving away high quality
       | products. If I were in charge of strategy there, I would double
       | down on the Stability / Facebook plan, and open source PaLM
       | architecture Chinchilla-optimal foundation models stat. Then I'd
       | build tooling to run and customize the models over GCP, so open +
       | cloud. I'd probably start selling TPUv4 racks immediately as
       | well. I don't believe they can win on a direct API business model
       | this cycle. But, I think they could do a form of embrace and
       | extend by going radically open and leveraging their research +
       | deployment skills.
        
         | tempusalaria wrote:
         | Jeff Dean is clearly one of the greatest software
         | developers/engineers ever but there isn't much evidence that he
         | is a brilliant ML researcher
         | 
          | And indeed Google AI has achieved very little product-wise
          | during his time at the helm, which kind of suggests he is a
          | big part of the bureaucratic challenges they have faced.
        
           | karmasimida wrote:
            | He has the perfect balance of being a legendary engineer
            | and an ML researcher.
            | 
            | I can't emphasize enough how much rigorous engineering
            | practice can accelerate research delivery. It is THE key to
            | having a productive research-oriented team.
            | 
            | Good research engineers are underrated, and very difficult
            | to find.
        
           | ReptileMan wrote:
           | >Jeff Dean is clearly one of the greatest software
           | developers/engineers ever but there isn't much evidence that
           | he is a brilliant ML researcher
           | 
            | Google has an oversupply of brilliant ML researchers. What
            | they need is an engineer who sees the applications of the
            | technology so it can be turned into a product - someone who
            | can bridge the gap between the R&D team and the
            | bureaucracy.
            | 
            | Want an idea for a stupid product? Input: a description of
            | a girl, her hobbies, some minor flaws. Output: a poem. I've
            | been using Vicuna quite successfully for that purpose.
        
           | HarHarVeryFunny wrote:
           | > Jeff Dean is clearly one of the greatest software
           | developers/engineers ever
           | 
           | Based on what? I've heard all the Chuck Norris type jokes,
           | but what has Jeff Dean actually accomplished that is so
           | legendary as a software developer (or as a leader) ?
           | 
           | Per his Google bio/CV his main claims to fame seem to have
           | been work on large scale infrastructure projects such as
           | BigTable, MapReduce, Protobuf and TensorFlow, which seem more
           | like solid engineering accomplishments rather than the stuff
           | of legend.
           | 
           | https://research.google/people/jeff/
           | 
           | Seems like he's perhaps being rewarded with the title of
           | "Chief Scientist" rather than necessarily suited to it, but I
           | guess that depends on what Sundar is expecting out of him.
        
             | ericjang wrote:
             | Jeff was very early on in the "just scale up the big brain"
             | idea, perhaps as early as 2012 (Andrew Ng training networks
             | on 1000s of CPUs). This vision is sort of summarized in
              | https://blog.google/technology/ai/introducing-pathways-next-...
              | and fleshed out more in
             | https://arxiv.org/abs/2203.12533, but he had been
             | internally promoting this idea since before 2016.
             | 
             | When I joined Brain in 2016, I had thought the idea of
             | training billion/trillion-parameter sparsely gated mixtures
             | of experts was a huge waste of resources, and that the idea
             | was incredibly naive. But it turns out he was right, and it
             | would take ~6 more years before that was abundantly obvious
             | to the rest of the research community.
             | 
              | Here's his scholar page (h-index of 94):
              | https://scholar.google.com/citations?hl=en&user=NMS69lQAAAAJ...
             | 
             | As a leader, he also managed the development of TensorFlow
             | and TPU. Consider the context / time frame - the year is
             | 2014/2015 and a lot of academics still don't believe deep
             | learning works. Jeff pivots a >100-person org to go all-in
             | on deep learning, invest in an upgraded version of Theano
             | (TF) and then give it away to the community for free, and
             | develop Google's own training chip to compete with Nvidia.
             | These are highly non-obvious ideas that show much more
             | spine & vision than most tech leaders. Not to mention he
             | designed & coded large parts of TF himself!
             | 
             | And before that, he was doing systems engineering on non-ML
             | stuff. It's rare to pivot as a very senior-level engineer
             | to a completely new field and then do what he did.
             | 
             | Jeff certainly has made mistakes as a leader (failing to
             | translate Google Brain's numerous fundamental breakthroughs
             | to more ambitious AI products, and consolidating the
             | redundant big model efforts in google research) but I would
             | consider his high level directional bets to be incredibly
             | prescient.
        
               | panabee wrote:
               | thanks for this insightful perspective.
               | 
                | 1. what was the reasoning behind thinking
                | billion/trillion parameters would be naive and wasteful?
                | perhaps parts of that reasoning were right and could
                | inform improvements today.
               | 
               | 2. can you elaborate on the failure to translate research
               | breakthroughs, of which there are many, into ambitious AI
               | products? do you mean commercialize them, or pursue
               | something like alphafold? this question is especially
               | relevant. everyone is watching to see if recent changes
               | can bring google to its rightful place at the forefront
               | of applied AI.
        
             | summerlight wrote:
             | > large scale infrastructure projects such as BigTable,
             | MapReduce, Protobuf and TensorFlow
             | 
              | If you initiated and successfully landed large-scale
              | engineering projects and products that have transformed
              | the entire industry more than 10 times over, that
              | qualifies you as a "legend".
        
           | VirusNewbie wrote:
            | It seems like being an incredible software engineer is more
            | important these days - look at Greg Brockman's background.
        
             | earthboundkid wrote:
             | That's my bias as well. To me, it seems like every day
             | someone releases a new AI toy, but the thing you would
             | actually want is for a real software engineer to take the
             | LLM or whatever, put it inside a black box, and then write
             | actually useful software around it. Like off the top of my
             | head, LLM + Google Calendar = useful product for managing
             | schedules and emailing people. You could make it in a day
             | of tinkering as a langchain demo, but actually making a
             | real product that is useful and doesn't suck will require
             | good old fashioned software engineering.
        
               | tempusalaria wrote:
               | Based on the multitask generalisation capabilities shown
               | so far of LLMs I'm kinda in the opposite camp - if we can
               | figure out more data efficient and reliable architectures
               | base language models will likely be enough to do just
                | about anything and take general instructions. Like, you
                | can just tell the language model to operate directly on
                | Google Calendar with suitably supplied permissions, and
                | it can do it with no integration needed.
        
               | danielmarkbruce wrote:
               | Exactly this. There is a reasonable chance the GUI goes
               | the way of the dodo and some large (75% or something)
               | percentage of tasks are done just by typing (or speaking)
               | in natural language and the response is words and very
               | simple visual elements.
        
               | BarryMilo wrote:
               | What you're describing is AGI levels of autonomy. There
               | are quite a lot of missing pieces for that to happen I
               | think.
        
               | danielmarkbruce wrote:
               | Have you used GPT-4? People are already building agents
               | to do things the above comment refers to.
        
             | tempusalaria wrote:
              | Right, but Ilya is the Chief Scientist of OpenAI, not
              | Greg Brockman.
        
               | fdgsdfogijq wrote:
                | I heard Ilya wasn't behind the big innovations at
                | OpenAI. It was lesser-known scientists.
        
               | tempusalaria wrote:
                | From a research point of view, OpenAI hasn't really
                | had any "big innovations". At least I struggle to think
                | of any published research they have done that would
                | qualify in that category. Probably they keep the good
                | stuff for themselves.
               | 
               | But Ilya definitely had some big papers before and he is
               | widely acknowledged as a top researcher in the field.
        
               | fdgsdfogijq wrote:
                | I think the fact that there are no other publicly
                | available systems comparable to GPT-4 (and I don't
                | think Bard is as good) points to innovation they
                | haven't released.
        
           | forgot-my-pw wrote:
            | He worked on TensorFlow. So even if he doesn't do ML
            | research himself, at least he works on the tooling.
        
             | modeless wrote:
             | TensorFlow was honestly not that good. It had a lot of
             | effort put into it, so it worked, but there are reasons
             | people moved away from it.
             | 
             | I think Jeff Dean is a great engineer, but I wouldn't hold
             | up TensorFlow as a great example.
        
               | theGnuMe wrote:
               | I always thought he pair programmed so it's been Jeff +
               | Sanjay.
        
             | hungryforcodes wrote:
              | But who uses TensorFlow -- really -- these days?
        
               | hyperhopper wrote:
               | What else are people using then?
        
               | acmj wrote:
               | PyTorch everywhere
        
               | modeless wrote:
               | Or JAX
        
             | tempusalaria wrote:
             | Of course he is someone any technology organisation would
              | want to have as a resource. But probably not as Chief
              | Scientist or CEO of an ML company, based on the available
              | evidence.
        
           | kortilla wrote:
           | Being a brilliant ML researcher has approximately zero
           | overlap with making products for people to use.
        
           | bartwr wrote:
           | Furthermore, Jeff is not a great (or even good...)
            | manager/director/leader. There were a lot of internal and
            | external dramas under his leadership that he failed to
            | address. How often do you hear about dramas about other
            | Chef Scientists at other, comparably sized, companies?
           | 
           | He should stay a Fellow, in a "brilliant consultant" role.
        
             | q7xvh97o2pDhNrh wrote:
             | > How often you hear about dramas about other Chef
             | Scientists
             | 
             | I know this is a typo, but I would _love_ to hear about
             | high drama involving Chef Scientists at large companies.
             | 
              | It would have all the nonstop action of _Iron Chef_,
              | combined with the multilayered scheming of _Succession_...
              | I think there's something there.
        
         | HarHarVeryFunny wrote:
         | There was also a separate DeepMind announcement made by
         | Hassabis:
         | 
         | https://www.deepmind.com/blog/announcing-google-deepmind
         | 
         | It seems that DeepMind has now gone from what had appeared to
         | be a blue sky research org to almost a product group, with
         | Google Research now being the primary research group.
         | 
         | Jeff Dean's reputation has always been as an uber-engineer, not
         | any kind of visionary or great leader, so it's not obvious how
         | well suited he's going to be to this somewhat odd role of Chief
         | Scientist both to Google DeepMind and Google Research.
         | 
         | How things have changed since OpenAI was founded on the fear
         | that Google was becoming an unbeatable powerhouse in AI!
        
         | neximo64 wrote:
          | Jeff Dean is the reason OpenAI has beaten Google as it stands
          | today, which doesn't lend much weight to this being a good
          | decision. He was too risk-averse.
        
           | [deleted]
        
         | uptownfunk wrote:
          | The AI primitives are pretty basic. The real brains are in
          | figuring out how to make the best model. The engineering
          | integration is pretty straightforward.
        
         | dougmwne wrote:
         | That sounds like a fairly brilliant counter to "Open"AI.
         | Something tells me that Google is still too scared of this tech
         | in the hands of the public to go there though.
        
           | owlglass wrote:
           | What indicates that Google is "too scared" as opposed to
           | merely lagging behind?
        
             | dougmwne wrote:
             | Piles of previous statements talking about safety and a
             | general unwillingness to put any of its models in the hands
             | of anyone not under NDA. Bard is the first counterexample I
             | can think of and that was forced by OpenAI.
        
             | scottyah wrote:
              | Anecdotally, Bard is much better at guiding the responses
              | than ChatGPT. They're slowly releasing more and more
              | "features" as they feel comfortable. You can see what
              | they're up to with the AI Test Kitchen app.
              | 
              | I kinda liked how open ChatGPT was before the heavy
              | filtering, but I see why we need to rein in the chaos
              | overall.
        
         | jstx1 wrote:
         | This announcement, including the leadership changes, sounds
         | more like they've shut down DeepMind and moved everyone over to
         | Google Brain. Keeping the DeepMind name for the new team is a
         | clever trick to make it look like more positive news than it
         | actually is.
        
           | aix1 wrote:
           | Demis remains at the helm though, doesn't he?
        
             | andygeorge wrote:
             | ehhhhhhh sure but for how long? these statements stick out
             | to me:
             | 
             | - "I'm sure you will have lots of questions about what this
             | new unit will look like" aka we're not going to talk about
             | specifics in public comms
             | 
             | - "Jeff Dean will take on the elevated role ... reporting
             | to me. ... Working alongside Demis, Jeff will help set the
             | future direction of our AI research" aka Demis isn't the
             | only Big Dog in the room anymore
        
             | soVeryTired wrote:
             | Honestly is Demis that big a deal? I always figured it was
             | the researchers a layer or two down (e.g. David Silver) who
             | were doing the real work.
        
               | aix1 wrote:
               | He is key to defining the culture (secrecy etc). There is
               | a huge culture difference between Brain and DM and, with
               | Demis at the helm, I'm concerned that it'll be Brain
               | moving towards DM culture, not vice versa.
        
       | [deleted]
        
       | SeanAnderson wrote:
       | A couple of thoughts:
       | 
       | - This does not seem unexpected. Google is panicked about losing
       | the AI race and pushing resources into DeepMind is a logical step
       | to mitigating those fears.
       | 
       | - Google has currently given ~300M to Anthropic and has a
       | partnership with them. I assume Google continues to see potential
       | in both avenues and won't neglect one AI team for the other. I'm
       | guessing that DeepMind will be their primary focus because of the
       | numerous, real-world applications already at play.
       | 
       | - It's tough for me to compare Google DeepMind to OpenAI GPT4.
       | They seem to be very different approaches. Yet, they both have
       | support for language and imagery. So, perhaps they aren't that
        | different after all?
       | 
       | - Still waiting to hear more from Google on how they plan to
       | leverage their novel PaLM architecture. The API for it was
       | released a month ago, but, to my awareness, has yet to take the
       | world by storm. (Q: Bard isn't powered by PaLM, right?)
       | 
       | Overall, I am not convinced this will be massively beneficial. I
       | don't trust Google's ability to execute at scale in this area. I
       | trust DeepMind's team and I trust Google's research teams, but
       | Google's ability to execute and take products to market has been
       | quite weak thus far. My gut says this action will hamstring
       | DeepMind in bureaucracy.
        
         | jstx1 wrote:
         | > It's tough for me to compare Google DeepMind to OpenAI GPT4.
         | 
         | Is it tough because one of these is a newly merged and
         | rebranded team and the other is a machine learning model?
        
           | atorodius wrote:
           | Yoo this killed me :D
        
         | 1-6 wrote:
          | AI is not a winner-take-all scenario. The pond is so large
         | that there will be many winners.
         | 
         | One day, with AGI and autonomous agents, the goal will be to
         | merge neural network meshes together in order to gather highly
         | specialized datasets.
        
         | GreedClarifies wrote:
         | "this action will hamstring DeepMind in bureaucracy."
         | 
         | I'm sorry but I fail to see the problem with this. DeepMind has
         | made _very_ impressive demos and papers, but they have yet to
          | add one dollar of revenue to Google's bottom line. Further
         | they have drained billions from Google.
         | 
         | Google has to, somehow, get completely out of the research
         | paper game and into the product game.
         | 
         | Papers have to have little/no impact on perf going forward.
          | Other than a small windfall to goodwill, they are a
          | misalignment between the company's goals and those of the
          | employees.
          | 
          | Products, Google, products. Unless Larry and Sergey want to
          | turn Google into a non-profit research think tank. Which
          | would be fine, but likely with substantially lower headcount.
          | Even they aren't that wealthy.
        
           | tobyjsullivan wrote:
           | Different people excel at different types of work
           | (particularly where deep experience is the most significant
           | contributor to performance). Tasking academic researchers
           | with building product is the pathway to hell.
           | 
           | The existing, top-performing product teams at Google should
           | be taking that research and building products around it. If
           | Google has any top-performing product teams left, that is...
        
           | ahzhou wrote:
           | Yeah, Google needs to make sure that their research doesn't
           | go the way of Xerox PARC.
        
           | Closi wrote:
           | > DeepMind has made very impressive demos and papers, but
           | they have yet to add one dollar of revenue to Google's bottom
           | line. Further they have drained billions from Google.
           | 
           | You could say the same about OpenAI and Microsoft, they
           | drained money for years until about 6 months ago when
           | suddenly the partnership started to pay back big style.
        
             | tempusalaria wrote:
             | OpenAI is still massively unprofitable and MSFT is (rightly
             | IMO) going to invest way more money in them so it's
              | definitely still a drain. A modest drain relative to
              | MSFT's overall resources.
        
               | CydeWeys wrote:
                | At least the path to profitability is super clear
                | though: selling GPT-4 access. What's the path to
                | profitability on
               | AlphaGo or whatever?
        
               | freedomben wrote:
               | As much as I'd love an OpenAI-style API from Google, I'm
               | not expecting that. It will probably be "profitable" to
               | them in the unseen backend making Search, Google
               | Assistant, etc better. I've been playing with Bard a lot
               | and it's pretty good, but OpenAI's API offering just
               | makes them so much more useful to me since I can use
               | whatever app I want (or even write my own) to consume the
               | product, and it's easy for me to see the value for my
               | dime.
        
           | chaxor wrote:
            | "Papers have little/no impact on perf" - this is a
            | ridiculous and false claim. Almost every single advancement
            | in any field has come from academia. Sure, it may not be
            | recognized as such by the general public, because they
            | aren't experts in the area, but the fact remains that
            | academia is pretty much the only way to progress as a
            | society. Companies just take what academia gives them and
            | make products out of it for their own profit (not to
            | completely trivialize that - it still comes with its own
            | set of challenges), but the private sector is completely
            | misaligned with making real progress on hard problems.
            | DeepMind, despite being a 'corporate entity', continues to
            | show this: its large advancements come from its employment
            | of professors at universities (i.e. giving its excess
            | capital to researchers) who focus on their research.
        
             | spokesbeing wrote:
             | Huh? None of this is true for a lot of core recent work. A
             | very obvious example is transformers, which did not come
             | out of academic research (or DeepMind for that matter) at
             | all.
        
             | sashank_1509 wrote:
             | > Almost every single advancement in any field has come
             | from Academia
             | 
              | This sounds like it needs far more evidence. If you
              | define academia as the institution where papers are
              | shared, sure, but then that's just a sharing mechanism.
              | Almost like saying all advancements came out of the
              | Internet because arXiv is where research is shared.
             | 
              | If you want to say professors and universities have been
              | heralding AI advancement, that has not been true for at
              | least 10 years, possibly more. The moment industry
              | started getting into academia, academia couldn't compete
              | and died out. Even the Transformer, the founding paper of
              | the modern GPT architectures, came out of Google
              | Research. In vision, ResNet, Mask R-CNN, and Segment
              | Anything came out of Meta / Microsoft. The last great
              | academic invention might have been dropout, and even that
              | involved Apple. After that I fail to see academia coming
              | up with a single invention in ML that the rest of the
              | community instantly adopted because of how good it was.
        
           | minsc_and_boo wrote:
           | >DeepMind has made very impressive demos and papers, but they
           | have yet to add one dollar of revenue to Google's bottom
           | line.
           | 
           | DeepMind has researched and developed features that exist in
           | many Google products today, e.g. Wavenet:
           | https://www.deepmind.com/research/highlighted-
           | research/waven...
        
             | ur-whale wrote:
              | LOL, if you look at the amount of money Google has poured
              | into DeepMind and how much they got back for their
              | investment, it's laughable.
             | 
             | Things like the Wavenet "contributions" are just Demis
             | paying lip service to the fact that once in a while Google
             | was nudging them to produce something, _anything_ really
             | that was actually useful.
        
               | scottyah wrote:
                | Google putting the extreme amounts of easy dollars they
                | have into things that aren't instantly profitable is
                | very much what the founders said they'd do, though.
        
           | mrbungie wrote:
           | I don't know if said bureaucracy is a blessing or a curse
           | given Google's track record in product management. If pressed
           | I would bet towards the curse option.
        
           | tempusalaria wrote:
            | In 2021, DeepMind generated $2bn in revenue.
            | 
            | This was paid by Google for unspecified research services.
            | But given the way it's accounted for, it's likely that it
            | was based on some legitimate contribution. It is unlikely
            | it would be structured this way if it were just corporate
            | support.
           | 
           | DeepMind has public financial filings and you can go read the
           | exact language they use to describe the revenue they
           | generate.
        
             | dougmwne wrote:
             | Sounds like corporate funny money to me, not real revenue.
        
               | tempusalaria wrote:
               | If it was, it probably wouldn't be accounted for in
               | taxable fashion as it is today.
               | 
               | DeepMind is profitable and paying tax on that profit.
               | It's public information that you can see in its UK
               | regulatory filings.
        
               | dougmwne wrote:
               | Fair point. I didn't realize it was taxable as I am used
                | to only profits being taxed and assumed DeepMind is run
               | at a loss.
        
             | jstx1 wrote:
             | It's like bragging about earning a good salary and then it
             | turns out that you work for your dad.
        
           | HybridCurve wrote:
           | I feel like google crossed some point about a decade ago
           | where they stopped making innovative stuff and started
           | focusing on squeezing revenue out of everything else. A bit
           | like when Carly turned HP into a printing/ink racket. Both
           | the decline of google maps and the inability of google to
           | filter out noise from their search results are strong
           | indicators of this for me. Scrambling to field a competing
           | product to maintain relevancy in this emerging market would
           | be consistent with this assessment as well. The old google
           | would have fielded the product first because it was useful,
           | but the current google seems to do it because they don't want
           | to lose revenue.
        
             | minwcnt5 wrote:
             | I'd say that point was a bit less than a decade ago - Aug
             | 10, 2015.
        
         | bradgessler wrote:
         | This reminds me of Nest. When it was separate, it was shipping
         | great hardware and OK software. Then Google appended "Google"
         | in front of it, creating "Google Nest" and kicked off the slow
         | Google Hug of Death(tm).
         | 
         | The first casualty was Nest shutting down its APIs, cutting off
         | an ecosystem of third party integrations.
         | 
         | The next casualty was replacing the Nest app with the Google
         | Home app. I stopped following Nest after that because I sold
         | all the Nest stuff I owned and replaced it all with HomeKit.
         | 
         | It's astounding how Google keeps doing this, and its
         | shareholders seem to go along with it. I agree, given their
          | track record, it's hard to be optimistic about anything Google
         | slaps their name in front of.
        
         | 1024core wrote:
         | > It's tough for me to compare Google DeepMind to OpenAI GPT4.
         | 
         | You are comparing an organization to a DNN model? It would be
         | tough for anybody.
        
           | SeanAnderson wrote:
           | Yeah, fair, the way I expressed myself sounded stupid. What I
           | meant to say was something like: "I don't believe that
           | DeepMind is openly making use of LLM technologies. They're
           | known for their neural networks operating at a pixel-level
           | rather than a token-level. I don't know which of these
           | approaches has more long-term commercial viability."
        
             | whimsicalism wrote:
             | Google Brain is known for their LLMs
        
               | SeanAnderson wrote:
               | Okay, thanks! I need to read up more on Google Brain.
        
         | [deleted]
        
         | neel8986 wrote:
         | >> Overall, I am not convinced this will be massively
         | beneficial. I don't trust Google's ability to execute at scale
         | in this area.
         | 
          | Yes, the team which literally created the Transformer and
          | almost all the important open research, including BERT, T5,
          | Imagen, RLHF, and ViT, doesn't have the ability to execute
          | on AI /s. Tell me one innovation OpenAI brought into the
          | field. They are good at execution, but I haven't seen
          | anything novel coming out of them.
        
           | why_only_15 wrote:
           | 7/8 of the transformer authors are gone, BERT author is at
           | OAI, two first authors of T5 are gone, imagen team left to
           | make their own startup, etc. etc.
        
           | uptownfunk wrote:
           | This is I think a caricature of how engineers think.
           | 
           | Yes yes that's right the algorithm is the most important
           | part.
        
           | galactus wrote:
            | Because of course Xerox PARC, which literally invented the
            | GUI, the desktop computer, the mouse, freaking Ethernet,
            | etc., executed the commercialization of all their
            | innovations flawlessly...
        
           | throwntoday wrote:
            | There are plenty of products built on top of tools and
            | frameworks that are worth far more than the underlying
            | technology will ever be. OpenAI is creating products;
            | DeepMind was creating tools.
            | 
            | It's not a matter of skill as much as objective. And
            | DeepMind would still be starting from zero if they decided
            | to pivot to products.
        
             | user3939382 wrote:
              | Look at SIP vs FaceTime. We already had SIP AV calls
              | forever, but few ever heard of it.
        
               | ioblomov wrote:
               | I certainly haven't! But the canonical example would be
               | Xerox's Alto vs the Macintosh.
        
           | [deleted]
        
           | ianbutler wrote:
           | Being able to produce research is a very different skill from
           | being able to produce a very successful product. We have not
            | seen Google do that very successfully for over a decade.
        
           | stingraycharles wrote:
           | I think in this context "execute" implies "create traction
           | with a real-world product". Given that even politicians and
           | comedy shows are talking about ChatGPT, I think it's fair to
           | acknowledge that Google is lacking in this area.
        
           | Takennickname wrote:
           | Are you implying that inventing something is the same as
           | being able to bring it to market?
        
           | mlinsey wrote:
           | Drawing a bright line between research vs products and
           | considering only the former to be "innovation" is a way to
           | become quickly irrelevant.
        
             | bushbaba wrote:
              | +1. Bell Labs said hi.
        
             | neel8986 wrote:
              | Products based on no fundamental innovation are also a
              | path to irrelevance. If there were even something
              | remotely defensible in GPTs, then OpenAI would not have
              | sold 50% of the company for $10B. It's only a matter of
              | time before a large amount of human-in-the-loop work
              | brings any large transformer model into the same space,
              | as recent models like Alpaca and Vicuna are showing. The
              | only thing this has done is ensure that no lab will
              | open-source any major breakthroughs anymore.
        
               | dopamean wrote:
               | These are capitalist enterprises here. I'd argue that
               | product is almost all that matters. Sure someone has to
               | innovate but the final product that can be sold is what
               | keeps people and companies relevant.
        
               | ZephyrBlu wrote:
               | What do you mean? Most VC-backed startups sell far _more_
               | than 50% of their company for far _less_ than 10B.
        
               | neel8986 wrote:
                | This is not a typical VC-backed company. According to
                | the HN crowd, this is the one company who can execute
                | on AI and challenge Google and all the other
                | trillion-dollar AI labs. In my opinion they themselves
                | are aware of the fact that they are a one-trick pony.
                | Given how astute a VC Sam Altman is, if there were
                | anything remotely innovative and defensible about the
                | product, they would never have done that.
        
               | hot_gril wrote:
               | > Product based on no fundamental innovation is also a
               | path to irrelevance
               | 
               | Microsoft has been doing, like, negative innovation and
               | is still relevant.
        
           | jerpint wrote:
              | GPT, CLIP, DALL-E, RLHF: these are all novel.
        
             | qumpis wrote:
              | RLHF wasn't introduced by OpenAI. And GPT is a pretty
              | standard transformer, no? Yes, they did it at scale, and
              | it speaks volumes about their production skills, but OP
              | was asking about research.
        
           | WanderPanda wrote:
           | People at OpenAI came up with PPO, arguably the most used
           | deep RL algorithm
        
           | avereveard wrote:
            | Well, what better demonstration that engineering without
            | business vision is sterile?
            | 
            | While PaLM demolished benchmark scores, OpenAI took the
            | world by storm with a chat tune of a sizeable but not
            | unwieldy model.
        
           | zamnos wrote:
           | Well, I've heard a lot about a ChatGPT thing in the news,
           | isn't that made by OpenAI?
        
           | jpeg_hero wrote:
           | Have you tried Bard?
        
           | hungryforcodes wrote:
           | And yet no one is talking about Google AI. None of it is a
            | household name.
           | 
           | So OpenAI it is.
        
             | querez wrote:
             | DeepMind clearly is a household name. Think of AlphaGo or
             | AlphaFold, those were legendary. Google Brain as well is a
             | household name. Think of the Transformer, or BERT. Those
             | are legendary, as well.
        
               | iknowstuff wrote:
               | I'm sorry, but no. DeepMind might be well known to AI
               | nerds, but it's not a household name the same way
               | ChatGPT/OpenAI has become.
        
           | tempest_ wrote:
            | In the grand scheme, and from the outside looking in, being
            | `good at execution` might be the most important thing.
            | 
            | There were plenty of touch screen phones before the iPhone.
        
             | neel8986 wrote:
              | The outside world has only looked at AI innovation in
              | recent times, forgetting the entire journey of the last
              | decade. If there were any remotely defensible technology
              | in OpenAI, they wouldn't have sold 50% of their company
              | for $10B.
        
               | KyeRussell wrote:
               | Yes, we saw your other comment stating the same thing.
               | 
                | You are doing a whole lot of tea-leaf reading with
               | basically zero visibility, which I can't really reconcile
               | with how absolute you're being with your language.
        
           | dragonwriter wrote:
           | > Yes the team which literally created transformer and almost
           | all the important open research including Bert, T5, imagen,
           | RLHF, ViT don't have the ability to execute on AI /s.
           | 
           | This, but non-sarcastically. Google has spectacularly, so
           | far, failed to execute on products (even of the "selling
           | shovels" kind, much less end-user products) for generative
           | AI, despite both having lots of consumer products to which it
           | is naturally adaptable _and_ a lot of the fundamental
           | research work in generative AI.
           | 
           | The best explanation is that they actually are,
           | institutionally and structurally, bad at execution in this
           | domain, because they have all the pieces and incentives that
           | rule out most of the other potential explanations for that.
           | 
           | > OpenAI bought into the field. They are good at execution
           | but i havent seen anything novel coming out of them.
           | 
           | Right, OpenAI is good at execution (at least, when it comes
           | to selling-shovels tools, I don't see a lot of evidence
           | beyond that yet), whereas Google is, to all current evidence,
           | _not_ good at execution in this space.
        
             | thefourthchime wrote:
             | Numerous individuals have since transitioned away from
             | Google, with reports suggesting their growing
             | dissatisfaction as the company appeared indecisive about
             | utilizing their technological innovations effectively.
             | 
             | Moreover, it has been quite some time since Google
             | successfully developed and sustained a high-quality product
             | without ultimately discontinuing it. The organizational
             | structure at Google seems to inadvertently hinder the
             | creation of exceptional products, exemplifying Conway's Law
             | in practice.
             | 
             | Read more about this topic here:
             | https://www.wsj.com/articles/google-ai-chatbot-bard-
             | chatgpt-...
        
             | nostrademons wrote:
             | They're getting Innovator's Dilemma'd, the same way that
             | Bell Labs, DEC, and Xerox did. When you have an
             | exceptionally profitable monopoly, it biases every
             | executive's decision-making toward caution. Things are
             | good; you don't want to upset the golden goose by making
             | any radical moves; and so when your researchers come out
             | with something revolutionary and different you bury it,
             | maybe let them publish a few papers, but certainly don't
             | let it go to market.
             | 
             | Then somebody else reads the papers, decides to execute on
             | it, and hires all the researchers who are frustrated at
             | discovering all this cool stuff but never seeing it launch.
        
               | spunker540 wrote:
               | Are researchers actually frustrated to never see it
               | launch, or are they mostly focused on publishing papers?
               | 
               | I thought OpenAI's unique advantage over many big tech
               | companies is that they've somehow figured out how to fast
               | track research into product, or have researchers much
               | more willing to worry about "production".
        
               | dmix wrote:
               | The typical solution to this (assuming there is one
               | internally) is setting up a sub-company and keeping the
               | team isolated from the parent company aka
               | "intrapenuership" but also keeping them well resourced by
               | the parent.
               | 
               | It seems like that's what they were doing with DeepMind
               | for the last decade. But it's also possible DeepMind as
               | an institution lacked the pressure/product
               | sense/leadership to produce consumable products/services.
               | Maybe their instincts were more centered around R&D and
               | being isolated left them somewhat directionless?
               | 
               | So now that AI suddenly really matters as a business, not
               | just some indefinite future potential, Google wants to
               | bring them inside.
               | 
               | They could have created a 3rd entity, their own version
               | of OpenAI, combining DeepMind with some Google
               | management/teams and other acquisitions and spinning it
               | off semi-independently. But this play basically _has_ to
               | be from Google itself, for their own reputation's sake -
               | maybe not for practicality's sake, but politically/image-
               | wise.
        
               | pclmulqdq wrote:
               | The problem with the intrapreneurship idea is that it's
               | really hard to beat desperation as a motivator. I have
               | seen people behave very differently in the context of a
               | startup vs a corporate research lab thanks to this
               | dynamic. Some people thrive in the corporate R&D
               | environment, but the innovator's dilemma eventually gets
               | to their managers.
               | 
               | Cisco has done a great job balancing this, actually -
               | they keep contact with engineers who leave to do
               | startups, and then acquire their companies if they become
               | successful enough to prove the product.
        
               | majani wrote:
               | In today's age of multimillion dollar seed rounds, I
               | don't think there's much difference between a buzzy
               | startup and a corporate R&D department
        
               | nprateem wrote:
               | Lack of major owner equity basically means few
               | intrapreneur efforts will succeed unless the 'founder'
               | really couldn't have succeeded without the daddy
               | company.
        
               | bugglebeetle wrote:
               | > But it's also possible DeepMind as an institution
               | lacked the pressure/product sense/leadership to produce
               | consumable products/services. Maybe their instincts were
               | more centered around R&D and being isolated left them
               | somewhat directionless?
               | 
               | It seems like this is more a Google problem than a
               | DeepMind problem though, no? Google created one of the
               | most successful R&D labs for ML/AI research the world has
               | ever known, then failed to have their other business
               | units capitalize on that success. OpenAI observed this
               | gap and swooped in to profit off all of their research
               | outputs (with backing from Microsoft).
               | 
               | IMO what they're doing here is doubling down on their
               | mistakes: instead of disciplining their other business
               | units for failing to take advantage of this research,
               | they're forcing their most productive research team to
               | assume responsibility and correct for those failures. I
               | expect this will go about as well as any other instance
               | of subjecting a bunch of research scientists to internal
               | political struggles and market discipline, i.e. very
               | poorly.
        
               | nostrademons wrote:
               | Yeah. It doesn't really work all that well. Xerox tried
               | it with Xerox PARC, Digital with Western Digital, AT&T
               | with Bell Labs, Yahoo with Yahoo Brickhouse, IBM with
               | their PC division, Google with Google X & Alphabet &
               | DeepMind, etc.
               | 
               | Being hungry and scrappy seems to be a necessary
               | precondition for bringing innovative products to market.
               | If you don't naturally come from hungry & scrappy
               | conditions (eg. Gates, Zuckerberg, Bezos, PG), being in
               | an environment where you're surrounded by hungry &
               | scrappy people seems to be necessary.
               | 
               | For that matter, a number of extremely well-resourced
               | startups (eg Color, Juicero, WebVan, Secret, Pets.com,
               | Theranos, WeWork) have failed in spectacular ways. Being
               | well-resourced seems to be an anti-success criterion
               | even for independent companies.
        
               | davidthewatson wrote:
               | That may have been true in the 70's and 80's. However, I
               | worked for a 2000 person (startup) software company in
               | the 90's that was acquired at 1.8B, another 4000 person
               | (startup) software company in the 90's that was acquired
               | at 3.4B, and then a few years ago, the acquirer of both
               | was itself acquired for 18B.
               | 
               | I survived ALL the layoffs somehow. Boots on the ground
               | agrees with "doesn't really work all that well" but the
               | people collecting rents keep collecting. Given their
               | size, all of these received significant DOJ reviews,
               | though the only detail I remember is basketball-court-
               | sized rooms filled with printed paper for the
               | depositions. I'm sure
               | they burned down the Amazon to print all that legalese,
               | speaking of scaling problems.
        
               | pneumonic wrote:
               | > Digital with Western Digital
               | 
               | Digital (DEC) had no substantial connection with Western
               | Digital; see
               | https://en.wikipedia.org/wiki/Western_Digital#History
        
               | nostrademons wrote:
               | I got the name wrong; officially it was Digital's Western
               | Research Lab [1], hence colloquially "Western Digital".
               | 
               | [1] https://www.computerhistory.org/collections/catalog/1
               | 0275038...
        
               | JumpCrisscross wrote:
               | > _getting Innovator's Dilemma'd_
               | 
               | They're also paying for their product managers'
               | cancellation culture. (Sorry.) I'm seeing a lot of AI
               | pitch decks; none suggest trusting Google. That saps not
               | only network effects, but what I'll term earned research:
               | work done by others on your product. Google pays for all
               | its research and promotion. OpenAI does not.
        
             | tomComb wrote:
             | I'm puzzled that stuff like AlphaFold counts for nothing
             | in this discussion (having just browsed through most of
             | it).
             | 
             | I saw quotes from independent scientists referring to it as
             | the greatest breakthrough of their lifetime, and I saw
             | similarly strong language used in regard to the
             | potential for good of AlphaFold as a product.
             | 
             | So they gave it away, but it is still a product they
             | followed through on and continue to support.
             | 
             | Was it wrong of them to give it away, and right that
             | Microsoft's primary intent with their OpenAI technology
             | seems to be to provoke an arms race with Google?
        
               | djtango wrote:
               | Who cares about protein folding when a hyped up ELIZA can
               | confidently tell you lies
               | 
               | /s
        
               | ramraj07 wrote:
               | Alpha Fold is a game changer, but nowhere near the game
               | changer ChatGPT(4) is, even if ChatGPT was only available
               | for the subset of scientists that benefit from Alpha
               | Fold. We are literally arguing semantics over whether
               | this is AGI, and you're comparing it to a bespoke ML
               | model that solves a highly specific domain problem (as
               | unsolvable and impressive as it was).
        
               | adammarples wrote:
               | The domain is the domain of protein structure, something
               | which potentially has gigantic applications to life.
               | Predicting proteins may yet prove more useful than
               | predicting text.
        
               | visarga wrote:
               | Could be interesting to correlate DNA with text produced
               | by people. Both are self-replicating, self-evolving
               | languages.
        
               | theragra wrote:
               | If so, where are the applications of this? Is it too early?
        
               | misnome wrote:
               | > We are literally arguing semantics over whether this is AGI,
               | 
               | And if it isn't? Literally every single argument I've
               | seen towards this being AGI is "We don't know at all how
               | intelligence works, so let's say that this is it!!!!!"
               | 
               | > nowhere near the game changer ChatGPT(4) is, even if
               | ChatGPT was only available for the subset of scientists
               | that benefit from Alpha Fold
               | 
               | This is utter nonsense. For anyone who actually knows a
               | field, ChatGPT generates unhelpful, plausible-looking
               | nonsense. Conferences are putting up ChatGPT answers
               | about their fields to laugh at because of how
               | misleadingly wrong they are.
               | 
               | This is absolutely okay, because it can be a useful tool
               | without being the singularity. I'm sure that in a couple
               | of years' time, most of what ChatGPT achieves will be in
               | line with most of the tech industry advances in the past
               | decade - pushing the bottom out of the labor market and
               | actively making the lives of the poorest worse in order
               | to line their own pockets.
               | 
               | I really wish people would stop projecting hopes and
               | wishes on top of breathless marketing.
        
               | rockinghigh wrote:
               | > ChatGPT generates unhelpful, plausible-looking
               | nonsense.
               | 
               | I use ChatGPT daily to generate code in multiple
               | languages. Not only does it generate complex code, but it
               | can explain it and improve it when prompted to do so.
               | It's mind blowing.
        
               | BlackSwanMan wrote:
               | GPT4 can pass the neurosurgical medical boards; most
               | of the people laughing at it are typically too dumb to
               | note the difference between 3.5 and 4.
               | 
               | >pushing the bottom out of the labor market and actively
               | making the lives of the poorest worse in order to line
               | their own pockets.
               | 
               | This makes zero sense. GPT4 has little effect on a
               | janitor or truck driver. It doesn't pick fruit, or wash
               | cars.
        
               | quickthrower2 wrote:
               | Exams are designed to be challenging to humans because
               | most of us don't have photographic memories or RAM-based
               | memory, so passing the test is a good predictor of
               | knowing your stuff, i.e. deep comprehension.
               | 
               | Making GPT sit it is like getting someone with no
               | knowledge but a computer full of past questions and
               | answers and a search button to sit the exam. It has
               | metaphorically written its answers on its arm.
        
               | entropicdrifter wrote:
               | This is essentially true. I explained it to my friends
               | like this:
               | 
               | It _knows_ a lot of stuff, but it can't do much
               | _thinking_, so the minute your problem and its solution
               | are far enough off the well-trodden path, its logic
               | falls apart. Likewise, it's not especially good at
               | math. It's great at _understanding your question_ and
               | _replying with a good plain-English answer_, but it's
               | not actually _thinking_.
        
               | pclmulqdq wrote:
               | FWIW, as a non-pathologist with a pathologist for a
               | father, I can almost pass the pathology boards when taken
               | as a test in isolation. Most of these tests are _very
               | easy_ for professionals in their fields, and are just a
               | Jacksonian barrier to entry. Being allowed to sit for the
               | test is the hard part, not the test itself.
               | 
               | As far as I know, the exception to this is the bar exam,
               | which GPT-4 can also pass, but that exam plays into
               | GPT-4's strengths much more than other professional
               | exams.
        
               | Quarrel wrote:
               | > the exception to this is the bar exam
               | 
               | FWIW, this is more true for CA than most states.
        
               | astrange wrote:
               | > I'm sure that in a couple of years' time, most of what
               | ChatGPT achieves will be in line with most of the tech
               | industry advances in the past decade - pushing the bottom
               | out of the labor market and actively making the lives of
               | the poorest worse in order to line their own pockets
               | 
               | This is not what any of the US economic stats have looked
               | like in the last decade.
               | 
               | Especially since 2019, the poorest Americans are the only
               | people whose incomes have gone up!
        
               | nicetryguy wrote:
               | > ChatGPT generates unhelpful, plausible-looking
               | nonsense.
               | 
               | Absolutely not! I created a powershell script for
               | converting one ASM label format to another for retro game
               | development and i used ChatGPT to write it. Now, it
               | fumbled some of the basic program logic, however, it
               | absolutely nailed all of the specific regex and obtuse
               | powershell commands that i needed and that i merely
               | described to it in plain English.
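The kind of conversion described above might look like this Python sketch. The comment doesn't specify the two label formats (or the PowerShell involved), so the formats here, and the `convert_labels` helper, are hypothetical illustrations of the regex work:

```python
import re

# Hypothetical example: convert ca65-style local labels ("@loop:")
# into "+"-prefixed ones, both where they are defined and where they
# are referenced in instruction operands.
def convert_labels(src: str) -> str:
    # Rewrite label definitions at the start of a line.
    out = re.sub(r"^@(\w+):", r"+\1:", src, flags=re.MULTILINE)
    # Rewrite references to those labels after whitespace or a comma.
    return re.sub(r"(?<=[\s,])@(\w+)\b", r"+\1", out)

print(convert_labels("@loop:\n    dex\n    bne @loop\n"))
```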
               | 
               | It essentially aced the "hard parts" of the script and i
               | was able to take what it generated and make it fit my
               | needs perfectly with some minor tweaking. The end result
               | was far cleaner and far beyond what i would have been
               | able to write myself, all in a fraction of the time. This
               | ain't no breathless marketing dude: this thing is the
               | real deal.
               | 
               | ChatGPT is an extremely powerful tool and an absolute
               | game changer for development. Just because it is
               | imperfect and needs a bit of hand holding (which it may
               | not soon), do not underestimate it, and do not discount
               | the idea that it may become an absolute industry
               | disrupter in the painfully near future. I'm excited
               | ...and scared
        
               | robotresearcher wrote:
               | >> ChatGPT generates unhelpful, plausible-looking
               | nonsense.
               | 
               | > Absolutely not!
               | 
               | It does, quite often. Not _only_ that, as you describe.
               | But it does.
               | 
               | For example, I asked it what my most cited paper is, and
               | it made up a plausible-sounding but non-existent paper,
               | along with fabricated Google Scholar citation counts.
               | Totally unhelpful.
               | 
               | It also can produce very useful things.
        
               | actionfromafar wrote:
               | I find it's better at really mainstream things. The web
               | is riddled with Powershell examples.
        
               | r3trohack3r wrote:
               | Your experience and my experience do not align.
               | 
               | I asked GPT-4 to give me a POSIX compliant C port of
               | dirbuster. It spit one out with instructions for
               | compiling it.
               | 
               | I asked it to make it more aggressive at scanning and it
               | updated it to be multi-threaded.
               | 
               | I asked it for a word list, and it gave me the git
               | command to clone one from GitHub and the command to
               | compile the program and run the output with the word
               | list.
               | 
               | I then told it that the HTTP service I was scanning
               | always returned 200 status=ok instead of a 404 and asked
               | it for a patch file. It generated that and gave me the
               | instructions for applying it to the program.
               | 
               | There was a bug I had to fix: word lists aren't prefixed
               | with /. Other than that one-character fix, GPT-4 wrote a
               | C program that used an open source word list to scan the
               | HTTP service running on the television in my living room
               | for routes, and found the /pong route.
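A rough sketch of that scanning logic, in Python rather than the C the commenter describes (names like `scan_routes` and the `status=ok` marker handling are assumptions based on the anecdote), including the one-character word-list fix and the 200-instead-of-404 workaround:

```python
import urllib.request

def candidate_url(base_url: str, word: str) -> str:
    # Word-list entries aren't prefixed with "/" (the one-character bug
    # mentioned above), so join base and word with exactly one slash.
    return base_url.rstrip("/") + "/" + word.lstrip("/")

def scan_routes(base_url, words, not_found_marker="status=ok"):
    """Probe candidate routes dirbuster-style. The service described
    above answers 200 for unknown routes, so any body containing
    `not_found_marker` counts as a miss instead of relying on 404s."""
    found = []
    for word in words:
        try:
            with urllib.request.urlopen(candidate_url(base_url, word),
                                        timeout=5) as resp:
                body = resp.read().decode(errors="replace")
                if not_found_marker not in body:
                    found.append(word)
        except OSError:
            pass  # unreachable host, timeout, HTTP errors, etc.
    return found
```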
               | 
               | This week it's written 100% of the API code that takes a
               | CRUD based REST API and maps it to and from SQL queries
               | for me on a cloudflare worker. I give it the method
               | signature and the problem statement, it gives me the
               | code, and I copy and paste.
               | 
               | If you're laughing this thing off as generating unhelpful
               | nonsense, you're going to get blindsided in the next few
               | years as GPT gets wired into the workflows at every layer
               | of your stack.
               | 
               | > pushing the bottom out of the labor market and actively
               | making the lives of the poorest worse in order to line
               | their own pockets.
               | 
               | I'm in a BNI group and a majority of these blue collar
               | workers have very little to worry about with GPT right
               | now. Until Boston Dynamics gets its stuff together and
               | the robots can do drywalling and plumbing, I'm not sure I
               | agree with your take. This isn't coming for the "poorest"
               | among us. This is coming for the middle class. From brand
               | consultants and accountants to software engineers and
               | advertisers.
               | 
               | Software engineers with GPT are about to replace software
               | engineers without GPT. Accountants with GPT are about to
               | replace accountants without GPT.
               | 
               | > Literally every single argument I've seen towards this
               | being AGI is
               | 
               | Here is one: it can simultaneously pass the bar exam,
               | port dirbuster to POSIX compliant C, give me a list of
               | competing brands for conducting a market analysis, get
               | into deep philosophical debates, and help me file my
               | taxes.
               | 
               | It can do all of this simultaneously. I can't find a
               | human capable of the simultaneous breadth and depth of
               | intelligence that ChatGPT exhibits. You can find someone
               | in the upper 90th percentile of any profession and show
               | that they can outcompete GPT4. But you can't take that
               | same person and ask them to outcompete someone in the
               | bottom 50th percentile of 4 other fields with much
               | success.
               | 
               | Artificial = machine: check. Intelligence = exhibits
               | Nth percentile intelligence in a single field: check.
               | General = exhibits Nth percentile intelligence in more
               | than one field: check.
               | 
               | This is AGI; now we are nit-picking. It's here.
        
               | l33tman wrote:
               | Maybe it's heavily biased towards programming and
               | computing questions? I've tested GPT-4 on numerous
               | physics problems and it fails spectacularly at almost all of
               | them. It starts to hallucinate egregious stuff that's
               | completely false, misrepresents articles it tries to
               | quote as references, etc. It's impressive as a glorified
               | search engine in those cases but can't at all be trusted
               | to explain most things unless they're the most canonical
               | curriculum questions.
               | 
               | This extreme difficulty in discerning what it
               | hallucinates and what is "true" is its most obvious
               | problem. I guess it can be fixed somehow, but right
               | now it has to be heavily fact-checked manually.
               | 
               | It does this for computing questions as well, but there
               | is some selection bias, so people tend to post the
               | success stories and not the failures. However, it's
               | less dangerous if it's in computing, as you'll notice
               | it immediately, so it may require less manual labour
               | to keep it in check.
        
               | visarga wrote:
               | > This is AGI, now we are nit-picking. It's here.
               | 
               | Hahaha, if you want nit-picking: all the language tasks
               | ChatGPT is good at are strictly human tasks, not general
               | tasks. Human tasks are all related to keeping humans
               | alive and making more of us; they don't span the whole
               | spectrum of possible tasks where intelligence could
               | exist.
               | 
               | Of course inside language tasks it is as general as can
               | be, yet still needs to be placed inside a more complex
               | system with tools to improve accuracy, LLM alone is like
               | brain alone - not that great at everything.
        
               | ChatGTP wrote:
               | On the other hand if you browse around the web you will
               | find various implementations of dirbuster, probably in
               | C and certainly in C++, which are multi-threaded. It's
               | not to take away from your experience, but without
               | knowing what's in the training set, it may have already
               | been exposed to what you asked for, even several times
               | over.
               | 
               | I have a feeling they had access to a lot of code on
               | GH; who knows how much code they actually accessed. The
               | conspiracy theorist in me wonders if MS didn't just
               | provide access to public and private code to train on.
               | They wouldn't have even told OpenAI, just said, "here's
               | some nice data". It's all secret and we can't see the
               | model's inputs, so I'll leave it at that. I mean,
               | they've obviously prepared the data for Copilot, so it
               | was there waiting to be trained on.
               | 
               | So yeah, I feel your enthusiasm, but if you think
               | about it a little more, is it so hard to imagine that
               | what you saw was actually rather simple? Every time I
               | write code I feel kind of depressed, because I know
               | almost certainly someone has already written the same
               | thing, and that it's sitting in GitHub or somewhere
               | else and I'm wasting my time.
               | 
               | ChatGPT just takes away the knowing where to find
               | something (it's already seen almost everything the
               | average person can think of) and gives it to you
               | directly. Have you never thought of this before? Like,
               | you knew all the code you wanted was already there
               | somewhere, but you just didn't have an interface to get
               | to it? I've thought about this for quite a while, and I
               | knew there would be big-data people doing experiments
               | who could see that probably 80-90% of code on GitHub is
               | pretty much identical.
               | 
               | Nothing is magic, right ?
        
               | r3trohack3r wrote:
               | I guess my response to that is: so what?
               | 
               | Regardless of the truthiness of your assertion, that
               | general description of work is a majority of my
               | profession. Novel contribution is a very small percentage
               | of the work I do. I'm looking forward to that shifting
               | significantly in the coming years.
               | 
               | It doesn't matter if the "POSIX compliant C" version of
               | dirbuster was in its training set. Or if my cloudflare
               | wrangler JS API <-> Neon.tech SQL database implementation
               | was. (Copyright arguments notwithstanding, for sure.)
               | 
               | What matters is that it did 100% of the work of
               | generating the code. All I did was guide it, code review
               | it, and ship it. It felt like I had a team of junior
               | engineers working in the trenches on the problem for me
               | and I was getting to have fun playing at a level up the
               | stack. It can generate 100 LoC faster than I can type it,
               | and I can code review it as it generates it. I can fix
               | the code up in the editor and paste it back in with the
               | next prompt for another round of work.
               | 
               | This is with the current model.
               | 
               | Unlike GP's assertion that people are falling for
               | marketing, I actually don't know what OpenAI's marketing
               | is. 100% of my belief in GPT-4 comes from it doing my
               | real day-to-day tasks along side me. I don't need a
               | crystal ball to predict what AI is going to do, I'm
               | already using it as a partner in writing my code. It's
               | already adept enough at my profession to pair-
               | program through problems with me.
               | 
               | This week I had a particularly hairy problem I needed to
               | solve, and I was already behind schedule on the project.
               | ChatGPT and I knocked it out in ~30 minutes. I suspect,
               | if I were alone, it would have taken me several hours.
               | The velocity boost came from: ChatGPT correctly
               | identifying it as a binpacking problem (which I had
               | missed), listing out several algorithms for approaching
               | binpacking, and giving me an initial (incorrect) first
               | implementation. I was able to go back and forth on that
               | response and get the rough constraints figured out for a
               | solution, ask it to generate that solution, and then
               | clean it up in my editor.
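
The bin-packing flow described above is easy to make concrete. Here's a sketch of first-fit decreasing, one of the classic heuristics an assistant might list for this problem; the item sizes and capacity are made-up illustration values, not the commenter's actual solution:

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into (approximately) as few bins as possible.

    Sort items largest-first, then place each item into the first bin
    with room for it, opening a new bin only when none fits.
    """
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            # no existing bin had room; open a new one
            bins.append([size])
    return bins

# Illustrative run: 24 units of items into bins of capacity 10.
packed = first_fit_decreasing([7, 5, 4, 3, 2, 2, 1], capacity=10)
print(len(packed))  # 3 bins, matching the lower bound ceil(24/10)
```

First-fit decreasing is a simple approximation (guaranteed within roughly 11/9 of optimal), which is usually the right trade-off for a "hairy problem on a deadline" like the one described.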
               | 
               | The flow I'm using today is already a 100% increase in
               | effective "workforce" for a software organization with a
               | $20 per month per employee subscription to ChatGPT.
               | Everyone gets a real-time, always available, pair
               | programmer.
        
               | Barrin92 wrote:
               | >We are literally arguing semantics if this is AGI
               | 
               | It isn't and nobody with any experience in the field
               | believes this. This is the Alexa / IBM Watson syndrome
               | all over again, people are obsessed with natural language
               | because it's relatable and it grabs the attention of
               | laypeople.
               | 
               | Protein folding is a major scientific breakthrough with
               | big implications in biology. People pay attention to
               | ChatGPT because it recites the constitution in pirate
               | English.
        
               | RandomLensman wrote:
               | ChatGPT cannot reason from or apply its knowledge - it is
               | nowhere near AGI.
               | 
               | For example, it can describe concepts like risk neutral
               | pricing and replication of derivatives but it cannot
               | apply that logic to show how to replicate something non-
               | trivial (i.e., not repeating well published things).
        
               | dragonwriter wrote:
               | > So they gave it away, but it is still a product
               | 
                | Except it's not, because they gave it away without any
                | kind of commercialization. It's possible to give something
                | away for free in some contexts and still have it be a
                | product (Stable Diffusion is doing quite a bit of that,
                | though it's very unclear if they'll be able to do it
               | sustainably), but AlphaFold doesn't seem to be an
               | example. It seems to be an example of something cool they
               | did that they had no desire to make into a product. Which
               | is great! But isn't the same as executing on product in a
               | space.
        
             | kernal wrote:
             | >This, but non-sarcastically. Google has spectacularly, so
             | far, failed to execute on products
             | 
             | Android is the biggest OS in the world
             | 
             | Chrome is the biggest browser in the world
             | 
             | Gmail is the biggest email service in the world
             | 
             | YouTube is the biggest video platform in the world
             | 
             | Google is the biggest search engine in the world
             | 
             | Google is the biggest digital advertiser in the world
             | 
             | and I'm probably missing more things they're #1 in.
             | 
             | Not bad for a company that has "spectacularly failed to
             | execute on products"
        
               | tester756 wrote:
               | I'd add Maps
        
               | dragonwriter wrote:
               | Uh, you snipped in the middle of a clause so you could
               | argue against something it didn't say.
               | 
               | Here's the whole thing (leaving out a parenthetical that
               | isn't important here):
               | 
               | "Google has spectacularly, so far, failed to execute on
               | products [...] for generative AI"
               | 
               | You listed a bunch of products in other domains, some of
               | which are the reasons why it has institutional incentives
               | _not_ to push generative AI forward, even if it also
               | stands to lose more if someone else wins in it.
        
               | TheCoelacanth wrote:
               | Which of those do you think is a product "for generative
               | AI"?
        
               | Fricken wrote:
               | The youngest of those products is 15 years old.
        
             | neel8986 wrote:
             | Generative AI at its current state is still a very new area
             | of research with many issues including hallucination, bias
              | and legal baggage. So for the first few versions we are
              | looking at many new startups like OpenAI, Stability,
              | Anthropic, etc. It is yet to be seen if any of the new breed
             | of startups actually starts to make sizeable revenue. But
             | again there is nothing defensible here unless all the major
             | labs stop publishing paper.
        
             | nullc wrote:
             | Is it just execution or did they gaze deeply into their
             | navels and convince themselves that delivering tools like
             | GPT4 would be 'unethical'?
        
             | jsnell wrote:
              | When did anyone realize that generative AI was actually
              | a product with wide consumer appeal? Or how many
             | use cases there were for it as an API service? I'd say it
             | wasn't really obvious until around Q4 last year, maybe Q3
             | at the earliest.
             | 
             | That's a pretty short time ago. So it seems that so far it
             | hasn't really been a failure to execute, but more about
             | problems with product vision or with reading the market
              | right, leading to not even _attempting_ to have actual
              | products in this space. That's definitely a problem, but
             | not one that's particularly predictive of how well they'll
             | be able to execute now that they're actually working on
             | products.
        
               | cbzoiav wrote:
               | The bigger problem is cost.
               | 
               | The hardware costs alone of running something like GPT
               | 3.5 for real time results is 6-7 figures a year. By the
               | time you scale for user numbers and add redundancy... The
               | infra needs to be doing useful work 24/7 to pay for
               | itself.
               | 
               | It's more than possible Google knows exactly what it can
               | do, but was waiting for it to be financially viable
               | before acting on that. Meanwhile Microsoft has decided to
                | throw money at it like there's no tomorrow - if they
                | corner the market and it becomes financially viable
                | before they lose that, it could pay off. That is a major
                | gamble...
        
               | nullc wrote:
               | > The hardware costs alone of running something like GPT
               | 3.5 for real time results is 6-7 figures a year.
               | 
               | Can you unpack your thinking there? Even at 5% interest
               | for ownership costs to be six figures a year you're
               | talking about millions of dollars in hardware. Inference
               | is just not that expensive, not even with gigantic
               | models.
               | 
               | To the extent that there is operating cost (e.g.
               | energy)-- that isn't generated when the system is
               | offline.
               | 
               | I don't know how big GPT 3.5 is, but I can _train_ LLaMA
               | 65B on hardware at home and it is nowhere near that
               | expensive.
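
The disagreement above comes down to carrying-cost arithmetic, which can be sketched directly. All figures here are illustrative assumptions for nullc's argument, not real GPT-3.5 numbers:

```python
# nullc's point: if ownership cost is mostly the interest carried on the
# hardware, then a six-figure annual cost implies millions in hardware.
interest_rate = 0.05          # assumed cost of capital
target_annual_cost = 100_000  # "six figures a year"

# Hardware price whose carrying cost alone hits that target:
implied_hardware_price = target_annual_cost / interest_rate
print(implied_hardware_price)  # 2000000.0 -> $2M of hardware
```

Of course this ignores depreciation, energy, and redundancy, which is cbzoiav's side of the argument; the sketch only shows why the interest component by itself requires millions in capital outlay.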
        
           | Keyframe wrote:
           | There's definitely something amiss. Maybe we're just not
           | seeing the whole picture, but Google has the best potential
            | out there still. Not only has vast and fundamental research
            | come out their door (presumably there's more), but they also
            | have their own compute resources and an up-to-date copy of
           | internet.zip and gmail.zip and youtube.zip which they can
           | train on vs what small and stale stuff (compared to Google's
           | data) OpenAI trained their stuff on (like common crawl etc.).
           | What gives, Google? Get on it!
           | 
           | edit: I forgot all about google_maps.zip / waze.gz and all
           | the juicy traffic data coming from android.. which probably
           | already relies heavily on AI
        
             | danans wrote:
             | > gmail.zip
             | 
             | Despite what people often write and believe here, the
             | access controls on PII data at Google are incredibly
             | strict. You can't just arbitrarily train on people's
             | personal data. I know, because when I was there, working on
             | search backend data mining, in order to get access to
             | _anonymized_ search and web logs, I had to sign paperwork
              | that essentially said I'd be taken to the cleaners if I
             | abused the access.
             | 
             | > What gives, Google? Get on it
             | 
             | It's a very difficult decision to intentionally destabilize
             | the space you are the leader in, for all the reasons you
             | can imagine. In a sense, Google needed someone else with
             | nothing to lose to shake up the space. How they execute in
             | the new reality is yet to be seen. The biggest challenge
             | they may have right now isn't technological, but that
             | "ChatGPT" has become a sort of brand, like Kleenex and
             | well, Google.
        
               | illiarian wrote:
               | > Despite what people often write and believe here, the
               | access controls on PII data at Google are incredibly
               | strict. You can't just arbitrarily train on people's
               | personal data.
               | 
               | And yet Google is the largest online advertiser in the
               | world. And yet, GMail used to (I don't know if it still
               | does) push ads into people's inboxes.
               | 
               | I have as much belief in their PII controls as in their
               | "Don't be evil" motto.
        
               | JeremyBanks wrote:
               | [dead]
        
               | Lacerda69 wrote:
                | I used Gmail since the beta and never saw an ad there,
               | what are you talking about?
        
               | paulkon wrote:
               | Well put, the brand awareness of ChatGPT is the biggest
               | challenge they have now.
        
               | dmix wrote:
               | Meh people would much prefer to be typing their prompts
               | into a Google search box than opening a separate GPT app.
                | I doubt the real issue here is a marketing one. Despite
               | ChatGPT's massive growth numbers the market is pretty
               | immature, it's still very much open and not yet decided.
               | 
               | Many markets had early leaders who got stomped by later
               | entrants.
        
               | emilsedgh wrote:
               | I don't think it is.
               | 
               | I'd prioritize their problems like this:
               | 
               | 1. LLM's don't have a lucrative business model that
               | Google needs.
               | 
               | 2. The quality of their language model is really lacking
               | as of now.
               | 
               | You fix 1 and 2, ChatGPT's branding is nothing. Google is
               | the biggest advertisement machine in the world and they
               | can market the hell out of their product. Just see how
               | Chrome gained ground on Firefox for example.
               | 
               | Google is still used several folds more than ChatGPT and
               | if you resolve 1 and 2, Google will make their money and
               | their users have no incentive to go to ChatGPT.
        
               | Keyframe wrote:
                | You're right on both counts.
               | 
               | However, whatever's going on inside I still strongly
               | believe in that company! Sometimes though it just feels
               | like they don't themselves.
        
             | anileated wrote:
             | Google could stop sending traffic to webmasters and pivot
             | to directly providing answers based on scraped data long,
             | long ago, but Google knew webmasters would be up in arms
             | over such a blatant bait and switch taking away their
             | traffic and revenue.
             | 
             | OpenAI subverted this by riding on the "open" part of their
             | name at first--before doing a 180-degree turn and selling
             | out to Microsoft.
        
             | arcatech wrote:
             | Google is great at technology and bad at making actual
             | products. This all makes sense to me.
        
             | narrator wrote:
             | The announcement felt cautious and political, like they are
             | running for technological ruler of the world and not a
             | company trying to make money. This is probably why they are
              | not going to get very far against their competitors
             | despite having so much potential. They care too much about
             | what the EU and governments everywhere think of them now.
             | They are no longer a profit making entity that disrupts and
             | pushes the rules. They are part of maintaining the status
             | quo.
        
             | kmeisthax wrote:
             | The difference between OpenAI and Google is that the
             | latter's ethical concerns with AI are more deeply held.
             | Google gave us the Stochastic Parrots paper[0] -
              | effectively a very long argument as to why they
              | _shouldn't_ build their own ChatGPT. OpenAI uses ethics as a
             | handwave to justify becoming a for-profit business selling
             | access to proprietary models through an API, citing the
             | ability to implement user-hostile antifeatures as a
             | _deliberate prosocial benefit_.
             | 
             | To be clear, Google _does_ use AI. They use it so heavily
              | that they've designed four generations of training
             | accelerators. All the fancy knowledge graph features used
             | to keep you from clicking anything on the SERP are powered
             | by large language models. The only thing they didn't do is
             | turn Google Search into a chatbot, at least not until
             | Microsoft and OpenAI one-upped them and Google felt
             | competitive pressure to build what they thought was
             | garbage.
             | 
             | And yes, Google's customers share that belief. Remember
             | that when Google Bard gets a fact about exoplanets wrong,
             | it's a scandal. When Bing tries to gaslight its users into
             | thinking that time stopped at the same time GPT-4's
             | training did, it's _funny_. Bing can afford to make
              | mistakes that Google can't, because nobody uses Bing if
             | they want good search results. They use Bing if they can't
             | be arsed to change the defaults[1].
             | 
             | [0] Or at least they did, then they fired the woman who
             | wrote it
             | 
             | [1] And yes that is why Microsoft really pushes Bing and
             | Edge hard in Windows.
        
               | binkHN wrote:
               | > ...nobody uses Bing if they want good search results.
               | 
               | Sadly, I think I'd argue that nobody has good search
               | results anymore. Google's results have been SEO'd to the
               | hilt and most of the results are blog spam garbage
               | nowadays.
        
               | richardw wrote:
               | OpenAI releasing imperfect products is exactly what they
               | said they would do. We need society to understand what
               | the state and risks are. The 6-month-wait shitstorm is
               | what happens when society gets the merest glimmer of the
               | potential. I applaud them for this, rather than focusing
               | on protecting their brand.
        
               | tarsinge wrote:
                | It was not some anecdotal fact that Bard got wrong; it
                | happened during their official public demo. It was a
                | "scandal" because it showed Google was indeed unprepared
                | and had no better product; not even fact-checking their
                | own demo beforehand was the cherry on top.
               | 
                | Ethics is a false excuse, because rushing that out shows
                | they never cared either. It was just PR, and their bluff
                | was called.
               | 
                | Also, I skimmed that Stochastic Parrots paper and I'm
                | unimpressed. I'm unfamiliar with the subject, but many
                | points seem unproven/political rather than scientific,
                | with a fixation on training data instead of studying the
                | emergent properties, plus many opinions, notably regarding
                | social activism. But maybe it was already discussed here
                | on HN. Edit: found here:
               | https://news.ycombinator.com/item?id=34382901
        
               | nullc wrote:
               | > The only thing they didn't do is turn Google Search
               | into a chatbot,
               | 
               | No, they turned google search into what it is now.
               | 
               | For me, trying google bard was an instant reminder of the
               | change in behavior in google search from 15 years ago to
               | today.
               | 
               | We used to have a search that you could give obscure
               | flags to Linux commands and find their documentation or
                | source code. Today we have a Google search that often
                | only tells you about how some Kardashian or recent
               | political drama is a sounds-alike with the technical term
               | that you were searching for.
               | 
                | GPT-4 has some of the same "excessively smart" failure
                | modes, but it (and GPT-3.5, for that matter) is so much
                | more useful than Bard that they're a useful addition to
                | the toolbox. Too bad the toolbox hardly includes plain
                | search anymore.
        
               | neel8986 wrote:
               | >> To be clear, Google does use AI. They use it so
               | heavily that they've designed four generations of
               | training accelerators.
               | 
                | This +100. Somehow there is a perception that chatbots
                | are the only example of AI research or product that
                | matters, and that all AI organisations' ability will be
                | judged by their ability to create chatbots.
        
               | visarga wrote:
               | LLMs are the end-game for almost all NLP and CV tasks.
               | You can freely specify the task description, input and
               | output formats, unlike discriminative models. You don't
               | need to retrain, don't need many examples, and most
               | importantly - it works on tasks the developers of the LLM
               | were not aware of at design time - "developer aware
               | generalisation". LLMs are more like new programming
               | languages than applications, pre-2020 neural nets were
               | mostly applications.
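
visarga's "developer aware generalisation" point can be sketched with a plain prompt template: the "program" is just a free-form task specification, so one model covers tasks its developers never anticipated. The format below is illustrative, not any particular API:

```python
def make_prompt(task, input_text, output_format):
    """Build a zero-shot task specification for an LLM: the task,
    input, and output format are all stated in free-form text."""
    return (
        f"Task: {task}\n"
        f"Output format: {output_format}\n"
        f"Input: {input_text}\n"
        f"Output:"
    )

# The same "interface" handles unrelated NLP tasks with no retraining
# and no task-specific examples:
p1 = make_prompt("Classify the sentiment", "Great phone!",
                 "one word: positive or negative")
p2 = make_prompt("Extract all dates", "Shipped 2023-04-20.",
                 "JSON list of ISO dates")
print(p1)
```

A pre-2020 discriminative model, by contrast, would bake the task and label set into its architecture and training data; switching tasks meant collecting examples and retraining.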
        
           | criley2 wrote:
           | This feels like someone negging Apple in 2007 "Palm invented
           | the smartphone, what has Apple done" lol
        
             | KyeRussell wrote:
             | Even in '07, Apple had a track record for doing things
             | right, not doing things first.
             | 
             | Current-day Google churns out sterile, uninspiring
             | products, and kills them.
             | 
             | If your argument is "this company is going to act out of
             | character and do something innovative!" then...yeah, sure.
             | That's a good way to be right, sometimes. Just don't let
             | everyone see the majority of the time where you've been
             | wrong.
        
             | outside1234 wrote:
             | well in early 2007 that would have been fair feedback :)
        
           | crakenzak wrote:
            | Every single member of the research team that invented the
            | transformer architecture has left Google to join OpenAI or
            | found their own startups (character.ai, Anthropic, Cohere).
        
             | simonster wrote:
             | Nope, Llion Jones is still at Google.
        
           | kortilla wrote:
           | Producing research is not executing for a company. Executing
           | is later in the pipeline and google is failing.
           | 
           | Reread the comment you are replying to. It explicitly said
           | that the research is good.
        
           | hbn wrote:
           | > They are good at execution
           | 
           | You're reiterating their point. Yeah, Google has competent AI
           | people but that means nothing for their own success if they
           | can't execute. OpenAI has proven that.
        
           | HyprMusic wrote:
           | Proximal Policy Optimization
        
             | [deleted]
        
           | kiratp wrote:
           | > They are good at execution but i havent seen anything novel
           | coming out of them.
           | 
           | Execution is 9/10 of the battle.
        
           | sangnoir wrote:
           | I suspect recency-bias may be tripping people up: LLMs and
           | ChatGPT are not the final word in AI, and there is no reason
           | for Google to bet the farm on them.
           | 
            | _I_ wouldn't bet against Google DeepMind originating the
            | next big thing; at the very least, their odds are higher
            | than OpenAI's.
           | 
           | Edit: this may yet turn out to be a Google+ moment, where an
           | upstart spooks Google into thinking it is fighting an
           | existential battle but winds up okay after some major
           | missteps that take years to fix (YouTube comments as a real-
           | name social network. Yuck)
        
           | yttribium wrote:
           | "Sun Microsystems literally invented Java and has done a ton
           | of open research on RISC, how are they not able to execute as
           | those technologies are exploding"
        
           | gorgoiler wrote:
           | _They are good at execution..._
           | 
           | It might help to reflect on what the upsides of this have
           | been for OpenAI, re execution.
           | 
           | On the face of it, execution is often all that matters. FB v
           | myspace, AMD v Intel (eventually), Uber v Lyft, MS v Apple
           | (pre 2001), Apple v MS (post 2001) etc.
        
           | eigenvalue wrote:
           | You could say the same about Xerox in the late 70s. And they
           | conclusively showed that they couldn't execute and squandered
           | all of their amazing original research. Looking at how
           | laughably bad Bard is, Google has a long way to prove they
           | aren't Xerox 2.0 at this point. I'm amazed that Sundar hasn't
           | been pushed out yet by Larry and Sergey.
        
             | gman83 wrote:
             | This thread is full of people saying that what Xerox did
             | was some terrible mistake, but I think that it was much
             | better that they could afford to do all this research which
             | spawned a massive industry as a result than had they become
             | this massive monopoly which controlled everything.
             | 
              | If Google spends billions of its ad money doing original
             | research that spawns a new industry with thousands of
             | companies, that would seem to be a great result to me.
        
               | eigenvalue wrote:
               | That might be true on a societal level, but is small
               | solace to XRX shareholders, not to mention the many
               | researchers who contributed these brilliant creations
               | only to see them exploited by others while their own
               | company just ignored them and let them die on the vine.
        
           | moyix wrote:
           | RLHF is from Google? The reference I know of is OpenAI:
           | 
           | https://arxiv.org/pdf/2009.01325.pdf
           | 
           | CLIP also seems novel?
        
             | neel8986 wrote:
              | Original idea for the paper coming from DeepMind:
              | https://proceedings.neurips.cc/paper_files/paper/2017/file/d...
        
               | moyix wrote:
               | Seems marginal - half of those authors, including the
               | lead author, are from OpenAI!
        
               | neel8986 wrote:
                | They were at DeepMind when the research was done.
        
               | moyix wrote:
               | I'm sure you're right but the only note attached to the
               | author list is in the opposite direction - Tom B Brown
               | has an asterisk with "Work done while at OpenAI".
        
           | hot_gril wrote:
           | I would go on about how much execution matters, but it's not
           | just about execution, cause ChatGPT is actually a better AI
           | than anything Google has put out so far. So unless Google is
           | hiding something amazing...
        
           | pigscantfly wrote:
           | 95% of those people have left Google because the ethics and
           | safety teams prevented them from releasing any products based
           | on their research. We have those ex-Googlers to thank for
           | ChatGPT, Character.ai, Inceptive, ... which you'll notice are
           | _not_ Google products but rather competitors.
        
             | panarky wrote:
             | I, for one, appreciate a megacorp purposely sacrificing
             | revenue when they're not confident that the negative
             | externalities of that revenue would be minimized.
             | 
             | Google could have built a search engine where paid results
             | were indistinguishable from organic results, but the
             | negative externalities of that were too great.
             | 
             | Google could have remained in China, but the negative
             | externalities of developing and managing a censorship
             | engine were too great.
             | 
             | Google could have productized AI before the risks were
             | controlled, but they sacrificed revenue and first-mover
             | advantage to be more responsible, and to protect their
             | reputation.
             | 
             | This behavior is so rare, it's hard to think of another
             | megacorp that would do that.
             | 
             | Google's far from perfect, they've made ethical lapses,
             | which their competitors love to yell and scream about, but
             | their competitors wouldn't hold up well under the same
             | scrutiny.
        
               | hn_throwaway_99 wrote:
               | > Google could have built a search engine where paid
               | results were indistinguishable from organic results, but
               | the negative externalities of that were too great.
               | 
               | Have you not used Google search in the past 5 years?
        
               | panarky wrote:
               | When it says "sponsored" in bold text, prominently
               | displayed right at the top of the result, you know it's
               | an ad.
               | 
               | Before Google, search engines didn't do this. Paid
               | results were indistinguishable from organic results.
               | 
               | Here's an example --> https://imgur.com/a/bSJTBeD
               | 
               | If you have a counter-example, please share!
        
               | illiarian wrote:
               | > When it says "sponsored" in bold text, prominently
               | displayed right at the top of the result, you know it's
               | an ad.
               | 
               | Prominently? Bold text?
               | 
               | Here's how they repeatedly made ads indistinguishable
               | from search results:
                | https://atechnocratblog.wordpress.com/2016/07/26/color-fade-...
               | 
                | Or this:
                | https://twitter.com/garybernhardt/status/1648496387640938496
               | 
               | To quote from the above, here's what they said in the
               | beginning: "we expect that advertising funded search
               | engines will be inherently biased towards the advertisers
               | and away from the needs of the consumers"
        
               | hn_throwaway_99 wrote:
               | It sounds like you were either not around or didn't use
               | Google in the early 00s. Back then, there _was_ a very
               | clear, bright color difference between ads and organic
               | search results: a yellow bar at the top with at most two
               | ads, and a side bar. But organic results were easy to
               | identify and took up the majority of screen real estate.
               | 
               | Now, when I search any even slightly remotely commercial
               | search term on mobile, about the entire first page and a
               | half of results are ads. Yes, they're identified with a
               | "Sponsored" message, but as you can see from the
               | "evolution" link the other commenter replied, this was
               | obviously done to make the visual treatment between ads
               | and organic results less clear.
               | 
               | The reason I'm thrilled about Google finally getting
               | competition in their bread-and-butter is _not_ because I
               | want them to fail, but I want them to stop sucking so
               | bad. For about the past 10 or so years Google has gotten
               | so comfy with their monopoly position that the vast
               | majority of their main search updates have been extremely
               | hostile to _both_ end users and their advertisers as
               | Google continually demands more and more of  "the Google
               | tax" by pushing organic results down the page.
               | 
               | In the meantime I've switched to Bing, not because I
               | think Microsoft is so much better, but because I
               | desperately want multiple search alternatives.
               | 
               | Edit: Great article from a couple years ago about how
               | Google tried to make ads _even more_ indistinguishable
               | from organic results:
               | https://www.theverge.com/tldr/2020/1/23/21078343/google-
               | ad-d...
        
           | richardw wrote:
           | That sounds like Google is the Xerox PARC of AI. They
           | still need to execute.
        
           | candiodari wrote:
           | The original transformer team very much _has_ executed on
           | making successful implementations of transformers ... just
           | not for Google. Clearly something went a bit wrong at
           | Google Brain in 2017.
           | 
           | https://www.linkedin.com/in/ashish-vaswani-99892181/
           | 
           | https://www.linkedin.com/in/noam-shazeer-3b27288/
           | 
           | https://www.linkedin.com/in/nikiparmar/
           | 
           | https://www.linkedin.com/in/jakob-uszkoreit-b238b51/
           | 
           | https://www.linkedin.com/in/aidangomez/
           | 
           | https://www.linkedin.com/in/lukaszkaiser/
           | 
           | https://www.linkedin.com/in/illia-polosukhin-77b6538/
           | 
           | Only one remains at Google:
           | 
           | https://www.linkedin.com/in/llion-jones-9ab3064b
        
           | hintymad wrote:
           | > Yes the team which literally created transformer and almost
           | all the important open research including Bert, T5, imagen,
           | RLHF, ViT don't have the ability to execute on AI /s
           | 
           | Yet Google does not have a slam-dunk product despite so
           | many great research results. This looks like a gross
           | failure of the CEO, especially given that he's been
           | chanting "AI First" for the past few years.
        
         | throwntoday wrote:
         | And here I thought that Google would achieve AI supremacy
         | because of all the data they have been vacuuming for decades,
         | turns out they haven't even thought to utilize it?
         | 
         | How did they drop the ball so hard? OpenAI has been around
         | for less than a decade, and as a smaller team with fewer
         | resources it was able to make a better product.
        
           | cfeduke wrote:
           | Though this is usually how it goes - big successful companies
           | begin to bend towards regulatory capture after having their
           | period of upstart growth and disruption. They make as much
           | money as possible for shareholders on their cash cow, and
           | management culture's primary objective is to make sure
           | this is not disturbed.
           | 
           | Think about how many decades head start IBM had to perfect
           | search, but search wasn't their core competency.
           | 
           | Delivering advertisements is Google's core competency.
        
         | Xeoncross wrote:
         | Yeah, the message seems to be "Wait, we're still here!
         | Don't forget about us and we promise we'll catch up!"
         | 
         | > Announcing Google DeepMind... launched DeepMind back in 2010
         | ...
        
         | kernal wrote:
         | >Google is panicked about losing the AI race
         | 
         | What a shortsighted statement for a race that has barely
         | gotten out of the gates. But if any one company should be
         | panicking, it's OpenAI, at the thought of losing their
         | minimal lead and getting crushed by the company that
         | invented most of the technology they use, once it puts a
         | significant amount of resources behind its AI initiatives.
        
           | SeanAnderson wrote:
           | I don't find it to be that shortsighted?
           | 
           | Google Search had an outage yesterday. Google just underwent
            | its first round of layoffs ever, which definitely affects
           | internal morale and makes all employees aware of their
           | company's mortality. Google's CEO was in the news last week
           | for hiding communications while under a legal hold. Google
           | stock tanked with the rushed demo of Bard. And, even if all
           | those things weren't true, Google has continually failed to
           | establish revenue streams independent from ads and
           | continually abandons products that don't meet their
           | expectations. Consumer confidence in new Google product
           | announcements is lower than any other major tech company -
           | the default assumption is that the product will be pulled
           | months/years later.
           | 
           | Microsoft is giving their full support to OpenAI through
           | their 49% partnership. $13B investment compared to Google
           | buying DeepMind for $500M and investing $300M in Anthropic.
           | Microsoft has good working agreements with the US government,
           | a long history of unreasonable support for their flagship
           | products, clawed their way back to being one of the most
           | valuable companies in the world by finding diverse revenue
           | streams, and, frankly, comes across as the wise adult in the
           | room given they already had their day in the sun with legal
           | battles.
           | 
           | I agree completely that if there continue to be marked
           | revolutions in AI that invalidate current SOTA then those
           | innovations are likely to arise from Google's research labs,
           | but from an execution standpoint I have nothing but concerns
           | for Google. It's crazy that I feel they need a second chance
           | in the AI revolution when LLMs originated from inside their
           | org just a few years ago. And it's not like they don't feel
           | similarly - there've been countless articles about "Code Red"
           | at Google as they try to rapidly adjust their strategy around
           | AI.
           | 
            | I think OpenAI has a wider lead than people are
            | acknowledging. It's like everyone was forced to show their
           | AI-hand the last couple of months, in an attempt to appease
           | shareholders, and it seemed like a fair fight until GPT4 hit
           | the ground running. Now we're looking at agents and multi-
            | modal support on top of $200M/yr revenue when everyone else
           | has no business plan and has yet to announce any looming
           | upgrades. At a certain point, first-mover advantage
           | compounds, the foremost AI app store becomes established, and
           | people building commercial products will become entrenched.
        
         | notatoad wrote:
         | >pushing resources into DeepMind
         | 
          | is this an influx of resources, or consolidation and
          | cutbacks? i read it as google used to have two different ai
          | research teams, and now they have one fewer than they used
          | to.
         | 
         | they've shut down at least one deepmind office recently:
         | https://betakit.com/alphabet-company-deepmind-shutters-edmon...
        
       | agnosticmantis wrote:
       | I wonder if this change has any implications for the TensorFlow
        | vs. JAX situation/transition. IIRC I read that DeepMind mainly
        | used JAX, but I'm not sure about Brain. Any insights from the
       | people in the know? It seems JAX is the future, but TF dominates
       | current production stacks.
        
       | simple10 wrote:
       | > I'm sure you will have lots of questions about what this new
       | unit will look like for you.
       | 
       | Can any HN Googlers comment on what this announcement means? Is
       | this announcement just a PR move to get people to pay attention
       | to upcoming announcements? Or does it actually have deeper impact
       | to the way Google functions with internal teams?
        
         | QuercusMax wrote:
         | I'm a Googler in the Research PA (Health AI), and as far as I
         | can tell this will have ~0 impact on my life.
         | 
         | Jeff Dean gave himself a promotion and doesn't want to run an
         | org any more; aside from that, :shrug:?
        
           | simple10 wrote:
            | Ah. Maybe this is just PR cover to ease external fears
            | that the DeepMind team will leave Google.
        
           | DannyBee wrote:
           | Jeff never really wanted to run orgs anyway, so that's not
           | any different either!
        
             | usrnm wrote:
             | People who don't want to run an org just don't run one.
             | It's not like someone forced him to take all that money
        
               | DannyBee wrote:
               | ??? It's never that simple, and he didn't get more money
               | for running the org?
        
           | ra7 wrote:
           | Is Jeff Dean not running this new org? What is his new role
           | now?
        
             | QuercusMax wrote:
             | Chief Scientist, apparently.
        
         | ipnon wrote:
          | DeepMind is going to stop making models to do Minecraft
          | speedruns, and instead start making models to improve search
          | results and ad click-through rates.
        
           | simple10 wrote:
            | Lol. Makes sense. Any idea why they're queuing this up as
            | a multi-announcement PR push? Seems a bit out of character
            | for Google to tease out announcements like this.
           | 
           | My guess is they have a bigger announcement coming next week.
           | Otherwise, it seems like a bad PR move... it positions Google
           | as playing catchup in AI... which is accurate, but strange
           | PR.
        
             | dougmwne wrote:
             | PR blitz because it's going to look good to investors.
             | Google is killing the vanity projects and moonshots and
             | rolling those resources into teams that aim to launch
             | products in the next 6-12 months.
        
           | okdood64 wrote:
           | Bingo. This feels like Google is _trying to get serious_
           | about leveraging DeepMind to create better products right now
           | (and generate more revenue) instead of: "Look at this robot
           | play soccer. Cool, huh?"
        
         | DataJunkie wrote:
         | Googler here, opinion my own.
         | 
         | In some sense, it's PR, but not in the typical gimmicky way.
         | Alphabet has had DeepMind for a while, and at this point with
         | all of the competition in AI, it doesn't make sense to keep
         | DeepMind at arm's length. I personally think it's a good move
         | and gives me more confidence, but it doesn't affect me
         | directly. I do worry what redundancies this causes with Brain
         | and Research though.
        
       | xyst wrote:
       | ChatGPT changed the game. Big G getting scared of falling behind.
       | 
       | G bought out DeepMind a long time ago. I wonder what they offered
       | C-level execs this time around.
        
       | rhyme-boss wrote:
       | Accelerating AI development and improving safety are inherently
       | contradictory. It's pretty annoying and disingenuous when someone
       | says "this move will speed us up and make us safer".
        
       | abraxas wrote:
       | I wonder where this puts Geoff Hinton in this new hierarchy. He
       | still works for Google, doesn't he?
        
       | cmarschner wrote:
       | Somebody got a bad performance review
        
       | robbiemitchell wrote:
       | How does this fit in with Bard? I see no mention of Jack Krawczyk
       | here, who is listed as its product lead.
        
         | dormento wrote:
         | I think they'll google-news the name "Bard" due to bad
         | reception caused by unrealistic expectations (and a market
         | primed by vastly superior alternatives).
        
       | w10-1 wrote:
       | Google politics and history aside, it's much better to link
       | research with products for software. Unlike physics and biology,
       | software is basically what we say it is, so there isn't a natural
       | ordering to research (and it can wander forever, all too much
       | like literary criticism).
       | 
       | What both Google research and product missed, and ChatGPT
       | provided almost accidentally, is that people need a way to answer
       | ill-formed questions, and iteratively refine those questions.
       | (The results are hit-or-miss, but far better than traditional
       | search.)
       | 
        | What OpenAI, Bing, and now Google all realize is that the race
       | is not to a bigger model but to capturing the feedback loop of
       | users querying your model so you can learn how to better
       | understand their queries. If Microsoft gets all that traffic,
       | Google never even gets the opportunity to catch up.
       | 
       | If Google were really smart, they would take another step: to
       | break the mold of harvesting free users and instead pay
       | representative users to interact with their stuff, in order to
       | catch up. Just the process of operationalizing the notion of
       | "representative" will vastly improve both product and research,
       | and it would build goodwill in communities everywhere - goodwill
       | they'll need to remain the default.
       | 
       | Progressive queries are just the leading edge of entire worlds of
       | behavior that are yet ill-fitted to computers, but could be
       | accommodated via AI. And if your engineers consider the problem
       | as "fuzzy" search or "prompt engineering" or realism, you need to
       | get people with more empathy, a minimal understanding of
       | phenomenology, and enough experience with multiple cultures and
        | discourses to be able to relate and translate.
        
       | [deleted]
        
       | rollinDyno wrote:
       | When was the last time Google was proactive rather than reactive?
       | It feels that this is the same for all big Tech firms except for
       | Microsoft.
        
         | darth_aardvark wrote:
         | The Metaverse is (was?) definitely proactive.
        
           | [deleted]
        
           | ulfw wrote:
           | What metaverse does Google proactively pursue?
        
             | [deleted]
        
             | pb7 wrote:
             | >It feels that this is the same for all big Tech firms
             | except for Microsoft.
             | 
             | Meta is a big tech firm.
        
         | gojomo wrote:
          | They've been proactive in deleting chat logs that they were
          | ordered by courts to keep for pending litigation.
        
       | Takennickname wrote:
        | Am I the only one who thinks this is Sundar screwing up big
        | time? If there are multiple AI teams and one fails, you blame
        | whoever is leading it. If there's only one team and it fails,
        | there's only Sundar to blame.
        
       | Abecid wrote:
        | Looks like DeepMind will no longer be able to pursue academic
        | research, given the pressure to monetize. A talent exodus
        | could happen, similar to what happened at Google AI, where
        | many prominent researchers either went to OpenAI or started
        | their own companies.
        
         | ShamelessC wrote:
         | Agreed. I'm seeing multiple other comments here suggesting
         | DeepMind was somehow a waste when they have done a lot of very
         | impressive research. "Solving" protein folding. Retrieval
         | transformers. Novel solutions to math problems using ML? What
          | about beating fucking Lee Sedol in Go? No? None of that
         | matters? C'mon.
        
           | perryizgr8 wrote:
           | Arguably none of that made a comparable difference to the
           | life of a common man as ChatGPT did. It's necessary to do
           | fundamental research, but imperative to maintain focus on
           | delivering real world value to real world people.
        
           | dougmwne wrote:
           | Those are towering achievements that don't add to Google as a
           | business. It was an important part of Google's reputation,
           | but now that reputation is in the mud as Microsoft and OpenAI
           | become king in the eyes of the public.
        
         | q7xvh97o2pDhNrh wrote:
         | > started their own companies
         | 
         | Do you know if there's a list of such companies floating
         | around? Really curious to see where the research talent in the
         | space is heading, especially if they're leaving the warm
         | embrace of their BigCo...
        
       | alecco wrote:
        | Musk's recruiters for X.ai must be salivating.
        
       | imranq wrote:
        | I wonder what will happen to Isomorphic Labs, which Demis is
        | also leading.
        
       | macns wrote:
       | .. _Sundar is announcing that DeepMind and the Brain team from
       | Google Research will be joining forces as a single, focused unit
       | called Google DeepMind_
       | 
        | This would be enough as an announcement; the rest of it is
        | just sugar coating.
        
       | owenbrown wrote:
        | I wonder if they will converge on using Trax (Google Brain) or
        | TensorFlow / PyTorch.
        | 
        | I use Trax in my NLP class, so I hope it gets more adoption.
        
         | querez wrote:
         | Both DeepMind and Brain use Jax, so they will definitely use
         | Jax. However, they use different high level frameworks: All of
         | DeepMind uses Haiku, while on the Brain side there are
         | competing frameworks, with flax currently being the most often
         | used one AFAIK. I'm not aware of anyone using trax there, and I
         | would not expect it to get more adoption, on the contrary.
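For readers outside the ecosystem: Haiku and Flax are both thin layers over the same functional JAX core the comment describes. A minimal sketch of that core, assuming only the `jax` package is installed (no Haiku or Flax required):

```python
# Minimal JAX sketch: pure functions transformed by grad/jit --
# the functional style that both Haiku and Flax wrap.
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared error for a toy linear model y ~ w * x.
    pred = w * x
    return jnp.mean((pred - y) ** 2)

# jax.grad returns a new function computing d(loss)/dw;
# jax.jit compiles it with XLA.
grad_fn = jax.jit(jax.grad(loss))

x = jnp.array([1.0, 2.0, 3.0])
y = jnp.array([2.0, 4.0, 6.0])

print(grad_fn(2.0, x, y))  # gradient is 0.0 at the exact fit w = 2
```

Haiku and Flax differ mainly in how they turn stateful-looking module code into pure functions like `loss` above (Haiku via `hk.transform`, Flax via `nn.Module.init`/`apply`), which is why the choice between them is an API preference rather than a capability gap.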
        
       | ur-whale wrote:
        | About fucking time someone cracked the whip and got the money
        | sinkhole that is DeepMind producing something that contributes
        | to the bottom line.
       | 
       | Only took something that can potentially take out Google (GPT4)
       | to make it happen.
        
       | dougmwne wrote:
       | While ultimately I think this is probably a very good
       | organizational change, to have similar teams working on similar
       | projects under the same leadership, it does seem to spell trouble
       | in the short term.
       | 
       | I can read between the lines that Google is done having Deepmind
       | floating out there independently creating foundational research
       | and not products. Sounds like this is a sign that they've
       | internally recognized they are behind and need all their
       | resources pulling in the same directions towards responding to
       | the OpenAI/Microsoft threat.
       | 
       | It also seems to signal that they won't have their answer to Bing
       | in the short term. As they say, nine women can't make a baby in a
       | month and adding people to a late project makes it later.
        
         | version_five wrote:
         | This sounds about right. I think it's acknowledged that
         | OpenAI's strength has been product rather than just pure
         | research - google and facebook both have way more publications
         | and deeper benches, but aren't really commercializing anything.
         | 
         | The shift to commercialization (by companies) was inevitable.
          | It's also a bit sad though. Somebody still has to do the
          | fundamental stuff, and Google (along with Facebook) has
          | been amazing for the ecosystem, especially open source. If
          | everyone goes the OpenAI route, the golden age of AI will
          | be over as we move to the profit extraction phase.
        
           | sangnoir wrote:
           | > google and facebook both have way more publications and
           | deeper benches, but aren't really commercializing anything.
           | 
           | I am absolutely certain that Google and Facebook are
           | productizing their AI research and integrating it with their
           | money-making products and measurably earning more money from
           | the effort. Perhaps what you mean by "commercializing" is
           | packaging AI in direct-to-consumer APIs? IMO, that market is
            | not currently large enough to be worth the effort, but it
            | is almost certain that GCloud will continue to expand ML
            | support.
        
           | dougmwne wrote:
           | I see your point, though I think it's ultimately going to be
           | good for AI progress. So far the research has been mostly a
           | vanity project for these companies. Who knew if there was
           | really any gold at the end of those rainbows. Eventually the
           | appetite for participating in the research paper olympics was
           | going to run out, probably right at the same time that
           | monetary policy stayed tight for too long.
           | 
           | The possibility of building a trillion dollar company on this
           | tech means a whole lot more investment, more people entering
           | the field. More people excited to tinker in their spare time
           | and more practical knowledge gained. More GPUs in more data
           | centers. Eventually things will loop back around to pure
           | research with that many more resources applied.
           | 
           | It sure beats an AI winter, which probably would have been
           | the alternative had LLMs not taken off.
        
           | [deleted]
        
       | Etheryte wrote:
       | So statistically [0], expect this product to be shut down in
       | 2027?
       | 
       | [0] https://gcemetery.co/google-product-lifespan/
        
         | QuercusMax wrote:
         | It's not a product, it's a research organization.
        
           | DonHopkins wrote:
           | Department of Research Simulation
        
       | chevy90 wrote:
        | Can't believe it when I heard this news of Google lagging in
        | AI development despite being the front runner in tech for so
        | long, and with all that talent under the hood.
        | 
        | How times change. Or is it true that nothing good lasts long?
        
       | omot wrote:
       | From this blog post, I could already feel the bureaucratic nature
       | of their org. My money's still on OpenAI. I think their
       | motivation is more pure, their objectives more focused, and their
       | org more simple. I usually think of product dominance in two
       | vectors: first to market and benchmarks.
       | 
       | Google took over the world as something like the 11th search
       | engine to hit the market, but some of their benchmarks were 10x
       | better.
       | 
       | OpenAI has both going for them right now and I don't think that's
       | going to change.
        
       | seydor wrote:
       | Google Mind or Deep Brain?
        
       | rasengan wrote:
       | Google, did you really just copy OpenAI's website layout [1]?
       | 
       | Google isn't the leader anymore.
       | 
       | -_____-
       | 
       | [1] https://openai.com/blog/chatgpt
        
         | okdood64 wrote:
         | Seems like a pretty cookie cutter start-up style website in
         | 2023...
        
       | frozenlettuce wrote:
        | Does anyone else feel some sort of "corporate-speak-blindness"
       | when reading these statements from Google? They are just
       | informing that some orgs are being rearranged, but for some
       | reason they had to make the text have super low information
       | density.
        
       | m3kw9 wrote:
       | Google still researching lol
        
       | fancyfredbot wrote:
       | It's an embarrassment to Google to have two independent AI
       | research teams. It looks like a failure of management and
       | oversight. I'm very surprised it took this long for them to be
       | merged.
        
       | uptownfunk wrote:
       | End of the day, the best product innovation has come from hungry
       | passionate and capable founders with a solid mix of science,
       | engineering and product.
       | 
        | As we are now seeing before our eyes, Google has aged. Cushy
        | big tech culture no longer creates an environment that yields
        | innovation.
       | 
        | The MSFT move was probably brilliant mostly for this reason.
        | They saw the writing on the wall. ChatGPT would never have
        | been invented at any big tech co.
       | 
        | Goog's investment in Anthropic is just taking MSFT's sloppy
        | seconds, a kind of copycat play. Who knows, maybe Anthropic
        | will make a happy mistake and create something surprising.
       | 
       | You are likely reading the result of a lot of corporate reorg
       | that was a big political battle and the victors are now patting
       | themselves on the back.
       | 
        | That said, a reorg can be good to refocus the company, but
        | when you're bleeding out massively while the infection
        | spreads, putting on a little bandaid is no reason to
        | celebrate.
       | 
        | Anyway, I wish them the best of luck. As a kid it was always
        | one of those companies we all dreamed of working for. Now it
        | is like an aged grandparent who needs a cane to walk and
        | encouragement when they are able to walk by themselves.
        
       | local_crmdgeon wrote:
       | This is what panic looks like.
        
         | ulfw wrote:
         | Same as the Google Plus days under wonderful Vic Gundotra.
        
       | next_xibalba wrote:
       | Competition is such a beautiful catalyst.
        
       | sva_ wrote:
       | I'm worried that OpenAI has started a trend of these AI companies
       | being a lot more secretive about their research in the future. I
       | mean basically OpenAI took Deepmind's/Google's public research on
       | transformers and ran with it, not publishing back the results of
       | improving it.
       | 
       | This probably sent a bad message with consequences for the whole
       | public research field.
        
         | layer8 wrote:
         | Isn't this a given as soon as there's serious money in it?
        
           | nomel wrote:
           | Related, are there any examples of an "open" field of
           | business, with public disclosure of the secret sauce that
           | gives competitive advantage?
        
         | neel8986 wrote:
          | I agree with this. The last decade was a golden age of AI
          | where all major players, including Google Brain, DeepMind,
          | FAIR, and Microsoft Research, contributed a lot. To be
          | honest, OpenAI had the least intellectual contribution of
          | them all, except for a few pieces of marketing material
          | masquerading as papers. From now on we can expect all labs
          | to be secretive and not publish anything. This is really bad
          | considering all these models are black boxes and research is
          | needed to understand them better. Hope the government steps
          | in and forces these labs to explain the details of each
          | model.
        
           | tempusalaria wrote:
           | FAIR will continue to publish. Nvidia and Uber also. Then you
           | have open source oriented labs who should continue
            | publishing. Google is the big one; they have basically
            | made more research contributions than all the other labs
            | combined.
        
             | q7xvh97o2pDhNrh wrote:
             | I'm not _that_ plugged-in to the AI world (though doing my
             | best to catch up)... but is Uber really viewed as a
             | powerhouse on the same level as FAIR?
        
               | singhrac wrote:
               | No, not really. It was never really as large, and most of
               | their output was in probabilistic programming (e.g. Pyro)
               | and work that was relevant to self-driving cars (point
               | cloud compression, etc.). But they shut down Uber AI in
               | the layoffs last year.
        
         | generalizations wrote:
         | That trend started at about the moment when Llama was leaked.
         | We didn't really take the good-faith limited access in good
         | faith ourselves, and as a result lost trust.
        
           | sva_ wrote:
           | I disagree. Meta released a research paper and the model (the
           | latter only to researchers.) OpenAI won't even release an
            | actual paper detailing the specifics of their research.
            | That's a much lower bar, and I highly doubt those two
            | incidents are really related.
        
         | mcast wrote:
         | You mean Microsoft, right? OpenAI was publishing research
         | papers until the MSFT partnership.
        
           | return_to_monke wrote:
           | research papers yes, models no.
        
       | neximo64 wrote:
       | This demonstrates to shareholders of Alphabet that Sundar is
       | actually not a good CEO. The focus is not on the product or
       | quality but organising resources. The resources are already the
       | best at Google but led by a moron.
        
       | whywhywhydude wrote:
       | Exciting time for AI. A little competition from OpenAI is finally
       | forcing google AI researchers to actually focus on real world
       | applications instead of just publishing papers and patting
       | themselves on the back.
        
         | galaxytachyon wrote:
          | What do you mean? The attention-based transformer
          | architecture was created by Google. AlphaFold took the
          | biotech world by storm. TensorFlow is a significant platform
          | for AI developers. Chinchilla pioneered a new method for
          | scaling LLMs.
         | 
          | "Just publishing papers" is such an ignorant and dismissive
         | attitude to one of the most significant contributors to AI
         | development in the world. Without Google research and
         | publication, OpenAI would not have the foundation to build its
         | GPT to the current level.
        
           | Workaccount2 wrote:
           | >Without Google research and publication, OpenAI would not
           | have the foundation to build its GPT to the current level.
           | 
           | Right, and shareholders are asking Sundar "Why is OpenAI
           | launching our product and taking our ( _massive_ ) commercial
           | success?"
           | 
           | Honestly I think Sundar should be let go over this, he should
           | have been let go years ago, but now I definitely don't see
           | what leadership sees in him. The dude is a better fit for
           | running General Mills than a tech company. No innovation,
           | just sell the same thing over and over.
        
           | mrbungie wrote:
           | The whole point is that all the progress you mention is worth
           | peanuts to Google's shareholders. Hence this decision and
           | blog post about it.
           | 
          | If anything, it allowed competitors to rise above Google.
           | 
           | PS: Not saying Deepmind's research is not worthy, nor that
           | this is fair. Just that it appears that Alphabet/Google (and
           | by extension Deepmind) is being reminded that its main goal
           | is making money.
        
             | pb7 wrote:
             | Ah, classic damned if you do, damned if you don't.
             | 
             | Search is being ruined by the pursuit of maximizing ad
             | revenue but AI research is being wasted because it's not
             | used in pursuit of maximizing revenue. Can't really win,
             | huh? There should be nothing but gratitude that Google uses
             | its ad revenue to pay for research that greatly benefits
             | everyone.
        
               | mrbungie wrote:
               | I'm not judging. I'm grateful to Alphabet/Google, and
               | its research has been extremely useful in AI/ML; I'm
               | just saying shareholders may not see it that way.
        
       | turnsout wrote:
       | Another Google AI announcement with no product in sight
        
       | w_for_wumbo wrote:
       | While AI ethicists and safety researchers are urging for a pause
       | to understand the implications of what we have already built,
       | Google is announcing they will invest more in the acceleration of
       | Artificial Intelligence.
        
         | ReptileMan wrote:
         | >AI ethicists and safety researchers
         | 
         | Aka grifters. Those are the new DEI consultants.
        
       | theGnuMe wrote:
       | My take:
       | 
       | 1. All fundamental AI research now falls under Demis. So
       | basically what was Brain is now Deep Brain.
       | 
       | 2. Jeff will lead the product build-out of a multi-modal AI
       | (LLM).
       | 
       | 3. Google research under James will continue with everything
       | else not directly AI related.
        
       | Imnimo wrote:
       | It wasn't very long ago that we were reading articles saying that
       | Deepmind wanted more independence from Google
       | (https://www.wsj.com/articles/google-unit-deepmind-
       | triedand-f...).
       | 
       | Feels a bit like China absorbing Hong Kong.
        
       | galaxyquanta wrote:
       | What does this mean for JAX (lightweight ML library from Google
       | Brain) vs Tensorflow (from Deepmind)?
        
         | tarvaina wrote:
         | Tensorflow is also from Google Brain, not DeepMind.
        
         | johnmoberg wrote:
         | DeepMind already seem to be using JAX quite extensively:
         | https://www.deepmind.com/blog/using-jax-to-accelerate-our-re...
        
       | dahwolf wrote:
       | "Combining our talents and efforts will accelerate our progress
       | towards a world in which AI helps solve the biggest challenges
       | facing humanity"
       | 
       | ...which is that we're not looking at enough ads.
        
       | divyekapoor wrote:
       | "Sundar is announcing"... not "we are announcing"... speaks
       | volumes as to the fact that this was a unilateral decision.
        
         | gsatic wrote:
         | Obviously. Who wants to work on adtech? Only ppl without a
         | choice or a clue.
        
         | dougmwne wrote:
         | Hah! Good catch. I doubt that was unintentional. Demis doesn't
         | sound too pleased about having his merry band of misfits sucked
         | into the mothership.
        
       ___________________________________________________________________
       (page generated 2023-04-20 23:00 UTC)