[HN Gopher] Why did Google Brain exist?
       ___________________________________________________________________
        
       Why did Google Brain exist?
        
       Author : brilee
       Score  : 312 points
       Date   : 2023-04-26 16:22 UTC (6 hours ago)
        
 (HTM) web link (www.moderndescartes.com)
 (TXT) w3m dump (www.moderndescartes.com)
        
       | khazhoux wrote:
       | > I sat on it because I wasn't sure of the optics of posting such
       | an essay while employed by Google Brain. But then Google made my
       | decision easier by laying me off in January. My severance check
       | cleared...
       | 
       | I'm really baffled by how people think it's OK to write public
        | accounts of their previous ( _and sometimes current!_ ) employers'
       | inner workings. This guy got paid a shitload of money to do work
       | _and to keep all internal details private, even after he leaves_.
       | They could not be more clear about this when you join the
       | company.
       | 
       | Why do people think it's OK to share like this? This isn't a
       | whistleblowing situation -- he's just going for internet brownie
       | points. It's just an attempt to squeeze a bit more personal
       | benefit out of your (now-ended) employment.
       | 
       | Contractual/legal issues aside, I think this kind of post shows a
       | lack of personal integrity (because he _did_ sign a paper
       | agreeing not to disclose info), and even a betrayal of former
       | teammates who now have to deal with the fallout.
        
         | mrbabbage wrote:
         | The primary purpose of an NDA is to allow the company to
         | enforce trade secrets: the existence of the NDA is proof that
         | the company took steps to maintain the secrets' secrecy.
         | Nothing in this blog post looks like a trade secret to me;
         | rather, it's one person's fairly high-level reflections on the
          | work environment at a particularly high-profile lab.
         | 
         | While he technically may have violated the NDA, it's really
         | hard for me to see any damage or fallout from this post. It's
         | gentle, disparages only at the highest levels of abstraction,
         | doesn't name names, etc. I don't think it makes sense to view
          | it in a moralistic or personal-integrity light. Breach of
          | contract is not a moral wrong, merely a civil one that allows
         | the counterparty (Google) to get damages if they want.
        
         | bubblethink wrote:
         | > a betrayal of former teammates who now have to deal with the
         | fallout.
         | 
          | What fallout is this? Did you sign a contract with him? If
          | you are harmed by it, why don't you seek legal recourse? Your
          | entire rant started with some NDA stuff, and in the end you
          | say, "legal issues aside". This is like the "Having said that"
          | move from Curb. You start with something, then contradict
          | yourself completely with "Having said that". If you have a
          | contractual grievance, pursue it. If not, you are grieving on
          | the internet, just like he is.
        
         | brilee wrote:
         | I've been quite careful not to divulge anything confidential,
         | and anything that is remotely close to sensitive has publicly
         | accessible citations. My opinions about Google are tantamount
         | to discussions about workplace conditions, and it would be very
         | bad for society if ex-employees were not allowed to discuss
         | those.
        
           | khazhoux wrote:
           | But all your opinions are informed by your years of TGIFs and
           | internal emails and discussions and presentations and your
           | insider perspective. When you talk about promotions, or
           | internal values and prioritizations, you are leveraging info
           | gained privately.
           | 
           | If I'm wrong and nothing in your contract or on-boarding said
           | you shouldn't talk about internals, then my bad. But I
           | suspect they were as clear with you as they were with me,
           | that it's not ok to post anything based on inside info. And
           | in your opening paragraph you say:
           | 
           | > As somebody with a _unique perspective_
           | 
           | Your unique perspective was your access as an employee.
           | 
           | > and the unique freedom to share it
           | 
           | Your unique freedom is that you're done receiving money from
           | them. But contractually, this doesn't matter.
        
             | packetslave wrote:
             | Why do you care? Unless you work for Google Legal, you're
             | in no position to scold OP about _anything_
        
               | khazhoux wrote:
               | I care because I value these Confidentiality commitments,
               | and I believe that if someone doesn't like them, they
               | should not sign them to begin with, rather than breaking
               | them. A company (like any group of people) is allowed to
               | define the culture and standards required for membership.
               | 
               | I've worked in extremely secretive companies, and very
               | open ones. I prefer the open ones. But I still don't say
               | anything about internals at the secretive ones -- because
               | that was part of the commitment I made in exchange for
               | employment.
        
               | anonymouskimmer wrote:
                | From my perspective, in the boldest words I can think
               | to phrase this: You're betraying the citizens and lawful
               | residents of your country by not informing them of what
               | to expect should they accept a job at one of these
               | companies. Have fun with your 30 pieces of silver
                | (https://en.wikipedia.org/wiki/Thirty_pieces_of_silver).
               | 
               | > and I believe that if someone doesn't like them, they
               | should not sign them to begin with, rather than breaking
               | them.
               | 
               | Do you, at least, believe that confidentiality agreements
               | should be broken if it is to make the police or public
               | aware of a crime? How about a civil infraction, such as a
               | hostile working environment?
        
               | khazhoux wrote:
               | > Do you, at least, believe that confidentiality
               | agreements should be broken if it is to make the police
               | or public aware of a crime? How about a civil infraction,
               | such as a hostile working environment?
               | 
               | Absolutely. I referenced whistleblowing in my original
                | post above. This isn't such a case.
        
               | anonymouskimmer wrote:
               | Does this apply to people who infiltrate what they
               | believe are corrupt companies with the expectation of
               | digging up dirt, and thus sign the confidentiality
               | agreements with the expectation of violating them?
               | 
               | Does this apply to moral wrongs, which are technically
               | legal (e.g. cruel conditions at animal farms)?
        
           | burningion wrote:
           | It's obvious you care about the people you worked with, and
           | the potential for what you were building. From my perspective
           | you wrote this for the people who couldn't.
           | 
           | Ignore this person.
        
         | crazygringo wrote:
          | How do you know he was paid to keep all internal details
          | private after he leaves? Do you have knowledge of the
          | employment contract? Can you share the relevant language with
          | us?
         | 
          | All he says is his concern about "optics", which has nothing to
          | do with the contract.
         | 
         | If Google has a problem with his post, they can go after him,
         | but that's an issue between Google and him, not with you or me
         | or the rest of the internet.
         | 
         | I'm definitely struggling to see what any of this has to do
         | with personal integrity, betrayal, or squeezing personal
         | benefit. To the contrary, it simply seems informative and he's
         | sharing knowledge just to be helpful. Unless I've missed
         | something, I don't see anything revealed that would harm Google
         | or his former teammates here. No leaks of what's coming down
         | the pipeline, no scandals, nothing of the sort.
         | 
         | People are allowed to share opinions of their previous
         | employment and generally describe the broad outlines of their
         | work and where they worked. This isn't a situation of working
         | for the CIA with top-secret clearance.
        
           | khazhoux wrote:
           | > Do you have knowledge of the employment contract, can you
           | share the relevant language with us?
           | 
           | I actually could dig up my own contract from years ago (ugh,
            | the effort though), but the Confidentiality clause is in there,
           | and it was made clear during on-boarding what is expected
           | from employees: don't share any internal info unless you're
           | an authorized company representative.
        
             | anonymouskimmer wrote:
              | General working conditions are not "internal info". It is
              | beneficial to society to discuss working conditions (which
              | can be pretty detailed), and doing so is thus a protected
              | activity under various laws. Nothing in any contract can
              | obviate this lawful right (and, to some, a basic duty of
              | citizenship, since the country is more important than any
              | single company within the country). At best, contracts can
              | highlight what _is_ privileged information that there is a
              | duty to keep secret.
        
               | iamdamian wrote:
               | Now you have me curious. Which laws, specifically?
        
               | anonymouskimmer wrote:
                | The big one in the USA is the National Labor Relations
                | Act, but this generally applies to group action. Such
                | group action can, however, be as simple as publicly
                | posting about a job's working conditions.
                | 
                | Here's a list of recent state laws in Washington, Oregon,
                | and Maine, which prohibit certain kinds of NDAs (oriented
                | toward unlawful activity, not general speech rights):
                | https://www.foley.com/en/insights/publications/2022/07/sever...
               | 
                | Contracts are required to be 'reasonable' for both
                | parties. This puts limits on what a contract can
                | constrain. I don't know how much this reasonableness
                | standard is statutory versus judicial.
                | 
                | For public-sector employees there's some protection under
                | the First Amendment.
                | https://www.workplacefairness.org/retaliation-public-employe...
                | 
                | And in Connecticut this free-speech protection extends to
                | private-sector employees too:
                | https://law.justia.com/codes/connecticut/2005/title31/sec31-...
                | 
                | https://www.natlawreview.com/article/free-speech-and-express...
                | 
                | https://www.pullcom.com/working-together/there-are-limits-to...
                | 
                | > there is no statutory protection if the employee was
                | only complaining about personal matters, such as the
                | terms or conditions of employment. The employee has to
                | show that he was commenting on a matter of public
                | concern, rather than attempting to resolve a private
                | dispute with the employer.
        
               | vore wrote:
               | Broadly, section 7 of the NLRA, but it is not spelled out
               | in the text of the act. Instead, Quicken Loans, Inc. v.
               | NLRB established the precedent that discussing working
                | conditions is not a violation of confidentiality:
                | https://www.lexisnexis.com/community/casebrief/p/casebrief-q...
        
             | colonwqbang wrote:
             | So only people who never worked at Google are allowed to
              | criticise Google?
        
             | confounded wrote:
              | You should write to the author, to scold them directly!
        
               | [deleted]
        
           | refulgentis wrote:
            | This is a hopelessly naive misunderstanding framed as obvious
            | disloyalty; I want to buy calls on your career in middle
            | management.
        
             | seizethecheese wrote:
             | You mean puts?
        
               | refulgentis wrote:
               | nah calls, very in line with what _sounds_ good even when
               | naive and misinformed
        
         | q845712 wrote:
         | I read the article and thought he did a fine job of not
         | spilling too many secrets - I'm curious what you thought he
         | said that crossed the line?
         | 
         | I'm not personally aware of signing something that says "I'll
         | keep all internal details private" though I agree I'd be highly
         | unlikely to refer to anyone below the SVP level by name -- but
         | I think that's exactly what OP did?
        
           | khazhoux wrote:
           | True, this wasn't the most egregious. But the principle still
           | applies. He said himself he held this back until his final
           | check cleared.
        
             | rurp wrote:
              | A large company acting out of spite towards a former
              | employee has happened more than a couple of times.
              | Minimizing that risk seems entirely reasonable.
             | 
             | As others have said, I really don't see anything that's
             | especially private in the article. The author wrote in
             | pretty general terms.
        
         | ynx wrote:
         | Because a promise that imprisons knowledge indefinitely has no
         | integrity. To me, it's clear that it is absolutely in the
         | public interest to - perhaps not go out of one's way to spread,
         | but at least feel free to - explain the conditions and inner
         | workings of large and impactful organizations.
         | 
          | We're all thinkers who are asked to apply our minds in exchange
          | for money, not slaves whose brains are leased to or owned by
          | our employers for the duration of our tenure.
         | 
         | Even when asked to keep things secret, there's still no way for
         | a company to own every last vestige of knowledge or
         | understanding retained in our minds, and there's _still_ an
         | overwhelming public interest in building on and preserving
         | knowledge, to the point that, in my opinion, nearly any piece
         | of human knowledge short of trade secrets should _eventually_
          | be owned by humanity as a whole. (And there are even moral
          | arguments to be made about some trade secrets, but that's a
          | much deeper discussion.)
         | 
         | I personally find that people who are overly concerned with
          | secrecy view their integrity through the lens of their
          | employer, but not at a human-interest level. To be clear,
          | there are still times for secrecy, or for the security or
          | integrity of information to be respected, but it's nuanced
          | and generally narrower than people expect.
        
         | protastus wrote:
         | I've never worked at Google Brain, but I've been a research
         | manager in tech for a decade, and nothing here seems surprising
          | to me. It discusses the archetype of the well-funded industry
          | lab that is too academic and ultimately winds down.
         | 
         | The post makes sensible but generic statements based on the
         | view of an IC. It tries to work back from the conclusion
         | (Google struggles to move academic research into product) and
         | produces plausible but hardly definitive explanations, with no
         | ranking, primarily because there's no discussion of the
         | thinking, actions and promises made at the executive level that
         | kept the lab funded for all these years.
        
           | brilee wrote:
           | You're right that I don't know enough as an IC to comment on
           | the thinking, actions, and promises at the exec level that
           | kept Brain funded. But even if I did, to the GP's point, I
           | would not talk about that part at all.
           | 
           | I'm curious about your perspective as a research manager in
           | tech - would you be willing to chat privately?
        
             | anonymouskimmer wrote:
             | > But even if I did, to the GP's point, I would not talk
             | about that part at all.
             | 
             | Even if you were one of said executives, who was moving to
             | a higher-level job at another company, and the company was
             | interviewing you as to the reasoning behind decisions made
             | in your previous job?
        
         | TaylorAlexander wrote:
         | What did he sign? And what are the relevant laws?
         | 
         | You have to be careful thinking you owe your employer
         | everything they would wish to have. Disclosing the inner
         | workings is extremely helpful to people trying to figure out
         | where to work, and how their current employer compares to
         | others. I got a job at Google X Robotics in 2017 in large part
         | because the place was so secret and I always wanted to know
         | what happened there. It was quite an interesting experience,
         | but I do wonder how I would have felt if someone like me
         | working there had written something like this before I made the
         | decision.
        
         | burtonator wrote:
         | That's not really what's happening here I think.
         | 
         | I think he's just not sure what's ok or not ok to write about.
         | Nothing he wrote here was problematic. No secrets and just sort
         | of mild criticism.
        
         | jrochkind1 wrote:
         | > he's just going for internet brownie points. It's just an
         | attempt to squeeze a bit more personal benefit out of your
         | (now-ended) employment.
         | 
          | So, many researcher types (and not only them, but also
          | including many of us) are motivated to -- find it personally
          | rewarding at a psychological level to -- share their thoughts
          | in dialog with a community. They just find this to be an
         | enjoyable thing to do that makes them feel like they are a
         | valuable member of society contributing to a general
         | collaborative practice of knowledge-creation.
         | 
         | I hope it doesn't feel like I'm explaining the obvious; but it
         | occurs to me that to ask the question the way you did, this is
         | probably _not_ a motivation you have, not something you find
         | personally rewarding. Which is fine, we all are driven by
         | different things.
         | 
         | But I don't think it's quite the same thing as "internet
         | brownie points" -- while if you are especially good at it, you
         | will gain respect and admiration, which you will probably
          | appreciate -- you aren't thinking "if I share my insight
         | gained working at Google, then maybe more people will think I'm
         | cool," you're just thinking that it's a natural urge to share
         | your insight and get feedback on it, because that itself is
         | something enjoyable and gives you a sense of purpose.
         | 
          | Which is to say, I don't think it's exactly a motivation for
          | "personal benefit" either, except in the sense that doing
          | things you enjoy and find rewarding is a "personal benefit",
          | that having a sense of purpose is a "personal benefit", sure.
         | 
         | I'm aware that not everyone works this way. I'm aware that some
         | people on HN seem to be motivated primarily by maximizing
         | income, for instance. That, or some other orientation, may lead
         | to thinking that one should never share anything at all
         | publicly about one's job, because it can only hurt and never
         | help whatever one's goals are (maximizing income or what have
         | you).
         | 
         | (Although... here you are commenting on HN; why? For internet
         | brownie points?)
         | 
         | But that is not generally how academic/researcher types are
         | oriented.
         | 
         | I think it's a sad thing if it becomes commonplace to think
         | that there's something _wrong_ with people who find purpose and
         | meaning in sharing their insights in dialog with a community.
        
         | choppaface wrote:
         | I'm really baffled by how many people think a job is more than
         | a job and there's some ownership over the employee's critical
         | thinking capabilities during the job and after it ends.
         | 
          | While I agree the OP is "going for internet brownie points" (or
          | is probably a bit butthurt from being laid off from one of the
          | top-5 cushiest jobs in the United States), the article doesn't
          | include anything even remotely trade-secret and is
          | predominantly opinion. It's really totally fine to blog about
         | how you feel about your employer. There are certainly risks,
         | but a company has to pay extra if they actually don't want you
         | to blog at all (or they have to be extremely litigious).
         | 
          | There's a strong precedent of employer-employee loyalty that
          | was substantially set by pre-internet information disparities
          | between the two. In the past year or so, there have
         | been some pretty unprecedented layoffs (e.g. Google lays off
         | thousands and then does billions of stock buybacks ...)... The
         | employer-employee relationship needs to evolve.
        
           | khazhoux wrote:
           | > a company has to pay extra
           | 
           | Part of my problem is that Google (and similar companies) are
           | _already_ paying insane (in the best way possible) amounts of
           | money, and when you sign the agreement to take said money,
            | you explicitly promise you won't talk about company
            | internals.
            | 
            | To me this is quite simple: if you accept what winds up being
            | millions of dollars in cash+equity, and you give your word
            | that you'll keep your mouth shut as one of the conditions for
            | that pile of money... then you keep your mouth shut.
        
             | choppaface wrote:
             | You keep your mouth shut about _material property_ of
              | Google. E.g. don't post code, and probably don't give
             | details about code that hasn't been made public. Sure, this
             | area is not clear and can also depend on one's internal
             | position in the company, but it's important to separate
             | moral hazard from legal hazard.
             | 
             | As far as one's _opinions_ go, and in particular how the
              | company made you _feel_, that's not paid for. A severance
             | agreement might outline some things, but again that's legal
             | hazard and not moral hazard. There are certainly some execs
             | and managers who will only want to work with really, really
             | loyal people, who throw in a lot more for the money. And
             | some execs will pay a lot more for that... e.g. look at how
             | much Tesla spends on employee litigation.
        
             | anonymouskimmer wrote:
             | Should you also not use any skills you gained at your
             | previous employer at your new employer? Not to mention any
             | techniques you learned about that may help your current
             | employer? Would doing so be "talking about company
             | internals"?
             | 
             | So how do you ever get a better job than entry level, if
             | you aren't willing to use the knowledge you gained at prior
             | jobs in new jobs?
        
         | epolanski wrote:
          | I would never trust or hire people that air their dirty
          | laundry publicly like this.
        
         | gscott wrote:
         | The check cleared!
        
         | ShamelessC wrote:
         | That's a very optimistic take on how new generations perceive
         | the ethics surrounding confidentiality. Are you really
         | _baffled_ by this? I understand that it's a common position,
         | but it's a position that is so clearly tainted by the conflicts
         | of interest between employer and employee. And a keen awareness
          | of those conflicting interests _only_ sharpens an employee's
          | ability to look after their own interests in a capitalist
          | economy.
         | 
         | I'm not saying you are wrong per se. But if you don't see why
         | employees are willing to act in this way, you don't see how
          | employees feel about being trapped in a system where no matter
          | how much you are paid, you are ultimately making someone else
          | more money.
        
           | khazhoux wrote:
           | I totally get why people act this way. Because it's Very
           | Important That Everyone Knows What I Think.
           | 
           | But there is no inequity or conflict of interest here. None
           | of that about being trapped in a capitalist economy is to the
           | point here. He has probably a couple million dollars in his
           | bank account that wasn't there before, and the deal was to
           | not talk publicly about internals (which includes promotions
           | process, internal motivations and decision-making, etc).
        
             | packetslave wrote:
             | Yet for some reason OP and everyone here are supposed to
             | care what YOU think?
        
               | khazhoux wrote:
               | I'm not suggesting people shouldn't post thoughts and
               | opinions. This is about whether a personal desire to
                | self-express should override their explicit prior
               | commitment/promise not to do so.
        
             | pessimizer wrote:
             | You don't think there's any inequity between somebody with
              | a couple million in the bank and Google, or any conflict of
             | interest between my desire to talk about my work and my
             | employer's desire that I do not? Your position is valid
             | enough without being willfully obtuse.
        
               | khazhoux wrote:
               | Once you agree to not talk about it (as part of the
               | employment terms), then there is zero conflict of
               | interest.
               | 
               | "Yes, I agree to not share internal info, in exchange for
               | this money. And by the way, I _will_ still share internal
                | info, because inequity."
        
               | bbor wrote:
               | You're thinking about employment contracts like an
               | abstract economic exchange between two free parties,
               | which is very micro. Try thinking about it instead like a
               | bargain with the (macro) devil.
               | 
               | In other words, consider someone's perspective who has
               | society split into two camps: the people who do all the
               | work, and the corrupt elite that make their living
               | through theft and oppression. In such a world, signing a
               | contract with an employer (i.e. capitalist i.e. elite) is
               | more of a practical step than a sacrosanct bond. There's
               | a level of "what's reasonable" and "what legally
               | enforceable" beyond the basic "never break a promise"
               | level you're working at, IMO.
               | 
                | No one's endorsing publishing trade secrets randomly, but
               | you're treating all disclosures like they're equivalent.
        
         | jll29 wrote:
         | I checked with some Google friends who told me their contract
         | even makes it illegal to tell anyone that they work for Google
         | (no joke).
         | 
         | One side is what's in the papers you signed and the other side
         | is to what extent the terms can be enforced. But you have a
         | point in that it would be good professional practice to wait
          | for a decade before disclosing internals, especially when names
         | of people are dropped...
        
           | [deleted]
        
       | [deleted]
        
       | simonster wrote:
       | I work for Google Brain. I remember meeting Brian at a conference
       | and I have nothing but good things to say about him. That said, I
       | think Brian is underestimating the extent to which the
       | Brain/DeepMind merger is happening because it's what researchers
       | want. Many of us have a strong sense that the future of ML
       | involves models built by large teams in industry environments. My
       | impression is that the goal of the merger is to create a better,
       | more coordinated environment for that kind of research.
        
         | gowld wrote:
         | The goal of the merger is for execs to look like they are doing
         | something to drive progress. Actual progress comes from the
         | researchers and developers.
        
           | anonylizard wrote:
           | Well, where exactly is this progress? Where is Google's
           | answer to GPT-4? Why weren't the 'researchers and developers'
           | making a GPT-4 equivalent?
           | 
            | Turns out you sometimes need a top-down, centralised vision
            | to execute on projects. When the goal is undefined, you can
            | allow researchers to run free and explore; now it's full-on
            | wartime, with clear goals (make GPT-5, 6, 7...).
        
             | oofbey wrote:
             | Google is fundamentally allergic to top-down management.
              | Most googlers will reject any attempt to be told what to do
              | as wrong, because lots of ICs voting with their feet are
              | smarter than any (Google) exec at figuring out what to do.
              | 
              | The last time Google got spooked by a competitor was
              | Facebook, and they built Google Plus in response. We all
              | know that was an utter failure. Googlers could escape that
              | one with their egos intact because winning in "social" is
              | just some UX junk, not hard-core engineering like ML.
             | 
             | It's gonna be super hard for them to come to grips with the
             | fact that they are way behind on something that they should
             | be good at. Plan for lots of cognitive dissonance ahead.
        
       | choppaface wrote:
       | > Neither side "won" this merger. I think both Brain and DeepMind
       | lose. I expect to see many project cancellations, project
       | mergers, and reallocations of headcount over the next few months,
       | as well as attrition.
       | 
        | This merger will be a big test for Sundar, who openly admitted
        | years ago that there were major trust issues [1]. Can
       | Sundar maintain the perspective of being the alpha company while
       | bleeding a ton of talent that doesn't actively contribute to tech
       | dominance? Or will he piss off the wrong people internally? It's
       | OK to have a Google Plus / Stadia failure if the team really
       | wanted to do the project. If the team does _not_ want to work
       | together though, and they fail, then Sundar's request that the
       | orgs work together to save the company is going to get totally
       | ignored in the finger-pointing.
       | 
        | [1] https://www.axios.com/2019/10/26/google-trust-employee-immig...
        
         | ergocoder wrote:
         | The merger will fail.
         | 
          | If 5000 people are not enough to get things done, 10000 people
          | are unlikely to change that.
        
       | vl wrote:
       | >PyTorch/Nvidia GPUs easily overtaking TensorFlow/Google TPUs.
       | 
       | TF lost to PyTorch, and this is Google's fault - TF APIs are both
       | insane and badly documented.
       | 
        | But nothing comes close to the performance of Google's TPU
        | exaflop mega-clusters. Nvidia is not even in the same ballpark.
        
         | gillesjacobs wrote:
         | I have used both but ended up dropping TF for PyTorch after
         | 2018. Mainly it was the larger PyTorch ecosystem in my field
         | (NLP) and clear API design and documentation that did it for
         | me.
         | 
          | However, TF was still a valid contender, and it was not
          | clear-cut back in 2016-17 which framework was better.
        
         | jdlyga wrote:
         | I can speak from experience on this. Getting started with
         | TensorFlow was very complicated with sparse documentation, so
         | we dropped the idea of using it.
        
           | vl wrote:
            | I had to use TF when I worked at G; when I left I immediately
            | started to use PyTorch and never looked back.
        
             | aix1 wrote:
             | Even internally at Google/DeepMind, all the cool kids have
             | long moved to JAX.
        
               | dekhn wrote:
               | Once the ads model runs on Jax instead of TF, it's
               | curtains for TF.
        
         | joseph_grobbles wrote:
         | [dead]
        
         | belval wrote:
          | There is a first-mover handicap there though. TF1.0 included a
          | bunch of things that were harder to understand, like
          | tf.Session(). PyTorch was inspired by the good parts and went
          | "we will eager-everything". Internally I'm sure there was a lot
          | of debate in the TF team that culminated in TF2.0, but by that
          | time the damage was done and people saw PyTorch as easier.
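          | 
          | For anyone who never touched TF1, a minimal from-memory sketch
          | of the session-vs-eager contrast (toy shapes; the exact 1.x
          | incantations varied a bit by release):
          | 
          |     import tensorflow as tf  # TF 1.x: build a graph, then run it
          | 
          |     x = tf.placeholder(tf.float32, shape=[None])
          |     y = x * 2.0              # adds a node; computes nothing yet
          |     with tf.Session() as sess:
          |         print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]
          | 
          |     import torch             # PyTorch: eager by default
          | 
          |     x = torch.tensor([1.0, 2.0])
          |     print(x * 2.0)           # runs immediately: tensor([2., 4.])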
        
           | bitL wrote:
           | I think the main problem was debugging tensors on the fly,
            | impossible with TF/Keras but completely natural in PyTorch.
            | Most researchers needed to sequentially observe what is going
            | on in tensors (histograms etc.), and even to do backprop by
            | hand for their newly constructed layers, and that was
            | difficult with TF.
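            | 
            | A minimal sketch of that workflow in PyTorch (the module and
            | shapes here are made up for illustration):
            | 
            |     import torch
            |     import torch.nn as nn
            | 
            |     class Net(nn.Module):
            |         def __init__(self):
            |             super().__init__()
            |             self.fc = nn.Linear(4, 2)
            | 
            |         def forward(self, x):
            |             h = self.fc(x)
            |             # Eager mode: print, plot, or pdb into any
            |             # intermediate tensor mid-forward.
            |             print(h.mean(), h.std())
            |             return torch.relu(h)
            | 
            |     net = Net()
            |     net(torch.randn(3, 4)).sum().backward()
            |     print(net.fc.weight.grad)  # gradients, right away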
        
           | disgruntledphd2 wrote:
            | Nope, PyTorch was inspired by the Lua version of Torch, which
            | well predates TensorFlow. To be fair, basically every other
            | DL framework made the same mistake though.
           | 
           | Also, tensorflow was a total nightmare to install while
           | Pytorch was pretty straightforward, which definitely
           | shouldn't be discounted.
        
             | andyjohnson0 wrote:
             | > Also, tensorflow was a total nightmare to install while
             | Pytorch was pretty straightforward, which definitely
             | shouldn't be discounted.
             | 
             | I think this is a very important point, and I remember
             | sweating blood trying to build a standalone tf environment
              | (admittedly on Windows) in the past. I'm impressed by how
             | much simpler and smoother the process has recently become.
             | 
              | I do prefer Keras to PyTorch though - but that's just me
        
         | tdullien wrote:
          | TF in its first version was spectacularly misdesigned. It was
          | infuriatingly difficult to use, particularly if you were of the
          | "I just want to write code and have it autodiffed + SGDed"
          | school. I found it crazy to use Python to manually construct a
         | computational graph...
        
           | oofbey wrote:
           | TF1 was pretty rough to use, but beat the pants off Theano
           | for usability, which was really the best thing going before
            | it. Sure, it was slow as dirt ("tensorslow"), even though the
            | awkward design was justified by the promise of making it fast.
           | But it was by far the best thing going for a long time.
           | 
           | Google really killed TF with the transition to TF2. Backwards
           | incompatible everything? This only makes sense if you live in
           | a giant monorepo with tools that rewrite everybody's code
            | whenever you change an interface (e.g. inside Google). On
            | the outside it took TF's biggest asset and turned it into a
            | liability. Every library, blog post, Stack Overflow post, etc.
           | talking about TF was now wrong. So anybody trying to figure
           | out how to get started or build something was forced into
           | confusion. Not sure about this, but I suspect it's Chollet's
           | fault.
        
           | dekhn wrote:
           | You need _something_ to construct a graph. Why not pick a
           | well-known language already used in scientific computing and
           | stats /data science? The other options are: pick a lesser
           | known language (lua, julia) or a language not traditionally
           | used for scientific computing (php, ruby), or a compiled
           | language most researchers don't know (C++), or a raw config
           | file format (which you would then use code to generate).
           | 
           | What's really crazy is using Pure, Idiomatic Python which is
           | then Traced to generate a graph (what Jax does). I want my
           | model definitions to be declarative, not implict in the code.
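            | 
            | A tiny sketch of the tracing being objected to (function and
            | shapes made up for illustration):
            | 
            |     import jax
            |     import jax.numpy as jnp
            | 
            |     def model(w, x):
            |         return jnp.tanh(x @ w)  # plain, idiomatic Python
            | 
            |     # JAX traces the function with abstract values and records
            |     # a graph (a jaxpr), which jit then compiles:
            |     w, x = jnp.ones((2, 2)), jnp.ones((3, 2))
            |     print(jax.make_jaxpr(model)(w, x))
            |     fast_model = jax.jit(model)  # compiled on first call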
        
           | whymauri wrote:
           | Python is the least of my concerns with Tensorflow...
           | especially TF 1.0. What a mess it was, and kinda still is.
        
           | 1024core wrote:
           | There's a reason why the TL behind TF (Rajat Monga) got
           | booted out.
        
             | piecerough wrote:
             | What's this based on?
        
               | 1024core wrote:
               | Check Rajat Monga's LinkedIn profile. He's no longer with
               | Google.
        
           | metadat wrote:
           | What's the meaning of _SDG_ in this context?
           | 
           | Edit:
           | 
           | Hypothesis: Stochastic Gradient Descent
        
             | ragebol wrote:
             | Parent typed SGD, which means Stochastic Gradient Descent.
             | An optimization method.
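              | 
              | The update itself is tiny; a generic sketch (not any
              | particular framework's API):
              | 
              |     # One SGD step: nudge each parameter against its
              |     # gradient on a random minibatch, scaled by a
              |     # learning rate lr.
              |     def sgd_step(params, grads, lr=0.01):
              |         return [p - lr * g for p, g in zip(params, grads)]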
        
       | asdfman123 wrote:
       | My theory is that broadly, tech learned not to act like Microsoft
       | in the 90s -- closed off, anti-competitive, unpopular -- but
       | swung too far in the opposite direction.
       | 
       | Google has been basically giving away technology for free, which
       | was easy because of all the easy money. It's good for reputation
       | and attracting the best talent. That is, until a competitor
       | starts to threaten to overtake you with the technology you gave
       | them (ChatGPT based on LLM research, Edge based on Chromium,
       | etc.).
        
         | potatolicious wrote:
         | Ehh, I mildly disagree. I'm not entirely bought-in on the
          | notion that giving away one's technical innovations for free
          | is obviously the right move, but I don't think it's why the
         | company is in trouble.
         | 
         | Chrome is experiencing unprecedented competition because it
         | faltered on the product. Chrome went from synonymous with fast-
         | and-snappy to synonymous with slow-and-bloated.
         | 
         | Likewise Google invented transformers - but the sin isn't
         | giving it away, it's failing to exercise the technology itself
         | in a compelling way. At any moment in time Google could have
         | released ChatGPT (or some variation thereof), but they didn't.
         | 
         | I've made this point before - but Google's problems have little
         | to do with how it's pursuing fundamental research, but
         | everything to do with how it pursues its products. The failure
         | to apply fundamental innovations that happened within its own
         | halls is organizational.
        
           | return_to_monke wrote:
           | > Chrome is experiencing unprecedented competition
           | 
            | From where? 65% globally use Chrome
            | (https://en.m.wikipedia.org/wiki/Usage_share_of_web_browsers).
           | 
            | The only widespread competition is from Safari and FF, both
            | of which have been around longer than Chrome.
        
           | asdfman123 wrote:
            | Well, Google could easily have not shared its technology.
            | 
            | However, the bloat problems you've described are difficult
            | to solve, and are to some degree endemic to large
            | businesses with established products.
        
             | potatolicious wrote:
             | > _" Well, Google could have easily not have shared its
             | technology."_
             | 
             | Sure, but the idea is that if they didn't share their
             | technology, they'd still be in the same spot: they would
             | have invented transformers and _still_ not shipped major
             | products around it.
             | 
                | Sure, maybe OpenAI wouldn't exist, but competitors would
                | find other ways to compete. They always do.
             | 
                | So at best they are _very very slightly_ better off than
                | the alternative, but being secretive IMO wouldn't have
                | been a major change to their market position.
             | 
             | Meanwhile, if Google was better at productizing its
             | research, it matters relatively little what they give away.
             | They would be first to market with best-in-class products,
             | the fact that there would be a litany of clones would be a
             | minor annoyance at best.
        
               | asdfman123 wrote:
                | True, but they only feel the fire now, and you can tell
                | they're rapidly trying to productionize stuff like
               | you've described. It will take time though.
               | 
               | They were too risk averse before.
        
       | dr_dshiv wrote:
       | Lots of great insight. Here's one:
       | 
       | "Given the long timelines of a PhD program, the vast majority of
       | early ML researchers were self-taught crossovers from other
       | fields. This created the conditions for excellent
       | interdisciplinary work to happen. This transitional anomaly is
       | unfortunately mistaken by most people to be an inherent property
       | of machine learning to upturn existing fields. It is not.
       | 
       | Today, the vast majority of new ML researcher hires are freshly
       | minted PhDs, who have only ever studied problems from the ML
       | point of view. I've seen repeatedly that it's much harder for a
       | ML PhD to learn chemistry than for a chemist to learn ML."
        
         | asciimov wrote:
         | > "[...] I've seen repeatedly that it's much harder for a ML
         | PhD to learn chemistry than for a chemist to learn ML."
         | 
          | That's good ol' academic gatekeeping for ya, available wherever
          | PhDs are found.
        
           | mattkrause wrote:
           | There's more to it than that.
           | 
           | CS is unusually easy to learn on your own. You can mess
           | around, build intuition, and check your progress---all on
           | your own and in your pyjamas. It's easy to roll things back
           | if you make a mistake, and hard to do lasting damage. There
           | are tons of useful resources, often freely available. Thus,
           | you can get to an intermediate level quickly and cheaply.
           | 
            | Wet-lab fields have none of that. Hands-on experience and
            | mentorship are hard for beginners to get outside of school.
           | There are a few introductory things online, but what's the
           | Andrew Ng MOOC for pchem?
        
         | knorker wrote:
         | > I've seen repeatedly that it's much harder for a ML PhD to
         | learn chemistry than for a chemist to learn ML.
         | 
         | Haha, I've seen that for so many topics. "It's much easier for
          | someone used to circuit-switched phone networks to learn IP
          | than the other way around", says the person who started with
          | circuit switching.
         | 
         | I just thought "dude, you're literally the worst at IP
         | networking that I've ever met. Your misunderstandings are dug
         | into everything I've seen you do with IP".
        
         | kergonath wrote:
         | > I've seen repeatedly that it's much harder for a ML PhD to
         | learn chemistry than for a chemist to learn ML
         | 
         | I can confirm. We regularly look for people to write some
         | computational physics code, and recently for people using ML to
         | solve solid state physics problems. It's way easier to bring a
         | good physicist or chemist to a decent CS level (either ML or
         | HPC) than the other way around.
        
           | hgsgm wrote:
            | This is because software developers are too good at
            | automating themselves out of a job.
        
             | MichaelZuo wrote:
              | It's also because nobody goes to get a PhD in solid-state
              | physics for the money or career prospects, at least not in
              | the last decade. So it's a small and self-selected group.
        
             | kergonath wrote:
             | We love automation. There is just too much to do, and the
             | field is unbounded. More automation means more papers and
             | more interesting science.
        
           | epolanski wrote:
           | It's the same reason analysts come from math rather than
            | economics degrees.
            | 
            | You can teach a mathematician what he needs to know about
            | finance; you can hardly do the opposite.
        
         | dekhn wrote:
         | As somebody who has crossed the line between ML and chemistry
         | many times, I would love to see: more ML researchers who know
         | chemistry, more chemistry researchers who know ML, and best of
         | all, fully cross-disciplinary researchers who are both masters
         | of chemistry and ML, as those are the ones who move the field
         | farthest, fastest.
        
           | whymauri wrote:
            | You could probably fit all the people who meet the last
            | criterion in the same room (the chemistry side is probably
            | the bottleneck, especially drugs, which is effectively a
            | specialization).
        
           | quickthrower2 wrote:
           | Society is not structured to encourage this. Getting a job
            | sooner is more lucrative. Any breakthrough you make having
            | studied for a couple of decades is the property of a
            | corporation, not you.
        
           | pama wrote:
           | Present. I think there exist many of us. Chemistry is a very
           | wide field though, so not sure if organic synthesis vs
           | theoretical chemistry vs physical chemistry vs biochemistry
           | will end up more useful to help tackle drug discovery
           | problems or other chemistry applications. Same with ML I
           | suppose; even though the specialties are less concrete
           | nowadays, the breadth of publications has far exceeded the
           | breadth of modern chemistry.
        
         | michaelrpeskin wrote:
         | obligatory XKCD: https://xkcd.com/793/
        
           | fknorangesite wrote:
           | Similarly: https://www.smbc-comics.com/comic/2012-03-21
        
           | javajosh wrote:
           | That's a great xkcd, but there are 2 upsides to this arrogant
           | approach. First, arrogance is a nerd-snipe maximizer. Second,
           | there is a small chance you're absolutely right, and you've
            | just obviated a whole field from first principles. It doesn't
            | happen often, but when it does, there is no clout like
            | "emperor's new clothes" clout.
           | 
           | EDIT: The downside, of course, is that you appear arrogant,
           | and people won't like you. This can hurt your reputation
           | because it is apparently anti-social behavior on several
            | levels. I think it's fair to call it a little bit of an
           | intellectual punk rock move that is probably better left to
           | the young. It's an interesting emotional anchor to mapping a
           | new field, though.
        
             | jojosbuddy wrote:
             | Not laughing! /s (physicist here)
             | 
             | Actually most applied physicists like myself go down that
             | path cause we're pretty efficient, lazy folk & skip through
             | as fast as possible--I call it the principle of maximum
             | laziness.
        
             | anonymouskimmer wrote:
             | > Second, there is a small chance you're absolutely right,
             | and you've just obviated a whole field from first
             | principles.
             | 
             | Mostly when I read about things like this happening, it's
             | happening to a formerly intractable problem in mathematics.
             | Do you have examples outside of math?
        
               | passer_byer wrote:
               | Alfred Wegener as the initial author on the theory of
               | plate tectonics comes to mind. He was a trained
                | meteorologist who observed the similarities between
                | geological formations on the South American east coast
                | and the African west coast. He was lucky, in that his
                | father-in-law was a prominent geologist and helped him
                | defend this thesis.
        
               | anonymouskimmer wrote:
               | Oh yeah, revolutionary insights are very important for
               | the advancement of knowledge and the elimination of wrong
               | ideas. But as you wrote, this was the work of a thesis,
               | not a random commenter from another field.
        
           | CogitoCogito wrote:
           | https://xkcd.com/1831/
        
         | qumpis wrote:
          | I'm yet to see an ML PhD be required to learn chemistry to the
          | extent that chemists need to learn ML (especially at the
          | research level).
        
           | kevviiinn wrote:
            | That's because application and research are quite different.
            | If one does a PhD in ML, they learn how to research ML.
            | Someone with a PhD in chemistry learns how to research
            | chemistry; they only need to apply ML to that research.
        
             | selimthegrim wrote:
              | Well, I think the issue is more that if you're Genentech
              | and you need ML people and can't afford to pay them, you're
              | probably better off retraining chemistry PhDs.
        
               | kgwgk wrote:
               | What if they don't need "ML people"? Computational
               | biology has been a thing for a while.
        
               | selimthegrim wrote:
               | Well they had a whole suite of presentations at NeurIPS
               | that suggests they hired a bunch.
        
               | antipaul wrote:
                | https://www.gene.com/scientists/our-scientists/prescient-des...
        
               | kgwgk wrote:
               | They could afford them then...
        
               | kevviiinn wrote:
               | I think you missed my point. Genentech, AFAIK, was not
               | doing research on machine learning as in the principles
               | of how machine learning works and how to make it better.
               | They do biotech research which uses applied machine
               | learning. You don't need a PhD in ML to apply things that
               | are already known
        
               | cmavvv wrote:
               | As a PhD student working on core ML methods with
               | applications in chemistry, I second this. During my PhD,
                | I read very few papers by chemists that were exciting
                | from an ML perspective. Some work very well, but the
                | chemists don't always seem to understand why they
                | made the right choice for a specific problem.
               | 
               | I don't claim that the opposite is easy either. Chemistry
               | is really difficult, and I understand very little.
        
               | gowld wrote:
               | You can get an ML PhD doing applied ML.
        
               | dekhn wrote:
               | Genentech has several ML groups that do mostly applied
               | work, but some do fairly deep research into the model
               | design itself, rather than just applying off-the-shelf
               | systems. For example, they acquired Prescient Design
                | which builds fairly sophisticated protein models
                | (https://nips.cc/Conferences/2022/ScheduleMultitrack?event=59...)
               | and one of the coauthors is the head of Genentech
               | Research (which itself is very similar to Google
               | Research/Brain/DeepMind), and came from the Broad
               | Institute having done ML for decades ('before it was
               | cool').
               | 
                | They have a few other groups as well
                | (https://nips.cc/Conferences/2022/ScheduleMultitrack?event=60...
                | and
                | https://neurips.cc/Conferences/2022/ScheduleMultitrack?event...
                | and
                | https://neurips.cc/Conferences/2022/ScheduleMultitrack?event...).
               | 
               | I can't say I know anybody there who is doing what I
               | would describe as truly pure research into ML; it's not
               | in the DNA of the company (so to speak) to do that.
        
             | ghaff wrote:
             | Back when O'Reilly was still hosting events (sigh), at one
             | of their AI conferences, someone from Google gave a talk
             | about differences between research/academic AI and applied
             | AI. I think she had a PhD in the field herself but
             | basically she made the argument that someone who is just
             | looking to more or less apply existing tools to business or
             | other problems mostly doesn't need a lot of the math-heavy
             | theory you'll get in a PhD program. You do need to
             | understand limitations etc. of tools and techniques. But
             | that's different from the kind of novel investigation
             | that's needed to get a PhD.
        
               | frozenport wrote:
               | >>math-heavy theory you'll get in a PhD program
               | 
                | Lol. With the exception of niche groups in compressed
                | sensing, the math doesn't get too hard. Furthermore, ML
                | isn't math-driven, in the sense that people try things
                | first and somebody comes up with the explanation after
                | the fact.
        
         | SkyBelow wrote:
         | >I've seen repeatedly that it's much harder for a ML PhD to
         | learn chemistry than for a chemist to learn ML.
         | 
          | Perhaps this is selection bias. Among all the chemists, the
          | ones who dabble in ML will likely be the chemists with the
          | highest ML-related aptitude. In contrast, an ML expert on a
          | chemistry project is more likely to have been assigned the
          | work than to be internally driven to explore it, which means
          | less selection bias and thus, on average, less chemistry
          | aptitude.
        
         | ternaus wrote:
          | Loved this argument as well.
          | 
          | Compared to more mature research fields, the entry point to
          | ML is much lower.
          | 
          | Hence I always recommended that people major in Physics,
          | Chemistry, Biology, etc. but look for projects in those
          | fields that could benefit from ML (I have a number of them
          | in Physics).
          | 
          | So that argument was not novel.
          | 
          | But the point that pure ML PhDs will have significantly lower
          | multidisciplinary knowledge is a good one. It could be
          | compensated by the fact that ML is growing fast and all kinds
          | of people join the ride, but still.
        
         | kenjackson wrote:
          | Chemistry is a centuries-old discipline that people study for
          | a full four years in undergrad before getting a PhD in the
          | field.
          | 
          | ML is, practically speaking, a 15-year-old field that PhDs
          | often begin to study after a couple of AI courses in undergrad
          | and a specific track in grad school (while they study other
          | parts of CS as part of their early graduate work).
          | 
          | There's just way less context in ML than in Chemistry.
        
       | zgin4679 wrote:
       | It thinks, therefore it did.
        
         | rossdavidh wrote:
         | So if it doesn't exist now, that means it didn't think?
        
       | ironman1478 wrote:
       | "I've seen repeatedly that it's much harder for a ML PhD to learn
       | chemistry than for a chemist to learn ML. (This may be
       | survivorship bias; the only chemists I encounter are those that
       | have successfully learned ML, whereas I see ML researchers
       | attempt and fail to learn chemistry all the time.)"
       | 
       | This is something that rings really true to me. I work in imaging
       | and it's just very clear that people in ML don't want to learn
       | how things actually work and just want to throw a model at it
       | (this is a generalization obviously, but it's more often than not
        | the case). That only gets you 80% there, which is usually fine,
        | but not fine when the details are make-or-break for a company.
        | Unfortunately that last 20% requires understanding of the domain,
        | and people just don't like digging into a topic to actually
        | understand things.
        
         | drakenot wrote:
         | This seems to kind of be the opposite opinion of The Bitter
         | Lesson[0].
         | 
         | [0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
        
       | DavidSJ wrote:
       | > ... the publication of key research like LSTMs in 2014 ...
       | 
       | Minor nitpick, but LSTMs date to 1997 and were not invented by
       | Google. [1]
       | 
       | [1] Hochreiter and Schmidhuber (1997). Long short-term memory.
       | https://ieeexplore.ieee.org/abstract/document/6795963
        
         | Scea91 wrote:
         | Seems more than a nitpick to me. I find the essay interesting
         | but this line raised some distrust in me. How can someone have
         | these deep insights into Google's ML strategy and the evolution
         | of the field and simultaneously think LSTMs were invented by
         | Google in 2014?
        
           | brilee wrote:
           | sorry, I had a mind fart. I was thinking of seq2seq
           | https://research.google/pubs/pub43155/
           | 
           | Pushing the fix now...
        
             | DavidSJ wrote:
             | seq2seq was indeed a big deal.
        
             | jll29 wrote:
             | You're lucky you could push a fix before Schmidhuber [1]
             | noticed! ;)
             | 
             | [1] https://arxiv.org/abs/2212.11279
        
           | logicchains wrote:
           | >How can someone have these deep insights into Google's ML
           | strategy and the evolution of the field and simultaneously
           | think LSTMs were invented by Google in 2014?
           | 
           | It may not have been accidental; there's a deliberate
           | movement among some people in the ML community to deny Jurgen
           | Schmidhuber credit for inventing LSTMs and GANs.
        
             | seatsniffer wrote:
             | That's something I hadn't heard about. Is there any
             | particular reason for this?
        
               | nighthawk454 wrote:
                | It's become somewhat of a meme, where Schmidhuber
                | seemingly tries to claim credit for nearly everything. I
                | _think_ it's because he published ideas back in the 90s
                | or so that weren't fully executable/realized at the time;
                | later, people figured out how to actually flesh them
                | out, and supposedly didn't cite him
                | appropriately/enough. Often the ideas weren't exactly the
                | same - but rather he claims they're derivatives of the
                | concept he was going for.
               | 
               | https://en.wikipedia.org/wiki/J%C3%BCrgen_Schmidhuber#Cre
               | dit...
        
               | seatsniffer wrote:
               | Thanks for taking the time to explain. I'll check out the
               | link also.
        
       | jayzalowitz wrote:
        | nitpick: Cockroach was built by the team that built Spanner, so
        | your phrasing is off there.
        
         | fizwhiz wrote:
         | nit: Cockroach was founded by a Xoogler but there's no public
         | evidence that they were on the Spanner team at any point.
        
       | dekhn wrote:
        | because once Jeff Dean had solved Google's Maslow problems
       | (scaling web search, making ads profitable, developing high
       | performance machine learning systems) he wanted to return to
       | doing academic-style research, but with the benefit of Google's
       | technical and monetary resources, and not part of X, which never
       | produces anything of long-term value. I know for sure he wanted
       | to make an impact in medical AI and felt that being part of a
       | research org would make that easier/more possible than if he was
       | on a product team.
        
         | dgacmu wrote:
         | I generally agree with this though with some tweaks. I think
         | Jeff wanted to do something that he thought was both awesome
         | (he's liked neural networks for a long time - his undergrad
         | thesis was on them) and likely to have long-term major impact
         | for Google, and he was able to justify the Awesome Thing by
         | identifying a way for it to have significant potential revenue
         | impact for Google via improvements to ad revenue, as well as
         | significant potential "unknown huge transformative
         | possibilities" benefits. But I do suspect that you're right
         | that the heart of it was "Jeff was really passionate about this
         | thing".
         | 
         | Of course, this starts to get at different versions of the
         | question: Why did Google Brain exist in 2012 as a scrappy team
         | of builders, and why did Brain exist in 2019 as a mega-
         | powerhouse of AI research? I think you and I are talking about
         | the former question, and TFA may be focusing more on the second
         | part of the question.
         | 
         | [I was randomly affiliated with Brain from 2015-2019 but wasn't
         | there in the early days]
        
           | dekhn wrote:
           | It grew from the scrappy group to the mega-powerhouse by
           | combining a number of things: being the right place at the
           | right time with the right resources and people. They had a
            | great cachet - I was working hard to join Brain in 2012
           | because it seemed like they were one of the few groups who
           | had access to the necessary CPU and data sets and mental
           | approaches that would transform machine learning. And at that
            | time, they used that cachet to hire a bunch of up-and-coming
           | researchers (many of them Hinton's students or in his sphere)
           | and wrote up some pretty great papers.
           | 
            | Many people outside of Brain were researchers working on
            | other, boring projects who transferred in, bringing their
           | internal experience in software development and deployment,
           | which helped a lot on the infra side.
        
         | leoh wrote:
         | Waymo?
        
           | dekhn wrote:
            | Waymo hasn't produced anything of long-term value yet. And
            | everything about it that worked well wasn't due to Google X.
        
             | ra7 wrote:
             | Can you expand why being part of Google X hinders a team? I
             | believe Waymo "graduated" from Google X to its own entity.
        
               | dekhn wrote:
               | X exists as a press-release generation system, not as a
               | real technology creation system. They onboard many
                | impractical projects that are either copies of something
                | already being done in industry ("but with more Google and
                | ML!") or don't have a market (space elevators).
        
               | alphabetting wrote:
                | Waymo has developed the modern autonomous vehicle from
                | the ground up. It's basically a matter of scale now. It's
                | a mindblowing tech stack. Riding in one for the first
                | time is much more otherworldly than using GPT for the
                | first time. The value of the technology is far greater
                | than whatever PR they have generated (not many people
                | know about it).
        
               | dekhn wrote:
               | I have infinite respect for the process that Waymo
               | followed to get to where they are. And I'm impressed that
               | Google continued to fund the project and move it forward
               | even when it represents such a long-term bet.
               | 
                | But it's not a product that has any real revenue. And
                | most car companies keep their distance from Google self-
                | driving tech because they're afraid: afraid Google wants
                | to put them out of business. It's unclear if Google could
                | ever sell (as a product, as an IP package, etc.) what
                | they've created, because it depends so deeply on a
                | collection of technology Google makes available to Waymo.
        
               | alphabetting wrote:
               | I was just disputing "X exists as a press-release
               | generation system, not as a real technology creation
               | system." Definitely agree the path to profitability will
               | be tough.
        
           | mechagodzilla wrote:
           | Waymo is kind of like DeepMind - they're costing Alphabet
           | billions of dollars a year for a decade+ with no appreciable
           | revenue to show for it, but they're working on something
           | _neat_ , so surely it must be good?
        
         | choppaface wrote:
         | I agree that the OP makes a bunch of interesting points, but I
         | think historically Brain really grew out of what Dean wanted to
         | do and the fact that he wanted it to be full-stack, e.g.
         | including the TPU. Also, crucially, Brain would use Google data
         | and contribute back to Ads/Search directly versus Google X
         | which was supposed to be more of an incubator.
         | 
         | But it's also notable how the perspective of an ex-Brain
         | employee might differ from what sparked the Brain founders in
         | the first place.
        
         | marricks wrote:
         | [flagged]
        
           | calrissian wrote:
           | > Also not surprised at the immediate down votes for
           | questioning Googles new AI lead!
           | 
           | That's because you are wrong to pretend he did anything wrong
           | by firing T.G. And also, because you added this weird
           | lie/mudslinging/whatever on top of it:
           | 
           | > while she was on vacation
        
           | Traubenfuchs wrote:
           | A lot of people, especially on hacker news, feel disdain for
           | researchers of ethics, bias and fairness, as they are
           | perceived as both holding technology back and profiting from
           | advances in it (that they can then analyse and criticize).
        
             | calrissian wrote:
              | I don't think you're necessarily wrong in your assessment
             | about HN and AI enthusiasts, but also in this case I think
             | it's more accurate to talk about a Twitter agitator and
             | race-baiter [1], rather than a "researcher of ethics, bias
             | and fairness".
             | 
             | [1] https://twitter.com/timnitGebru/status/1651055495271137
             | 280?s...
        
           | hintymad wrote:
            | Controversy over what? Did you read Gebru's paper? For
            | instance, her calculation of the carbon footprint of
            | training BERT assumes that companies will train BERT 24x7.
            | Gebru is a disgrace to the community because she always, I
            | mean literally always, attacks her critics' motives. You
            | think bias is a data problem? You're a bigot (see her
            | dispute with LeCun). You disagree with my assessment of an
            | ML model? You are a white male oppressor (her attack on a
            | Google SVP).
            | 
            | Gebru is not a researcher. She is a modern-age Trofim
            | Lysenko, who politicizes everything and weaponizes political
            | correctness.
           | 
           | And yeah, she deserves to be fired. Many times.
        
             | erenyeager wrote:
              | Ok, but the lack of underrepresented minorities in the
              | field, and the important role people like Gebru played in
              | extending the political power and status of minorities, is
              | ok to extinguish? We need more than just a white male /
              | Chinese male / Indian male monoculture of "STEM lords".
              | This is already recognized in fields like medicine, where
              | minorities treating minorities results in better outcomes,
              | hence the greater push to open positions of status to
              | minorities.
        
               | Silverback_VII wrote:
               | I personally believe that racial or diversity quotas are
               | even more racist or sexist. We should expect minorities
               | to develop their own culture of intellectual excellence.
               | After all, they are no longer children. Giving them a
               | shortcut is a form of insult. Providing someone an
               | advantage based on their race or sex at the expense of
               | someone else who is more qualified due to their race or
               | sex is nonsensical. Companies may fail as a result of
               | such practices. Ultimately, what truly matters is how
               | innovative and efficient a company is.
        
               | qumpis wrote:
                | Yes, uplifting minorities is great, but everyone should
                | be held equally accountable when it comes to workplace
                | conduct.
        
               | logicchains wrote:
               | >Ok but the lack of underrepresented minorities in the
               | field and the important role people like Gebru played in
               | extending the political and status of minorities is ok to
               | extinguish?
               | 
               | Yes it's okay to extinguish it if hiring underrepresented
               | minorities means hiring bad actors like her who
               | contribute nothing of value. Scientific truth is
               | scientific truth; if you hire people for the color of
               | their skin or their sexuality instead of their ability to
               | produce truth, you slow the progress of science and make
               | the world worse for everyone.
        
               | belval wrote:
               | This seems like a pretty bad faith argument that
               | illustrates exactly the point the parent comment was
               | making. Firing Gebru for insubordination is not
               | "extinguishing" anything, it's getting rid of an employee
               | that was actively taking pot shots at the company in her
               | paper and somehow equated getting fired with anti-
               | minority bias. In practice, Google is already much more
               | tolerant of activism than the average tech company and
               | she was unable to play by the corporate rules.
        
               | hintymad wrote:
               | > important role people like Gebru played in extending
               | the political and status of minorities is ok to
               | extinguish?
               | 
                | No, she didn't. Attacking everyone over imputed motives
                | and identities is the worst kind of activism. She
                | alienated people by attacking them without basis. She
                | disdained those who truly fought for the fairness and
                | justice of every race. She left a bad taste with people
                | who truly cared about progress. Yes, it's totally worth
                | "extinguishing" her role, as her role is nothing but a
                | political charade.
                | 
                | As for under-represented minorities, do you even know
                | about the Chinese Exclusion Act? Do you know how large
                | the pipeline of STEM students is across different races,
                | and why there was a gap? Do you know why the median GPA
                | of high school students in the inner city was 0.5X out
                | of 4? Why was that? The questions could fill a book.
                | Yeah, activism is easy, as long as you have the right
                | skin and a shameless attitude. Solving real problems is
                | hard.
        
           | renewiltord wrote:
           | These safety people guarantee a useless product that never
           | does unsafe things. ChatGPT proved that you can have a
           | product do unsafe things and still be useful if you put a
           | disclaimer on it. Overall, as a user, I couldn't give a damn
           | if things are unsafe by the definition of this style of
           | ethicist. They were a ZIRP and my life is better for their
           | absence.
        
           | mupuff1234 wrote:
           | Or maybe it's just not perceived as controversial?
           | 
           | Her boss told her to do something, she refused and got
           | sacked.
        
           | qumpis wrote:
            | Wiki doesn't seem to give details on the situation, nor on
            | the paper in question.
        
           | dekhn wrote:
            | That's a very simplified version of the story, but I would
            | say that Dean greatly reduced his stature when he defended
            | Megan Kacholia over her abrupt termination of Timnit. Note
            | that Timnit was verbally abusive to Jeff's reports - anybody
            | who worked there could see what she was posting to internal
            | group discussions - so her time at Google was limited, but
            | most managers would say that she should at least have been
            | put on a PIP and given 6 months.
           | 
           | Dean has since cited the paper in a subsequent paper (which
           | tears apart the Stochastic Parrots paper).
        
             | marricks wrote:
             | Google has since fired other folks on her team and was in
             | crisis mode to protect Dean. Like, I'm not really going to
             | give them the benefit of the doubt on this.
             | 
              | When people brought Dean up, Timnit came up as something to
              | consider; it's interesting to see how all anyone has to say
              | in these threads is reverence towards him. People should
              | try to see the whole picture.
        
               | opportune wrote:
               | Being somewhat involved in one bad thing doesn't justify
               | cancelling someone.
               | 
               | To my knowledge Dean was essentially doing post-hoc
               | damage control for what one of the middle managers in his
               | org did. Even if they did want Timnit gone (as others
               | mention, you are getting only one side of the story in
               | media) they did it in a bad way, for sure. At the same
               | time I don't think one botched firing diminishes decades
               | of achievements from a legitimately kind person.
        
               | jeffbee wrote:
               | Timnit and the other ex-ML ethics crowd who got fired
               | from Google seem like some of the most ignorant people
               | around. I don't defend Dean reflexively, it just seems
               | like he is on the right side of the issue. For example,
               | here is Emma Strubell accusing Dean of creating a "toxic
               | workplace culture" after he and David Patterson had to
               | refute her paper in print.
               | 
               | https://twitter.com/strubell/status/1634164078691098625?l
               | ang...
               | 
                | The thing is, if David Patterson and Jeff Dean think
                | your numbers for the energy cost of machine learning
                | might be wrong, then you are probably wrong. These ML
                | meta-researchers are not practitioners and appear to
                | have no idea what they are talking about. Keeping a
                | person like Timnit or Strubell on staff seems like it
                | costs more than it's worth.
        
               | dragonwriter wrote:
               | > Timnit and the other ex-ML ethics crowd
               | 
                | Timnit is ex-Google, but very much not ex-ML ethics (she
                | founded the Distributed AI Research Institute, focused on
                | the field, in late 2021). The same is true of Margaret
                | Mitchell, who has been at Hugging Face since 2021.
        
           | Silverback_VII wrote:
           | She appears to be a symbol for everything that went wrong at
           | Google. These are the kind of problems that arise when life
           | is too easy, just before the downfall. In other words,
           | decadence. How else can one explain that Google's AI research
           | was dethroned by OpenAI?
        
             | hintymad wrote:
              | Yeah, Dean's fault was hiring such people in the first
              | place. If you hire an activist, you get activism. And if
              | you hire someone whose livelihood depends on finding more
              | problems, well, they will scream more problems, one way or
              | another. Otherwise, why would the University of Michigan
              | end up with one DEI officer per three staff members?
        
       | antipaul wrote:
       | Why does Google X exist?
        
       | liuyipei wrote:
        | Google has good engineers and a long history of high-throughput
        | computing. This, combined with a lack of understanding of what
        | ML research is like (versus deployment), led to the original TF1
        | API. Also, the fact that Google has good engineers working in a
        | big bureaucracy probably hid a lot of the design problems as
        | well.
        | 
        | TF2 was a total failure, in that TF1 can do a few things really
        | well once you get the hang of it, but TF2 was just a strictly
        | inferior version of PyTorch, further plagued by confusion
        | carried over from TF1. In an alternate history where Google
        | pivoted to JAX much earlier and more aggressively, they could
        | still be in the game. I speak as someone who at some point knew
        | all the intricacies and differences between TF1 and TF2.
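        | 
        | To make the gap concrete, here's a minimal sketch (my
        | illustration, not from TFA; the first snippet assumes a
        | TensorFlow 1.x install, the second a 2.x install):
        | 
        |     # TF1: declare a static graph, then execute it in a Session
        |     import tensorflow as tf  # TensorFlow 1.x
        |     x = tf.placeholder(tf.float32, shape=[None, 3])  # symbolic input
        |     w = tf.Variable(tf.ones([3, 1]))
        |     y = tf.matmul(x, w)  # builds graph nodes; computes nothing yet
        |     with tf.Session() as sess:  # computation happens only here
        |         sess.run(tf.global_variables_initializer())
        |         print(sess.run(y, feed_dict={x: [[1., 2., 3.]]}))
        | 
        |     # TF2 / PyTorch-style: eager execution, ordinary Python flow
        |     import tensorflow as tf  # TensorFlow 2.x
        |     w = tf.Variable(tf.ones([3, 1]))
        |     x = tf.constant([[1., 2., 3.]])
        |     print(tf.matmul(x, w))  # runs immediately, easy to debug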
        
       | htrp wrote:
       | To prevent talented people from developing tech elsewhere.
        
         | dbish wrote:
         | MSR seemed like it had a similar underlying purpose (or at
         | least nice side effect).
        
       | nologic01 wrote:
       | > it is becoming increasingly apparent to Google that it does not
       | know how to capture that value
       | 
        | To paraphrase, it's the business model, stupid.
       | 
        | Inventing algorithms, building powerful tools and
        | infrastructure, etc. is actually a tractable problem: you can
        | throw money and brains at it (and the latter typically follows
        | the former). While the richness of research fields is not
        | predictable, you can bet that the general project of employing
        | silicon to work with information will keep bearing fruit for a
        | long time. So creating that value is not the problem.
       | 
        | The problem with capitalizing (literally) on that intellectual
        | output is that it can only be done 1) within a given business
        | model that can channel it effectively or 2) through the
        | invention of totally new business models. 1) is a challenge:
        | the billions of users to whom AI goodies can be surfaced are
        | not customers, they are the product. They don't pay for
        | anything and they don't create any virtuous circle of
        | requirements and solutions. Alas, option 2), inventing major
        | new business models, is highly non-trivial. The track record is
        | poor: the only major alternative business model to adtech (the
        | cloud unit) was not invented there anyway, and in any case
        | selling sophisticated IT services, whether to consumers or
        | enterprise, is a can of worms that others have much more
        | experience in.
       | 
        | For an industrial research unit to thrive, its output must be
       | congruent with what the organization is doing. Not necessarily in
       | the detail, but definitely in the big picture.
        
       | amoss wrote:
       | > Today, thought leaders casually opine on how and where ML will
       | be useful, and MBAs feel like this is an acceptable substitute
       | for expert opinion.
       | 
       | Sounds like standard operating procedure.
        
         | kps wrote:
         | Sounds like something LLMs would actually be good for. They're
         | not getting us fusion power or cancer cures.
        
       | [deleted]
        
       | vrglvrglvrgl wrote:
       | [dead]
        
       | nipponese wrote:
        | Organized, concise, and not wordy. Props to the writer; he shows
        | a high degree of written-communication skill on a topic
        | frequently cluttered with jargon.
        
       | light_hue_1 wrote:
       | > The next obvious reason for Google to invest in pure research
       | is for the breakthrough discoveries it has yielded and can
       | continue to yield. As a rudimentary brag sheet, Brain gave Google
       | TensorFlow, TPUs, significantly improved Translate, JAX, and
       | Transformers.
       | 
       | Except that these advances have made other companies an
       | existential threat for Google. 2 years ago it was hard to imagine
       | what could topple Google. Now a lot of people can see a clear
       | path: large language models.
       | 
       | From a business perspective it's astounding what a massive
       | failure Google Brain has been. Basically nothing has spun out of
       | it to benefit Google. And yet at the same time, so much has
       | leaked out, and so many people have left with that knowledge
       | Google paid for, that Google might go the way of Yahoo in 10
       | years.
       | 
        | This is the simpler explanation of the Brain-DeepMind merger:
        | both Brain and DeepMind have fundamentally failed as businesses.
        
         | [deleted]
        
         | sushid wrote:
         | It truly feels like Google Brain could be considered Google's
         | equivalent of Bell Labs in the 70s.
        
         | cma wrote:
         | > And yet at the same time, so much has leaked out, and so many
         | people have left with that knowledge Google paid for, that
         | Google might go the way of Yahoo in 10 years.
         | 
         | Google couldn't have hired the talent they did without allowing
         | them to publish.
        
         | dekhn wrote:
          | Google never talked much about it externally, but Google
          | Research (the predecessor to Brain) had a single project which
          | almost entirely funded the entire division - a growth-oriented
          | machine learning system called Sibyl. What was Sibyl used for?
          | Growing YouTube and Google Play and other products by making
          | them more addictive. Sibyl wasn't a very good system (I've
          | never seen a product that had more technical debt) but it did
          | basically "pay for" all of the research for a while.
        
           | temac wrote:
           | Seems to be quite evil though.
        
             | onlyrealcuzzo wrote:
             | Every business is essentially in the business of making
             | people addicted to their products in some way.
             | 
             | You're an oil company - you want people to drive as much as
             | possible.
             | 
             | You're an airline company - you want people to fly all over
             | the world as much as possible.
             | 
             | You're a fashion company - you want people to buy new
             | clothes constantly.
             | 
             | You're a beverage company - you want people drinking your
             | drink all the time instead of water.
             | 
             | You're an Internet advertising company - you want people's
             | eyeballs on your products as much as possible (to blast
             | them with ads).
             | 
             | It's just business.
        
               | wetmore wrote:
               | That doesn't mean it's not evil though?
        
             | packetslave wrote:
             | It's evil if you phrase it (as OP did) as "getting people
             | addicted to YouTube".
             | 
             | Less so if you phrase it "show people recommendations that
             | they're likely to actually click on, based on what they've
             | watched previously", which is what Sybil really was.
        
           | throwaway29303 wrote:
           | In case anyone is wondering what Sibyl is all about, here's a
           | video
           | 
           | https://www.youtube.com/watch?v=3SaZ5UAQrQM
        
             | dekhn wrote:
              | Yep - that goes into a fair amount of detail. Sibyl was
              | retired but many of the ideas lived on in TFX. I worked on
              | it a bit and it was definitely the weirdest, most
              | technical-debt-ridden system I've ever seen, but it was
              | highly effective at getting people addicted to watching
              | YouTube and downloading games that showed ads.
        
         | abtinf wrote:
         | It's like Xerox PARC all over again.
        
         | jeffbee wrote:
         | > Basically nothing has spun out of it to benefit Google
         | 
         | Quite a ridiculous statement. Google has inserted ML all over
         | their products. Maybe you just don't notice, to their credit.
         | But for example the fact that YouTube can automatically
         | generate subtitles for any written language from any spoken
         | language is a direct outcome of Google ML research. There are
         | lots of machine-inferred search ranking signals. Google Sheets
         | will automatically fill in your formulas, that's in-house ML
         | research, too.
        
           | light_hue_1 wrote:
           | > Quite a ridiculous statement. Google has inserted ML all
           | over their products. Maybe you just don't notice, to their
           | credit. But for example the fact that YouTube can
           | automatically generate subtitles for any written language
           | from any spoken language is a direct outcome of Google ML
           | research. There are lots of machine-inferred search ranking
           | signals. Google Sheets will automatically fill in your
           | formulas, that's in-house ML research, too.
           | 
           | I noticed all the toy demos. None of these have provided
           | Google with any competitive advantage over anyone.
           | 
           | For the investment, Google Brain has been a massive failure.
           | It provided Google with essentially zero value. And helped
           | create competitors.
        
             | sudosysgen wrote:
             | Automatic subtitles and translation is actually a huge
             | feature which is very useful to the many people that don't
             | speak English. It definitely did provide Google with a lot
             | of value.
        
               | light_hue_1 wrote:
               | > Automatic subtitles and translation is actually a huge
               | feature which is very useful to the many people that
               | don't speak English. It definitely did provide Google
               | with a lot of value.
               | 
               | It lost Google immense value.
               | 
               | Before Google Brain the only speech recognizers that
               | halfway worked were at Google, IBM and Amazon. And Amazon
               | had to buy a company to get access.
               | 
               | After Google Brain, anyone can run a speech recognizer.
               | One that is state of the art. There are many models out
               | there that just work well enough.
               | 
                | Google went from having an ok speech recognizer that
                | sort of worked in a few languages and gave YouTube an
                | advantage that no company aside from IBM and Amazon
                | could touch - neither of which competes with Google
                | much. No startup could have anything like Google's
                | captioning. It was untouchable. Like, speech recognition
                | researchers actively avoided this competition, that's
                | how inferior everyone was.
               | 
               | To now, post Google Brain, any startup can have captions
               | that are as good as YouTube's captions. You can run
               | countless models on your laptop today.
               | 
               | This is a huge competitive loss for Google.
               | 
               | They got a minor feature for YouTube and lost one of the
               | key ML advantages they had.
        
               | sp332 wrote:
               | But little startups are not threatening YouTube. And now
               | they are paying money to Google for the use of Google
               | Brain.
        
               | light_hue_1 wrote:
               | You can run your speech recognizer on AWS, you don't need
               | to give Google a cent.
               | 
                | Whatever comes after YouTube, whether it's a startup or
                | not, will have top-notch captioning, just like YouTube.
               | Google gave up a massive competitive advantage with huge
               | technical barriers.
        
               | sudosysgen wrote:
               | It's too late now. YouTube penetrated every single market
                | outside of China and is now unshakeable thanks to
                | network effects.
               | 
               | It completely paid off already, and Google is going to be
               | reaping the dividends of the advantage they had in
               | emerging markets for the next 15 years.
               | 
               | The real advantage has always been network effect. Purely
               | technological moats don't work in the long term. People
               | catching up was inevitable, but Google was able to cash
                | it in for an untouchable worldwide lead, and on top of that
               | they made their researchers happy and recruited others by
               | allowing them to publish, and they don't need to maintain
               | an expensive purely technical lead.
        
             | loudmax wrote:
             | > It provided Google with essentially zero value.
             | 
             | Or rather, it provided enormous value. The failure was for
             | Google to actually capture more than a tiny fraction of
             | that value.
             | 
             | No amount of engineering brilliance is going to save Google
             | as long as the management is dysfunctional.
        
               | light_hue_1 wrote:
               | Oh, I don't disagree at all that Google Brain provided
               | enormous value for society. Just like Xerox PARC. Both of
               | them were a massive drain on resources and provided
               | negative value for the parent company.
               | 
               | And I agree, it's not Google Brain's fault. Google's
               | management has been a disaster for a long time. It's just
               | amazing how you can have every advantage and still
               | achieve nothing.
        
           | QuercusMax wrote:
           | A ton of the new stuff in the pixel cameras is ML powered,
           | along with a lot of Google Photos.
        
         | asdfman123 wrote:
         | Google has very good LLMs. It just let OpenAI beat them to the
         | punch by releasing them earlier.
         | 
         | As an established business, Google felt it had a lot to lose by
         | releasing "unsafe" AI into the world. But OpenAI doesn't have a
         | money printing machine, and it's sink-or-swim for them.
        
           | snapcaster wrote:
            | I keep hearing this, but Bard sucks so badly when I've tried
            | to use it; compared with GPT-4 the results are like night and
            | day. What makes you so confident they have "secret" LLMs that
            | are superior?
        
             | jeffbee wrote:
             | Bard is in full-scale production to all U.S. users for
             | free. GPT-4 costs $20/month. Rather a big difference in the
             | economics of the situation. Also it's pretty clear that
             | even the $20 is highly subsidized. Microsoft is willing to
             | incinerate almost any amount of money to harm Google.
        
               | snapcaster wrote:
                | Free but unusably bad <<<<<<<<<<< $20 but using it 20-30
                | times a day at work.
                | 
                | Seriously, have you tried it? Compared it to even GPT-3?
                | It really, really sucks.
        
               | jeffbee wrote:
               | Yes I think it has less utility than the free version of
               | ChatGPT, but it also has some nice points, is faster, and
               | has fewer outages.
               | 
               | For my use case none of them is worth using. All three of
               | the ones we've mentioned in this thread will just make up
               | language features that would be useful but don't exist,
               | and all three of them will hallucinate imaginary sections
               | of the C++ standard to explain them. Bard loves
               | `std::uint128_t`. GPT-4 will make up GIS coordinate
               | reference systems that don't exist. For me they are all
               | more trouble than they are worth, on the daily.
        
               | cubefox wrote:
               | GPT-4 is also free to all users, not just from the US,
               | with 200 turns per day and 20 per conversation. It's just
               | called "Bing Chat mode" instead of GPT-4. Of course
               | Microsoft is losing money with it. But Microsoft can
               | afford to lose money.
        
             | asdfman123 wrote:
             | Have you tried PaLM?
             | 
             | I work for Google and have been playing with it. It's
             | pretty good.
             | 
             | The decision to release Bard, an LLM that was clearly not
             | as good as ChatGPT, struck me as reactive and is why people
             | think Google is behind. I'd think so too if I had just
             | demoed Bard.
        
               | snapcaster wrote:
                | No, but I'd love to try it. I'm using these models
                | 20-30 times a day throughout the average work day for
                | random tasks, so I have a pretty good sense of
                | performance levels. I didn't think it was available to
                | the public yet, but I just saw it's apparently on Google
                | Cloud now; I'll have to try it out. How would you
                | compare PaLM with GPT-4, if you've had a chance to try
                | both?
        
               | asdfman123 wrote:
               | Seems pretty similar. In general Google LLMs seem better
               | suited for just conversation and ChatGPT is built to
               | honor "write me X in the style of Y" prompts.
               | 
               | The latter is more interesting to play around with,
               | granted, and I think it's an area where Google can catch
               | up, but it doesn't seem like a huge technical hurdle.
        
       ___________________________________________________________________