[HN Gopher] Automating my job with GPT-3
       ___________________________________________________________________
        
       Automating my job with GPT-3
        
       Author : daolf
       Score  : 200 points
       Date   : 2021-01-27 16:27 UTC (6 hours ago)
        
 (HTM) web link (blog.seekwell.io)
 (TXT) w3m dump (blog.seekwell.io)
        
       | rotten wrote:
        | I would have used the word "percentage" rather than "percent". I
        | wonder if the slightly more precise English would have helped?
        
       | frompdx wrote:
       | _Woah. I never gave it my database schema but it assumes I have a
       | table called "users" (which is accurate) and that there's a
       | timestamp field called "signup_time" for when a user signed up._
       | 
       | I am definitely impressed by the fact that it could get this
       | close without knowledge of the schema, and that you can provide
       | additional context about the schema. Seems like there is a lot of
       | potential for building a natural language query engine that is
       | hooked up to a database. I suppose there is always a risk that a
       | user could generate a dangerous query but that could be
       | mitigated.
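        | 
        | A rough sketch of what such an engine could look like (a
        | hypothetical `users` schema, and the completion-style openai
        | Python client of the time; the guard is a crude first pass,
        | not real security):
        | 
        |     import openai
        | 
        |     # Hypothetical schema hint; in practice you'd build it
        |     # from the database's information_schema.
        |     SCHEMA = "Table users(id, email, signup_time)"
        | 
        |     def nl_to_sql(question):
        |         prompt = (SCHEMA
        |                   + "\nTranslate the question into SQL."
        |                   + "\nQ: " + question + "\nSQL:")
        |         resp = openai.Completion.create(
        |             engine="davinci", prompt=prompt,
        |             max_tokens=150, temperature=0,
        |             stop=["\n\n"])
        |         return resp.choices[0].text.strip()
        | 
        |     def is_read_only(sql):
        |         # Crude mitigation: allow only read queries.
        |         words = sql.split()
        |         return bool(words) and words[0].upper() in (
        |             "SELECT", "WITH")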
       | 
       | Not related to the article but what exactly is "open" about
       | OpenAI?
        
         | vladsanchez wrote:
         | No magic needed, only metadata. ;-)
        
         | gumby wrote:
         | > but what exactly is "open" about OpenAI?
         | 
         | Nothing. At this point it's simply openwashing.
        
         | wayeq wrote:
         | > Not related to the article but what exactly is "open" about
         | OpenAI?
         | 
         | Microsoft's checkbook?
        
         | pizza wrote:
         | Closed and open.. ClopenAI
        
         | vertis wrote:
         | Nothing. It was a not-for-profit but it converted itself to a
         | for-profit entity and made an exclusive deal with Microsoft for
         | GPT-3 (not sure how it's exclusive given all the beta API
         | users).
         | 
          | Granted, training your own copy of GPT-3 would be beyond most
          | people's means anyway (I think I read an estimate that it was a
          | multi-million-dollar effort to train a model that big).
         | 
         | I do think it's a bit dodgy to not change the name though when
         | you change the core premise.
        
           | swalsh wrote:
            | I would love an ACTUAL open AI platform. Someone should
            | build a SETI@Home-like platform to allow normal people to
            | aggregate their spare GPU time.
        
           | gwern wrote:
           | Incorrect. It is still a not-for-profit, which _owns_ a for-
           | profit entity. It is fairly common for charities to own much
           | or all of for-profit entities (eg Hershey Chocolate, or in
            | today's Matt Levine newsletter, I learned that a quarter of
           | Kellogg's is still owned by the original Kellogg charity).
           | And the exclusive deal was not for GPT-3, in the sense of any
           | specific checkpoint, but for the _underlying code_.
        
             | vertis wrote:
             | I stand corrected.
        
             | jfrunyon wrote:
             | - Charity is not the same as not-for-profit
             | 
             | - Hershey is a public company. Most certainly NOT owned by
             | either a charity or a non-profit. The only way a non-profit
             | comes into the picture is that a significant portion of
             | their 'Class B' stock is owned by a trust which is
             | dedicated to a non-profit (the Milton Hershey School).
             | (https://www.thehersheycompany.com/content/dam/corporate-
             | us/d... pp 36-37)
        
           | frompdx wrote:
           | That's disappointing.
           | 
           |  _OpenAI's mission is to ensure that artificial general
           | intelligence (AGI)--by which we mean highly autonomous
           | systems that outperform humans at most economically valuable
           | work--benefits all of humanity._
           | 
           | Certainly makes that statement seem less credible.
        
             | greentrust wrote:
             | What if, and bear with me, strong AI poses real dangers and
             | open sourcing extremely powerful models to everyone
             | (including malicious actors and dictatorial governments)
             | would actually harm humanity more than it benefits it?
        
               | MrGilbert wrote:
               | > (including malicious actors and dictatorial
               | governments) would actually harm humanity more than it
               | benefits it?
               | 
                | I'm really glad that weapons aren't open source. Imagine
                | if every dictatorship could get their hands on weapons.
                | Luckily, they're hidden behind a paywall. /s
        
           | derefr wrote:
           | GPT-3 is the same "tech" as GPT-2, with more training. GPT-2
           | is FOSS. I have a feeling that OpenAI's next architecture (if
            | there ever is one) would also be FOSS.
           | 
           | I think OpenAI just chose a bad name for this for-profit
           | initiative -- "GPT-3" -- that makes it sound like they were
           | pivoting their company in a new direction with a new
           | generation of tech.
           | 
           | Really, GPT-3 should have been called something more like
           | "GPT-2 Pro Plus Enterprise SaaS Edition." (Let's say
           | "GPT-2++" for short.) Then it would have been clear that:
           | 
           | 1. "GPT-2++" is not a generational leap over "GPT-2";
           | 
           | 2. an actual "GPT-3" would come later, and that it _would_ be
           | a new generation of tech; and
           | 
           | 3. there would be a commercial "GPT-3++" to go along with
           | "GPT-3", just like "GPT-2++" goes along with "GPT-2".
           | 
           | (I can see why they called it GPT-3, though. Calling it
           | "GPT-2++" probably wouldn't have made for very good news
           | copy.)
        
             | armoredkitten wrote:
             | You make it sound as if GPT-3 is just the same GPT-2 model
             | with some extra Enterprise-y features thrown in. They're
             | completely different models, trained on different data, and
              | of vastly different sizes. GPT-2 had 1.5B parameters, and
             | GPT-3 has 175B. It's two orders of magnitude larger.
             | 
             | Sure, both models are using the same structures (attention
             | layers, mostly), so it's a quantitative change rather than
             | a qualitative change. But there's still a hell of a big
             | difference between the two.
        
               | derefr wrote:
               | Right, but GPT-2 was the name of the particular ML
                | _architecture_ they were studying the properties of, not
               | the name of any specific model trained on that
               | architecture.
               | 
               | There was _a_ pre-trained GPT-2 model offered for
                | download. The whole "interesting thing" they were
                | publishing about was that models trained under the GPT-2
                | ML architecture were uniquely good at transfer learning,
                | and so _any_ pre-trained GPT-2 model of sufficient size
                | would be extremely useful as a "seed" for doing your own
               | model training on top of.
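                | 
                | (For what it's worth, a minimal sketch of that "seed"
                | workflow with the Hugging Face transformers library -
                | my own example, not anything OpenAI ships:)
                | 
                |     from transformers import (
                |         AutoModelForCausalLM, AutoTokenizer,
                |         TextDataset,
                |         DataCollatorForLanguageModeling,
                |         Trainer, TrainingArguments)
                | 
                |     tok = AutoTokenizer.from_pretrained("gpt2")
                |     model = AutoModelForCausalLM.from_pretrained(
                |         "gpt2")
                | 
                |     # Fine-tune the released checkpoint on your
                |     # own corpus (transfer learning).
                |     data = TextDataset(
                |         tokenizer=tok,
                |         file_path="my_corpus.txt",  # hypothetical
                |         block_size=128)
                |     trainer = Trainer(
                |         model=model,
                |         args=TrainingArguments(
                |             output_dir="gpt2-ft",
                |             num_train_epochs=1),
                |         train_dataset=data,
                |         data_collator=DataCollatorForLanguageModeling(
                |             tokenizer=tok, mlm=False))
                |     trainer.train()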
               | 
               | They built one such model, but that model was not,
               | itself, "GPT-2."
               | 
               | Keep in mind, the training data for that model is open;
               | you can download it yourself and reproduce the offered
               | base-model from it if you like. That's because GPT-2 (the
               | architecture) was formal academic computer science:
               | journal papers and all. The particular pre-trained model,
               | and its input training data, were just published as
               | experimental data.
               | 
                | It is under _that_ lens that I call GPT-3 "GPT-2++."
                | It's a different _model_, but it's the same _science_.
                | The model was never OpenAI's "product." The science
               | itself was/is.
               | 
               | Certainly, the SaaS pre-trained model named "GPT-3" is
               | qualitatively different than the downloadable pre-trained
               | base-model people refer to as "GPT-2." But so are all the
               | various trained models people have built by training
               | GPT-2 _the architecture_ with their own inputs. The whole
                | class of things trained on that architecture are
                | fundamentally all "GPT-2 models." And so "GPT-3" is just
                | one such "GPT-2 model." Just a really big, surprisingly
                | useful one.
        
             | marcosdumay wrote:
             | GPT-2 Community and GPT-2 Enterprise.
             | 
              | Those terms are so widespread that I wouldn't be
             | surprised if GPT-2 could suggest them.
        
               | derefr wrote:
               | What I meant by my last statement is that no news outlet
               | would have wanted to talk about "the innovative power of
               | GPT-2 Enterprise." That just sounds fake, honestly.
                | _Every_ SaaS company wants to talk about the "innovative
                | power" of the extra doodads they tack onto their
                | Enterprise plans of their open-core product, where
               | usually nobody is paying for their SaaS _because_ of
               | those doodads, but rather just because they want the
               | service, want the ops handled for them, and want
               | enterprise support if it goes down.
               | 
               | But, by marketing it as a new _version_ of the tech,
               | "GPT-3", OpenAI gave journalists something they could
               | actually report on without feeling like they're just
               | shoving a PR release down people's throats. "The new
               | generation of the tech can do all these amazing things;
               | it's a leap forward!" _is_ news. Even though, in this
                | case, it's only a "quantity has a quality all its own"
               | kind of "generational leap."
        
       | jgilias wrote:
       | "GPT-3, I need a React App that displays vital economic
       | statistics from the World Bank API."
       | 
       | ----
       | 
       | "Nice! can you add a drop-down for regional statistics when in
       | country view?"
       | 
       | ----
       | 
       | "Just one last thing. Can you make the logo bigger?"
        
         | m12k wrote:
         | https://www.youtube.com/watch?v=mqpY5kEtA2Y
        
           | meowface wrote:
           | Even though that one appears to be on an official channel of
           | theirs, the quality on this one is much better, for some
           | reason: https://www.youtube.com/watch?v=maAFcEU6atk
        
             | m12k wrote:
             | Thanks for that - yes, it looks like Adult Swim Germany has
             | had to create a zoomed version of the original in order to
             | avoid an automated copyright strike from their parent
              | company. Kinda ironic: yet another example of the
             | algorithms doing most of the work, and everything getting
             | slightly worse as a result.
        
         | neurostimulant wrote:
          | When that day finally comes, I guess lurking on HN will be my
          | full-time job. The question is which job gets replaced first:
          | the managers' or the programmers'?
        
           | lrossi wrote:
            | Neither. The programmers will still have jobs debugging the
            | apps, which won't handle 1% of the inputs correctly. The
           | managers will come up with all the necessary processes to
           | maintain oversight of the new activities and keep their jobs.
        
       | chewxy wrote:
       | FWIW I ran a startup that provided you with a program (single
       | binary) that allowed you to run natural language queries on your
        | database across most schemas. It had a full semantics layer
        | which translated your query into a mixed lambda-calculus/Prolog
        | query, which was then translated into SQL as needed - you can see
       | a sample of the semantics layer here:
       | https://youtu.be/fd4EPh2tYrk?t=92.
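        | 
        | (Illustratively - made-up notation, not our actual
        | intermediate form - "how many users signed up last week"
        | might become something like `count(X) :- user(X),
        | signup_time(X, T), T >= last_week`, which the engine then
        | lowers to a `SELECT COUNT(*)` with a date filter.)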
       | 
       | It's deep learning based with a lot of augmentation. Going from
       | the OP's article to actually being able to run queries on any
       | schema is quite a bit more work. I'd love to see GPT3 handle
       | arbitrary schemas.
       | 
        | p/s: the startup failed. Deep-research-based startups need a lot
       | of funds.
        
         | lrossi wrote:
         | Sorry to hear about your startup.
         | 
         | Translating between natural language and SQL is a reasonable
         | idea. I was thinking about this as well, but I didn't try
         | anything as I don't have an ML background. I spent some time
         | looking at the SQL side of the problem, and it seemed quite a
         | rabbit hole.
         | 
         | If you do manage to get it working up to a point where it's
         | usable by the average person, you can take it one step further:
          | auto-generate simple apps or websites in a no-code product.
         | 
         | This might bring some hate from the dev community as we are
         | automating ourselves out of a job, but it would be a pretty
         | impressive product if it worked.
        
           | chewxy wrote:
           | It did more than SQL. It could generate programs in the
           | syntax of an arbitrary programming language (with enough
           | pretraining examples) as well. What powers it is a tree-to-
           | tree transducer, which is a kind of recursive neural network
           | (not recurrent, which is what LSTMs are).
           | 
            | It's been 5 years and I've been thinking a lot about this. This
           | is a product with no good market fit. If you break it down by
           | "kind" of sales, your basic branches are B2B and B2C.
           | 
            | B2C is mostly out because the person on the omnibus has no
            | general need for a programming language, SQL or not (plus,
            | outside of emacs, nothing consumers use is inherently
            | "programmable"). So this program simply becomes a utility
            | that you reach for occasionally, like `cut` or
            | `sed`.
           | 
           | We mostly targeted businesses and other startups. We wanted
           | to empower the common employee to be able to query data on
            | their own. That itself came with a huge set of challenges.
            | Turns out most employers don't like the idea of any Tom, Dick,
            | and Harry having query access to databases. So we branched
            | out and tried to allow querying spreadsheets and presentations
            | (yes, the advertising arm of one big media corporation stored
            | their data in tables in a .pptx file on a SharePoint server).
            | The integrations were finicky and broke often.
           | 
           | Maybe we're not smart enough to figure this out. But one day,
           | one day I shall be back.
           | 
           | But in the meantime, the failure of the startup spawned the
           | Gorgonia family of deep learning libraries
           | (https://github.com/gorgonia/gorgonia). Check it out if you
           | want.
        
       | minimaxir wrote:
        | This is a use case where AI-powered SQL is a solution in search
        | of a problem, and introduces more issues than just writing boring
        | SQL. For data analysis, it's much more important to be accurate
        | than fast, and the article is unclear about how many attempts each
        | example query took. GPT-3 does not always produce coherent output
        | (even with good prompts), and since not 100% of the output is
        | valid SQL, the QA burden and the risk tolerance for bad output
        | affect the economics.
       | 
       | OpenAI's GPT-3 API is expensive enough (especially with heavy
       | prompt engineering) that the time saved may not outweigh the
       | cost, particularly if the output is not 100% accurate.
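        | 
        | One cheap QA gate (a sketch, assuming Postgres via psycopg2;
        | it catches invalid SQL, not valid-but-wrong queries):
        | 
        |     import psycopg2
        | 
        |     def parses_ok(conn, sql):
        |         # Dry-run the generated SQL with EXPLAIN so bad
        |         # output fails before touching any data.
        |         try:
        |             with conn.cursor() as cur:
        |                 cur.execute("EXPLAIN " + sql)
        |             return True
        |         except psycopg2.Error:
        |             conn.rollback()
        |             return False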
        
         | mritchie712 wrote:
         | One of the authors here. The idea (if we were to actually
         | implement this in our product) would be to give the user some
         | "boilerplate". We're no where near being able to automate a 100
         | line `SELECT` statement with CTE's etc., but it does a decent
         | job of starting you off.
        
           | minimaxir wrote:
           | Granted, you could also get similar boilerplate from Googling
           | your query and checking the top answer on Stack Overflow.
           | That's free, and includes discussions on
           | constraints/optimization.
        
             | mritchie712 wrote:
              | Yeah, we originally thought GPT could accept a large domain-
              | specific training set (e.g. feeding in the SQL schema for a
              | user), but it's not there yet. A PM at OpenAI said it
              | shouldn't be far off, though. When that's possible, the SQL
              | generated should be much better than what Google turns up.
        
       | choeger wrote:
        | The problem with the current "AI" technology is that it is only
        | approximately correct (or rather, it is somewhat likely to
        | produce a "good" result). This makes for great use cases when it
       | comes to human perception, as we can filter out or correct small
       | mistakes and reject big ones. But when used as input to a
       | machine, even the smallest mistake can have huge consequences.
       | Admittedly, this nonlinearity also applies when human beings
       | "talk" to machines, but the input to and output of a single human
       | being will always be constrained, whereas a machine could output
       | billions of programs per day. I don't think it would be wise to
       | follow that route before we have computational models that can
       | cope with the many small and few big mistakes an "AI" would make.
        
         | Hydraulix989 wrote:
         | Devil's Advocate: What makes this any different than human
         | error?
        
           | Judgmentality wrote:
           | Lack of human oversight.
           | 
           | Think of how frustrating it is to be unable to talk to a
           | human at Facebook or Google because their AI closed your
           | account without explanation.
           | 
           | Now imagine this is how everything works.
        
             | ithkuil wrote:
             | It depends on the human, it depends on the process; I guess
             | it will depend on the quality of AI in the future.
             | 
              | I consistently have terrible experiences with human
              | operators over the phone, e.g. the phone company and similar
              | (in my case Italy, but I guess it's a general problem).
             | They routinely cannot address my issues and just say they
             | are sorry but they cannot do anything about it, or that
             | this time it will work.
             | 
              | Human operators are a solution only if they are not
              | themselves slaves to a rigid internal automated system.
        
           | vlovich123 wrote:
            | Lack of human oversight is one, as others have mentioned.
            | Speed is another.
            | 
            | Whatever error a human can cause, a machine can do as much
            | damage or more, many orders of magnitude faster and at larger
            | scale, and it can be difficult to correct.
        
         | Joeri wrote:
         | GPT-3 strikes me as the human fast thinking process, without
         | the slow thinking process to validate and correct its answers.
         | It is half a brain, but an impressive half at that.
        
           | visarga wrote:
            | It's like a human with no senses - no sight, hearing, touch,
            | smell, or taste - also paralyzed, short-term amnesic, and
            | alone, but able to gobble tons of random internet text. After
            | training it can meet people but can't learn from those
            | experiences anymore; the network is frozen when it meets the
            | real world.
        
       | Sidetalker wrote:
       | He's our fastest business analyst but sometimes on a hot day
       | he'll just keep repeating "Syntax Error"...
       | 
       | Very cool work, I continue to be blown away by what GPT-3 can
       | achieve.
        
       | htrp wrote:
        | We should start with the caveat that the GPT-3 API waitlist
        | doesn't actually move; you literally need an employee to
        | take you off the waitlist manually.
        
         | greentrust wrote:
         | I'm a member of the beta. The Slack group regularly sees
          | influxes of hundreds of new customers, many of whom seem to have
         | signed up from the waitlist.
        
       | rexreed wrote:
       | I constantly wonder how people are getting access to the GPT-3
       | API (as beta users) when so many are still on the waiting list.
        | The stock answer - "just use the AI Dungeon game" - is quite
        | lacking.
        
         | mritchie712 wrote:
         | We didn't do anything special. Signed up for the waitlist on
         | day one and just randomly got an email one day saying we're in.
        
           | neovive wrote:
           | How long did you have to wait? I've been on the GPT-3 waiting
           | list for a few months, hoping to build an educational app and
            | haven't heard anything yet.
        
           | rexreed wrote:
           | It's good to hear that the beta API application process is as
           | probabilistic as their algorithms.
        
       | mraza007 wrote:
        | Really interesting article. I'm just curious to know how you
        | get access to GPT-3.
        
         | Diederich wrote:
         | Go back to the article and search for "If you're interested in
         | trying it out", there's a link that allows you to signup for
         | the waiting list.
        
           | mraza007 wrote:
            | Got it. Do you have to pay to use it?
        
             | mritchie712 wrote:
              | OpenAI's API is paid. The SQL pad we offer
              | (https://seekwell.io/) has a free tier with paid premium
              | features.
        
               | mraza007 wrote:
                | Got it, thanks for answering.
        
           | jaytaylor wrote:
           | Anecdotally, I signed up around last June (06/2020), and am
           | still waiting to hear back..
        
             | flemhans wrote:
             | Same.
        
               | tom_wilde wrote:
               | Same. :|
        
             | gdb wrote:
             | (I work at OpenAI.)
             | 
             | We've been ramping up our invites from the waitlist -- our
              | Slack community has over 18,000 members -- but we are still
              | only a small fraction of the way through. We've been really
             | overwhelmed with the demand and have been scaling our team
             | and processes to be able to meet it.
             | 
             | We can also often accelerate invites for people who do have
             | a specific application they'd like to build. Please feel
             | free to email me (gdb@openai.com) and I may be able to
             | help. (As a caveat, I get about a hundred emails a week, so
             | I can't reply to all of them -- but know that I will do my
             | best.)
        
               | neovive wrote:
               | Thank you for your open and honest response. I've been on
               | the waiting list for a few months myself and it's great
                | to hear that OpenAI is ramping up to meet the enormous
               | demand for GPT-3.
        
             | navait wrote:
             | AFAIK the list never moves and you basically have to know
              | someone at OpenAI.
        
               | [deleted]
        
           | soperj wrote:
            | Also in the "signed up for the waitlist but never heard back"
            | camp. I signed up a couple of times because I thought I might
            | have done it from an address that got filtered out at first.
        
       | jakearmitage wrote:
       | You know what grinds my gears with GPT-3? The fact that I can't
       | tinker with it. I can't do what this guy just did, or play around
       | with it, or learn from it, or whatever. Access is limited.
       | 
        | I feel like I'm back in '95, when I had to beg faculty staff to
       | get a copy of VB on some lab computer, only to be able to use it
       | 1 hour a day. Restricting knowledge like this, in 2021, feels
       | odd.
        
         | qayxc wrote:
         | Get used to it. The infrastructure involved is just too
         | expensive to run at home.
         | 
          | The same applies to quantum computers. Models like GPT-3 are
          | way too big for a consumer machine to handle and require
          | something like a DGX Station [0][1] with 4x 80 GiB A100 GPUs to
          | run properly.
         | 
         | So even if the model were available for download, you wouldn't
          | even be able to run it without hardware costing north of
         | $125,000.
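          | 
          | (Back-of-the-envelope: 175B parameters x 2 bytes each in
          | fp16 is ~350 GB for the weights alone, which is roughly why
          | it takes 4x 80 GiB A100s just to hold the model in GPU
          | memory.)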
         | 
          | It's less about restricting knowledge and more about the insane
          | amount of resources required. It's not as bad as getting access
          | to fMRI or particle accelerators, but it's getting there ;)
         | 
         | [0] https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-
         | mo...
         | 
         | [1] https://www.nvidia.com/content/dam/en-zz/Solutions/Data-
         | Cent...
        
       | jawns wrote:
       | This is really cool, but it's clear that the person requesting
       | the SQL has to know whether the generated SQL is correct for it
       | to be of use.
       | 
       | If I'm a non-technical user and I ask a plain-language question
       | and the generated SQL is incorrect, it's likely going to give the
       | wrong answer -- but unless it's terribly wrong ("Syntax error",
       | truly implausible values) the user may not know that it's wrong.
       | 
       | So I see this as more of a tool to speed up development than a
       | tool that can power end users' plain-language queries. But who
       | knows? Maybe GPT-4 will clear that hurdle.
        
         | mritchie712 wrote:
          | One of the authors here. You're exactly right. We're nowhere
          | near being able to automate a 100-line `SELECT` statement with
          | CTEs etc., but it does a decent job of starting you off.
        
       | navait wrote:
        | It reminds me of why tools like Tableau are so useful. You don't
        | have to teach people SQL or whatever; they can build their own
        | visualizations and Tableau will do the SQL for them.
        
         | yuy910616 wrote:
          | Fun story. We used to interview candidates by giving them SQL
          | take-home questions. We gave them a user, but everyone on our
          | team could see the queries run by that user. One candidate was
          | really impressive. They were using some very advanced syntax and
          | the queries were immaculate.
         | 
         | Turns out they were using PowerBI lol
        
       | mrkeen wrote:
       | This seems to be consistent with my outsider view of AI demos.
       | 
       | 1) Have a question
       | 
       | 2) Figure out the answer
       | 
       | 3) Have the AI figure out the answer
       | 
        | 4) If the AI figured out your answer, be impressed; otherwise try
        | again.
        
       ___________________________________________________________________
       (page generated 2021-01-27 23:00 UTC)