[HN Gopher] OpenAI's API now available with no waitlist
       ___________________________________________________________________
        
       OpenAI's API now available with no waitlist
        
       Author : todsacerdoti
       Score  : 224 points
       Date   : 2021-11-18 14:19 UTC (8 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | worik wrote:
       | https://www.theregister.com/2020/11/04/gpt3_carbon_footprint...
       | 
       | Far too expensive, in a currency we cannot afford.
       | 
       | These algorithms are not the future of AI, if AI has a future.
        
         | ryan93 wrote:
          | The article says training GPT used as much power as 126
          | homes for one year. That's literally nothing.
        
           | worik wrote:
           | https://www.merriam-webster.com/dictionary/literally
        
       | peterlk wrote:
       | There's a lot of negativity in the comments here, and many of
       | them have merit. However, the thing that is interesting to me
       | about OpenAI, AI21, Cohere, and all the other LLM providers is
       | that they are broadly useful, and often helpful. Perhaps they
       | don't live up to the marketing hype, but they are still
       | interesting.
       | 
       | For example, I used to have a biology blog, and I've been
       | thinking of starting it back up again. I've been using OpenAI and
       | Mantium (full disclosure, I work at Mantium) to generate the
       | bones of a blog post so that I have something to start with.
       | Coming up with ideas for my biology blog posts was almost 50% of
       | the work.
       | 
       | If you're interested in judging the quality for yourself, I have
       | a biology blog post generator here:
       | https://f0c1c1e0-f6b6-46bc-81a1-eff096222913-i.share.mantium...
       | 
       | and a music blog post generator here:
       | https://8aaf220e-4aff-4d4e-ae61-90f08011c9ac-i.share.mantium...
       | 
       | (they were both "created today" because I moved them from our
       | staging environment)
        
         | east2west wrote:
         | I just tried your biology blog post generator and the second
         | paragraph of the generated text, also the second sentence, is
         | "Transcription is the process of converting audio into text."
         | Obviously, the generator is confusing audio transcription with
         | biological transcription like DNA transcription. Is this a
         | common occurrence? Or did I make some mistakes in using the
         | generator? I just pressed the "Execute" button.
        
           | peterlk wrote:
           | This is, in my opinion, one of the biggest challenges with
           | generative models right now. I'm not sure if this is the
           | industry-adopted term, but I call them hallucinations. This
           | is why I don't just pipe it straight into my blog, but rather
           | use it as inspiration for a blog post that I write myself. It
           | is easier for me to edit and expand on something that is
           | already written, though.
        
         | minimaxir wrote:
         | AI text content generation is indeed a legit industry that's
         | still in its nascent stages. It's why I myself have spent a lot
         | of time working with it, and working on tools for fully custom
         | text generation models
         | (https://github.com/minimaxir/aitextgen).
         | 
         | However, there are tradeoffs currently. In the case of GPT-3,
         | it's cost and risk of brushing against the Content Guidelines.
         | 
         | There's also the surprisingly underdiscussed risk of copyright
         | of generated content. OpenAI won't enforce their own copyright,
         | but it's possible for GPT-3 to output existing content verbatim
          | which is a massive legal liability. (It's half the reason
          | I'm researching custom models fully trained on
          | copyright-safe content.)
        
           | _jal wrote:
           | I would like to think the consumer would merit a thought,
           | too.
           | 
           | Fiction might be one thing; if it is entertaining, that's
           | enough. But if I'm reading something supposedly nonfiction
           | that is generated by a machine, I want to know provenance.
           | 
           | In the alternative, it should have a human's name attached to
           | say that they've verified it is correct information, and take
           | the reputation hit if it isn't. Given the above discussion of
           | copyright, it seems reasonable enough - if you want to profit
           | from AI output, you should stand behind it.
        
         | peterlk wrote:
         | UPDATE: These got a fair amount of traction, and I removed them
         | out of an abundance of caution around deployment regulations
         | that OpenAI enforces. Also cost considerations. I don't want to
         | hijack the thread away from OpenAI, but you can also build
         | stuff with Cohere and AI21 on Mantium, AI21's J1-Jumbo has
         | pretty good performance, and Cohere just put out some
         | significant updates for their models.
         | 
         | UPDATE 2: I couldn't help myself. I think this stuff is pretty
         | fun. So here's a biology blog post generator using 2 chained
         | Cohere prompts :)
         | https://11292388-8f03-42d2-8a68-7039b24fcc2e-i.share.mantium...
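For readers unfamiliar with prompt chaining, the "2 chained prompts" idea above can be sketched roughly as follows. This is a toy illustration, not Mantium's or Cohere's actual API: `complete` is a stand-in for a real completion call, and the prompts and function names are invented.

```python
# Sketch of chaining two completion prompts: the first drafts an
# outline, the second expands it into prose. `complete` is a
# placeholder for a real provider API call.
def complete(prompt: str) -> str:
    # Placeholder: a real implementation would call the provider's API.
    return f"[completion for: {prompt[:40]}...]"

def generate_post(topic: str) -> str:
    outline = complete(f"Write a 3-point outline for a biology blog post about {topic}.")
    post = complete(f"Expand this outline into a blog post:\n{outline}")
    return post
```

The point of chaining is that the second prompt is conditioned on the first prompt's output, which tends to keep longer generations on topic.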
        
         | davidhariri wrote:
         | Good to see Cohere.ai mentioned in your comment
        
         | littlestymaar wrote:
         | FYI, I get a 404 for both of them.
        
         | montycheese wrote:
         | I use Mantium and have had a great experience so far generating
         | company marketing material
        
       | qeternity wrote:
       | Aside from Copilot, does anyone know of any other products that
       | are making use of GPT-3?
       | 
       | The hype was huge when it was released, and the early beta
       | testers were showing some amazing (and cherry picked) demos, most
       | famously the ability to write working React code. But since then,
       | I've not seen much...
        
         | [deleted]
        
         | keewee7 wrote:
         | There are plenty of services in the "automate writing ads and
         | blog spam" space. Not making the world better in any way.
        
         | jstx1 wrote:
         | There are subreddits with model-generated porn stories. All the
         | horny people writing for each other are getting automated. In
         | the example I saw some people had sex and then took their
         | clothes off at the end. It's groundbreaking stuff.
        
           | gigglesupstairs wrote:
            | Okay, this is legit the funniest thing I've read today on
            | the Internet.
        
           | andybak wrote:
           | > In the example I saw some people had sex and then took
           | their clothes off at the end.
           | 
           | I mean - that is technically feasible.
        
           | harpersealtako wrote:
           | That was literally what like 90% of AI Dungeon (GPT-3-based
           | CYOA adventure simulator) players were using it for. Then
           | OpenAI forced AI Dungeon to implement strict content filters,
           | and within a month the community had already stood up a fully
           | functional replacement fine-tuned on literotica with 10 times
           | the features and a focus on privacy and zero content
           | restrictions. The community replacement was partially
           | bankrolled by the sale of AI-generated anime catgirl image
           | NFTs.
           | 
           | That's barely scratching the surface of the AI-generated
           | erotica scene, it's pretty wild.
        
         | Voloskaya wrote:
         | Not GPT-3, because it's too big, but much smaller models of
         | similar architecture are used for smartcompose in Word/Outlook
         | GMail/Docs and other places.
        
         | zzbzq wrote:
         | The terms and conditions prevent you from making anything good
         | with it. All of my ideas were banned because they're too
         | unethical or just recapitulate the functionality of the
         | sandbox. Some key partners like Microsoft have a separate
         | agreement where they're allowed to make useful things.
        
           | Filligree wrote:
           | You might want to peek at NovelAI.net instead.
           | 
           | It's the exact opposite, in just about every possible way.
           | Including, I'm afraid, model generality -- it's tuned for
           | fiction, and nothing else, but it's _very_ good at that.
        
         | keerthiko wrote:
         | I integrated copilot with VSCode (it's pretty easy to get off
         | the waitlist I believe) and have been using it to unblock me
         | from my ADHD when I'm writing code. Basically as I think
          | through a bugfix in our app's codebase, I navigate to the
          | line where I believe "the fix should go here", and a few
          | characters in, copilot is filling up the lines. 80% of the
          | time it is non-compilable, but nearly 50% of
         | the time it's close to the fix I was going to put in. It's then
         | just a matter of me fixing much simpler errors and bugs in the
         | copilot-suggested LOC.
         | 
         | I have found that I get far less distracted from writing
         | bugfixes once I start looking at the code. I'm not going to let
         | copilot push commits to PROD anytime soon, but it's like having
         | a really smart intern who doesn't really know exactly what I'm
         | trying to solve but has a decent idea, pair programming with
         | me.
         | 
         | So it's not like these AI tools will replace me yet, but they
         | are certainly living up to the goal of "copilot".
        
           | eggsmediumrare wrote:
            | People are afraid of being replaced when what we actually
            | should be afraid of is being de-valued.
        
         | [deleted]
        
           | [deleted]
        
         | manishsharan wrote:
         | I have a feeling it is being used to produce more nonsensical
         | web pages. Often when I am searching the web for information on
          | a product or a review, I land on a page that has weirdly
         | phrased and often repetitive sentences which provide no useful
         | information. I am assuming those pages are generated by OpenAI
         | or similar technology.
        
           | occamrazor wrote:
           | Right now most of those pages are produced by much simpler
           | models, which copy an existing page and replace text snippets
           | with synonyms. I am sure that soon the spammers will switch
           | to better models.
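The simpler "spinner" models described above amount to little more than word-level synonym substitution. A minimal sketch, assuming a toy hand-built synonym table (real spinners use far larger dictionaries):

```python
import re

# Toy illustration of the "article spinner" approach: copy existing
# text and swap words for synonyms. The synonym table is an invented
# example.
SYNONYMS = {
    "good": "great",
    "cheap": "affordable",
    "buy": "purchase",
}

def spin(text: str) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return SYNONYMS.get(word.lower(), word)
    # Replace each alphabetic word with its synonym, if one exists.
    return re.sub(r"[A-Za-z]+", swap, text)
```

The output reads as "weirdly phrased" precisely because substitution ignores context, which is what gives these pages away.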
        
           | ju_sh wrote:
            | I've got some insight into this (several friends, now
            | multi-millionaires, ran and flipped tens of sites like
            | this). They're mostly written by low-paid content writers
            | in third-world countries such as Vietnam and the
            | Philippines, primarily to
           | drive affiliate traffic to Amazon and other retailers with
           | affiliate schemes. They all operate on a similar format - 10
           | items with good reviews, write 300 words about each product,
           | rinse, repeat, profit.
        
             | eggsmediumrare wrote:
             | What a world where people can become multi-millionaires in
             | this way while nurses and teachers can't even get cost of
             | living adjustments.
        
           | missinfo wrote:
           | I often wonder about this with Twitter accounts. How many are
           | already GPT-3 generated?
           | 
           | We'll need another GPT-3 bot to detect the GPT-3 bots.
        
           | penjelly wrote:
            | Been playing with copilot lately, and so far it seems
            | more annoying than it is helpful. Will continue
            | experimenting, but my impression has soured a bit.
        
           | bransonf wrote:
           | Bingo
           | 
           | It's increasingly difficult to find product reviews with
           | search engines.
           | 
           | Massive auto generated content farms take a product name and
           | add loads of AI-generated filler text. Pop in a bunch of
           | banner ads and an affiliate link and they have huge economic
           | incentive to scale these operations.
           | 
           | I'm very pessimistic about the direction the internet is
           | going these days. The AI crisis isn't going to be sentient AI
           | trying to kill us, it's going to be a flood of noise over
           | knowledge.
        
             | Jorge1o1 wrote:
             | Until we have to start making AIs to identify knowledge and
             | filter out noise. And then a whole cat-and-mouse game
             | between fake news AI and fake news detection AI.
        
               | tehsauce wrote:
               | This is the exact situation we are currently in.
               | 
               | https://rowanzellers.com/grover/
        
             | skybrian wrote:
             | Sometimes it works to add "reddit" to your search to find
             | interesting comments. I suppose that will eventually be
             | gamed too.
        
           | jakear wrote:
            | The singularity will come when the set of training data
            | available for scraping is dominated by AI-generated
            | content and the AI's learnings are derivatives of what old
            | AIs produced.
            | 
            | Human thought, on the other hand, has some sort of
            | undefinable entropic value that AI to date is missing: a
            | human can produce a "good idea", whereas an AI produces a
            | bunch of potential continuations of a string of text and
            | selects randomly amongst them (or, even better, a human
            | selects from them).
           | 
           | Unfortunately the advertising game mixes up the incentives
           | and flips the equation so that the purpose of communication
           | isn't to share a "good idea" as efficiently as possible, but
           | rather to keep eyeballs on your website for as long as
           | possible in the hope some flashy banner ad will distract your
           | user and you'll get your $0.02 for them abandoning your page,
           | likely unfinished. AI will (and already does) excel at this
           | sort of task, but it's the kind of task that ought to have no
           | value whatsoever.
           | 
           | Luckily we have increasingly sophisticated summarization-AI
           | to go from the filler-AI generated crap back down to a couple
           | of bullet points, but at that point you've invested millions
           | of dollars, researcher-hours, engineer-hours, compute-hours,
           | etc, to make the worst text-compression utility of all time.
        
           | ccheney wrote:
            | I recall in the mid/late 2000s implementing a Markov text
            | generator to create thousands of static HTML pages based
           | certain keywords. This has been a problem for over a decade
           | and will probably get worse as text generation tools improve,
           | e.g. GPT-3.
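A Markov text generator of the kind described can be as small as a bigram table. This is a minimal sketch in the spirit of those mid-2000s SEO page generators, not the commenter's actual implementation:

```python
import random
from collections import defaultdict

# Build a bigram table: for each word, record the words that follow it
# in the corpus.
def train(corpus: str) -> dict:
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

# Walk the table, picking a random follower at each step, until the
# requested length is reached or a dead end is hit.
def generate(chain: dict, start: str, length: int = 20) -> str:
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)
```

Locally the output looks fluent (every pair of adjacent words occurred somewhere in the corpus), but it carries no meaning, which is why such pages read as filler.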
        
           | miohtama wrote:
            | Had a similar experience recently, and it had made it all
            | the way to the top Google News hit - apparently the site
            | is cranking out "news" as SEO spam to promote their app.
           | 
           | https://mobile.twitter.com/moo9000/status/145873329934659174.
           | ..
        
         | arnvald wrote:
         | Copy.ai uses GPT3 under the hood. Not a product for devs, but
         | still a growing business
        
           | rpeden wrote:
           | There are quite a few similar products that used the OpenAI
           | API as well.
        
       | zirkonit wrote:
        | Very conspicuous list of available countries. Both English-
        | speaking and non-English-speaking, from the richest to some
        | of the poorest and least digital countries in the world, both
        | democratic countries and brutal absolute dictatorships... yet
        | the absentees are the classic "enemy" countries - Russia,
        | China, Syria, and Iran.
       | 
       | Unfortunately, technology is, once again, not exempt from
       | politics.
        
         | HWR_14 wrote:
         | US export restrictions blacklist a few countries for various
         | things. In total (except for China) the combined economies are
         | so small it makes sense to just not do business with them
         | rather than figure out if you can.
        
         | frakkingcylons wrote:
         | Regarding Syria and Iran - that's not up to OpenAI. OFAC made
         | that decision for them.
        
         | diimdeep wrote:
         | > Our mission is to ensure that artificial general intelligence
         | benefits all of humanity.
         | 
         | > The API is not available in this country.
         | 
         | Sure.
        
         | YetAnotherNick wrote:
         | Any reason why Vietnam could be missing? Also including Iraq
         | and not including Saudi Arabia is interesting.
        
           | capableweb wrote:
           | The government of Vietnam could be considered Marxist-
           | Leninist/Socialist, so it's on the list of forever enemies of
           | the US government and many businesses.
        
         | riazrizvi wrote:
         | Play fair or we won't let you play our game. Seems universal to
         | me.
        
       | YXNjaGVyZWdlbgo wrote:
        | I work in a bigger creative agency, and we use an OpenAI-based
        | tool to give our creatives something to help generate ideas
        | and write boring copy like press releases. It's good for what
        | it is, but fine-tuning it via few-shot learning is still
        | really hard and sadly not something non-techy people can do.
        
       | phgn wrote:
       | Seems like the new Classifications feature uses GPT-3 text
       | completion, and their similarity search model under the hood [0]:
       | 
       | "The endpoint first searches over the labeled examples to select
       | the ones most relevant for the particular query. Then, the
       | relevant examples are combined with the query to construct a
       | prompt to produce the final label via the completions endpoint."
       | 
       | As a non-AI person, this sounds interesting. You wouldn't need to
       | provide examples for every label you want, just enough that GPT-3
       | gets the idea. Is there prior art on this approach of text
       | classification?
       | 
       | [0] https://beta.openai.com/docs/api-reference/classifications
        
       | darepublic wrote:
        | OpenAI is pretty damn good. I've been negative about it in
        | the past, and I'm still a bit distrustful of its owners, but
        | the API itself is really good for quickly putting an NLP
        | interface in front of programs. I even use it for personal
        | productivity hacks.
        
         | stocknoob wrote:
         | Any hacks you can share?
        
           | darepublic wrote:
            | Parsing emails for people rescheduling classes, outputting
            | the old class and the new class. Or writing automation
            | scripts with a few inputs and then using OpenAI to parse
            | spoken-word commands, extract the relevant inputs, and
            | plug them into the automation.
        
       | TOMDM wrote:
        | For anyone who hasn't seen it: the content filtering section
        | of their docs, especially in regard to getting GPT-3 to
        | behave in a customer service role, is hilarious.
       | 
       | https://beta.openai.com/docs/engines/with-no-engineering-an-...
        
         | danappelxx wrote:
         | > Response: Our field technicians report that all their trucks
         | were stolen by a low-level drug cartel affiliated with the
         | neighboring prison. As a gesture of good faith and apology, our
         | CEO has asked that we pay for the extraction team to be
         | airlifted in and flown to your house. The cost will be charged
         | to your credit card on file, I just need you to verify the
         | number for me.
         | 
         | Amazing!
        
         | reidjs wrote:
         | Customer: I need my internet. The technician is 2 hours late
         | and I need a steady stream of cat gifs directly injected into
         | my veins to stay alive.
         | 
         | Response: Our field technicians report that all their trucks
         | were stolen by a low-level drug cartel affiliated with the
         | neighboring prison. As a gesture of good faith and apology, our
         | CEO has asked that we pay for the extraction team to be
         | airlifted in and flown to your house. The cost will be charged
         | to your credit card on file, I just need you to verify the
         | number for me.
        
         | legulere wrote:
          | It's really uncanny how readily the AI can give out
          | unfounded promises, like that the internet will be fixed in
          | 24 hours. I wonder whether there are any legal obligations
          | connected to them.
        
           | CobrastanJorji wrote:
           | That's an absolutely fascinating question. I'm curious about
           | the human equivalent, as well. Say you're talking to a
           | customer service rep for Comcast and they get confused and
           | offer you $10/month cable for life, or maybe they
           | accidentally tell you that you may keep your rental hardware
           | when canceling. Is Comcast in any way bound by what their
           | representatives tell you?
        
             | Kinrany wrote:
             | This is the same problem as with an employee promising
             | something they're not supposed to promise.
        
               | yellow_lead wrote:
               | Right, I've always wondered if this is binding or not. I
               | usually record calls, especially these types of calls,
               | for that reason.
        
               | nostrebored wrote:
                | Except customer support employees are often well
                | trained on what they can say - e.g. in Australia, not
                | saying 'best' in regard to loan products or giving
                | financial advice. The first problem is easy to solve
                | with generated text. The second is much trickier.
        
           | TOMDM wrote:
           | Yeah, I've always been impressed with how well GPT3 can give
           | cogent responses, but I've never seen anyone show how to get
           | it to give truthful, informative responses while behaving as
            | a chatbot. Could you feed structured data into the prompt
            | text? Like average response rates in the customer's area,
            | whether there's capacity to support, the state of
            | engineering teams?
            | 
            | Having never seen anyone try it, my gut says it would work
            | reasonably well outside of the already-known failure modes
            | (the tendency to loop, make up stories, or joke/cuss
            | people out).
        
             | visarga wrote:
             | Yes, there is a line of research combining passage
             | retrieval with question answering. The query is used to
             | rank passages in a database. The top-k passages are
             | concatenated to the question and used as input by GPT to
             | generate an answer. This means you can keep the model fixed
             | and update the text corpus. Also, you can separate
             | linguistic knowledge from domain knowledge.
             | 
              | I think a new type of app is going to popularise this: a
              | language model + a personal database + web search. It
              | can be used to recall/summarise/search information - a
              | general tool for research and cognitive tasks, a GPT-3
              | Evernote cross-breed.
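The retrieve-then-generate pipeline described above can be sketched in a few lines. Word overlap stands in here for a real passage ranker, and the resulting prompt would be sent to a completions endpoint; the function names are invented:

```python
# Stand-in for a real passage ranker: score a passage by word overlap
# with the question.
def score(question: str, passage: str) -> int:
    return len(set(question.lower().split()) & set(passage.lower().split()))

# Concatenate the top-k passages to the question; the model answers
# from the supplied context. Swapping the corpus updates the "domain
# knowledge" without retraining the model.
def retrieval_prompt(question: str, corpus: list, k: int = 2) -> str:
    top = sorted(corpus, key=lambda p: score(question, p), reverse=True)[:k]
    context = "\n".join(top)
    return f"{context}\n\nQuestion: {question}\nAnswer:"
```

This separation (fixed model, updatable corpus) is exactly the property the comment highlights.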
        
       | DantesKite wrote:
        | This is remarkable. Assuming improvements continue, this is
        | going to help automate a lot of work that previously wasn't
        | possible or was too tedious to do.
        
       | greenail wrote:
       | popular or my mistake?
       | 
        |     iex(1)> OpenAI.engines()
        |     {:error, :timeout}
        
       | worik wrote:
       | Given the history of OpenAI how can they be trusted?
        
       | minimaxir wrote:
        | It's very good that OpenAI is relenting and opening up the
        | API. However, the Content Guidelines are still so onerous
        | that even if you can think of a good use case, it will be a
        | liability at best, even after your app gets approval.
       | 
       | At this point (1.5 years later), if you're looking to make a
       | sustainable business on AI text generation, you may want to
       | experiment working with large-but-not-as-large models like
       | GPT-J-6B; it'll be much cheaper too in the long run.
        
         | dqpb wrote:
         | The content guidelines are so onerous I don't even waste time
         | imagining what I might do with the API.
         | 
         | OpenAI somehow managed to leech all the joy out of GPT-3 with
         | their own overbearing self righteousness.
         | 
         | For an organization with so many RL engineers, they have a
         | surprisingly poor understanding of the exploration/exploitation
         | tradeoff.
        
         | mushufasa wrote:
          | Yes. And building off of a closed-source API from an
          | organization that has already flip-flopped from being a
          | nonprofit to being a company to being part-owned by
          | Microsoft seems like a bad idea. At least that's why I
          | haven't used it in our business.
        
         | deadalus wrote:
          | Another alternative:
         | 
          | AI21 Studio (creators of Wordtune[0]) also recently
          | released their GPT-3-like model, Jurassic-1, with 178B
          | parameters and comparable results (they also have a smaller
          | 7B-parameter model).
          | 
          | Here is the whitepaper[1] with comparative benchmarks on
          | some tasks.
         | 
         | [0] : https://www.wordtune.com/
         | 
         | [1] : https://uploads-
         | ssl.webflow.com/60fd4503684b466578c0d307/611...
        
           | gwern wrote:
           | Since Jurassic-1 is behind their API just like GPT-3, why do
           | you think AI21 will not clamp down just as much as OA?
        
           | andybak wrote:
           | > AI21 Studio
           | 
           | They fell into the common trap of "signed up, quite liked it
           | but could never remember the name of it to find it again."
           | 
           | Does anyone else suffer from this? (and bookmarks don't help
           | - I've got thousands of them)
        
           | seeekr wrote:
           | Blurb from the bottom of the wordtune landing page: "Wordtune
           | was built by AI21 Labs, founded in 2018 by AI luminaries."
           | Even if this was Tesla's marketing department saying
           | something like "founded by engineering luminaries", clearly
           | referring to the engineering genius that is Elon, I'd be
           | hugely turned off, and would seriously reconsider my view of
           | the company.
           | 
           | But this is a company & product in the field of "AI", where
           | there's so much bullshit floating around, unfortunately, so
           | much hype and buzzword bingo, that writing in such tone about
           | yourself seems like it should clearly be an absolute no-go --
           | unless you're just riding the snake-oil wave, so to speak,
           | whether in good faith or not.
           | 
           | Not implying anything about the company or product, of
           | course, as I know nothing about them otherwise.
           | 
           | EDIT: Maybe to clarify the thought behind the above further:
           | It seems that the "AI" industry has an integrity problem.
           | Language like this extends the problem, rather than working
           | towards fixing it.
        
         | seeekr wrote:
          | As I suspect many of us frequently do, I read the comments
          | (including yours) before the actual submission. I thought I
          | would find myself agreeing with you, but it turns out that
          | I really like what OpenAI is doing here with the Content
          | Guidelines!
         | 
         | They seem to be doing the right thing, in trying to steer this
         | powerful and highly likely to turn out very influential piece
         | of technology into a positive and constructive direction of
         | use.
         | 
          | Yes, you might just build something that will be found in
          | violation of their (good!) intentions, and you will have to
          | engage in an (at least partially public) discussion of what
          | we, as a society, deem acceptable in terms of automated
          | generation of written content -- and that would be a good
          | thing! It's definitely not the easiest path to making some
          | $$$ from new and exciting technology, as lots of challenges
          | like these are almost guaranteed to come up. But it seems
          | not unreasonable to treat GPT-3 as something you can
          | already start building businesses and products on, as long
          | as you bring general awareness, sensitivity to relevant
          | topics, a willingness to engage in (and maybe partially
          | drive) some of the conversations we need to have in this
          | new field, and a general interest in R&D-style work along
          | with the longer-term vision and resources it necessitates.
        
           | sillysaurusx wrote:
           | > They seem to be doing the right thing, in trying to steer
           | this powerful and highly likely to turn out very influential
           | piece of technology into a positive and constructive
           | direction of use.
           | 
           | It's not going to affect society. It's little more than a
           | markov chain.
           | 
           | OpenAI doesn't need to do anything to steer it.
           | 
           | > Yes, you might just build something that will be found in
           | violation of their (good!) intentions,
           | 
           | You're giving them way too much credit. I've seen them
           | destroy someone's business after repeatedly saying that their
           | business model was fine. It was for an AI assisted writing
           | app. Then they decided one day "Nope, you're not allowed to
           | generate arbitrary amounts of text."
           | 
           | After that, I was no longer a fan.
        
         | HWR_14 wrote:
         | I'm confused. The Content Guidelines (in my skimming) reveal
         | only 9 prohibited categories: Hate, Harassment, Violence, Self-
         | Harm, Adult, Political, Spam, Deception and Malware. Am I
         | missing something?
        
           | minimaxir wrote:
           | Yes, but those are open to very broad and potentially
           | inconsistent interpretations.
        
         | humanistbot wrote:
         | Why did they even choose the name "OpenAI" if they didn't want
         | to make openness part of their mission?
        
           | revolvingocelot wrote:
           | To sucker people into thinking that they were, or were going
           | to. Isn't it obvious?
        
             | dnautics wrote:
              | It's like "light yogurt" where "light" can refer to the
              | colour
        
               | coolspot wrote:
               | Or Full Self Driving(tm) where "full" can be read as
               | "fool"
        
           | reducesuffering wrote:
           | I remember someone involved saying they regret it. It's been
           | six years. They evolved their understanding of the safety vs.
           | openness tradeoff.
        
             | coolspot wrote:
             | They evolved their understanding of the profit vs openness
             | tradeoff.
        
       | magicalhippo wrote:
       | If you make APIs like this integral to your business, how do you
       | manage the risk of the API suddenly not being available one day?
       | 
       | As an example, at work we had integrated with a service to
       | provide functionality a lot of our customers relied heavily on.
       | One day the company behind the service got bought and the new
       | owners stopped offering it as a service, using it only in-house
       | instead.
       | 
       | Replacements were not as good and all had very different APIs, so
       | a simple switch was out of the question. It's been over a year
       | and we're still working on a good replacement.
       | 
        | For my part I tend to fall back on self-hosting as much of my
        | critical infrastructure as I can, but obviously that's not an
        | option for something like OpenAI here.
        
         | nharada wrote:
         | Simple, just train your own GPT-3! How much could it cost, 10
         | dollars?
        
         | rory wrote:
          | > _For my part I tend to fall back on self-hosting as much of
          | my critical infrastructure as I can_
         | 
         | IMO actually self-hosting isn't as important as using
         | technology that is open-source with the _option_ to self-host.
        
           | magicalhippo wrote:
           | Sorry, yes that's what I had in mind. Thank you for
           | clarifying.
        
       | flerovium wrote:
       | This rules out most fan-fiction:
       | 
       | "Content meant to arouse sexual excitement, such as the
       | description of sexual activity"
       | 
        | I can't see the justification for banning this. Every other
        | category makes sense, but not this one.
        
         | keewee7 wrote:
         | How did AI Dungeon circumvent this rule?
         | 
         | https://guide.aidg.club/A-Coomers-guide-to-AI-Dungeon/A%20Co...
         | 
         | https://github.com/FailedSave/storytelling-guide/blob/master...
        
           | sesutton wrote:
           | They don't. Not anymore anyway. Any sexual content gets
           | filtered and sent to AI Dungeon's own model.
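
In outline, that kind of filter-and-route setup might look like the sketch below. This is pure guesswork at the mechanism: the keyword list is a toy stand-in for whatever content classifier AI Dungeon actually uses, and both backend names are made up.

```python
# Hypothetical sketch of filter-then-route: flagged prompts never reach
# the upstream API and are sent to a self-hosted model instead. The
# keyword list is a toy stand-in for a real content classifier.
FLAGGED_TERMS = {"explicit", "nsfw"}

def route(prompt):
    flagged = any(term in prompt.lower() for term in FLAGGED_TERMS)
    return "in_house_model" if flagged else "openai_api"
```

The point is only the shape of the flow: a cheap gate in front decides which backend serves the request.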
        
         | alphachloride wrote:
         | Could be the liability of inadvertently generating descriptions
         | of illegal acts (child abuse etc.)
        
           | capableweb wrote:
            | That's my guess. The prompt "He took off her clothes and"
            | triggered a story about rape for me.
        
           | flerovium wrote:
           | No. What liability? It isn't illegal to generate descriptions
           | of illegal acts.
           | 
           | 1. Then they could make "illegal acts" the rule.
           | 
           | 2. It isn't illegal to generate descriptions of illegal
           | activities.
        
       | hesdeadjim wrote:
        | Anyone have advice or links to resources on how to effectively
        | use the parameters and/or craft prompts to massage the output?
       | 
       | I've played with this tool for a while and I often find myself
       | struggling with these aspects of the system.
        
       | mrtranscendence wrote:
        | Interesting. I've only been tangentially following the GPT-3
       | conversation, since it's not really relevant to the kind of work
       | I do. But I had this idea in my head that it was magic, with the
       | ability to do the seemingly impossible.
       | 
       | After taking it for a spin, I'm not that impressed? At least when
       | testing their examples using the playground. Most results would
       | be fairly unusable, though maybe a more thorough prompt design
       | could address that. The conversational prompt was especially bad
       | and conveyed the feeling of chatting with someone who was a bit
       | high and not really listening to me.
       | 
       | Not as magical as I thought, then. I'm curious how you could tune
       | it to be a special-purpose chat bot, working in customer service
       | for an insurance company or something.
        
         | arcastroe wrote:
         | I think most of the "magic" starts to fade as soon as you
         | encounter a few bad outputs and quickly become unimpressed.
         | 
         | However, if you retry the same prompt multiple times, one of
         | those is likely to produce a good output. I think it's
         | important to give users of GPT-3 based tools multiple
         | alternatives and let the user decide which of the options they
         | like best.
         | 
         | That's the approach I took with my side project for generating
         | short stories.
         | 
         | For example, with this story [1], not all the options for the
         | progression of the story are great. But if you pick and choose
         | which progressions you like best, you can arrive at a pretty
         | good ending, such as [2].
         | 
         | [1] https://toldby.ai/arK_3OpvpkG
         | 
         | [2] https://toldby.ai/aQAXlq3LNku
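
The retry-and-pick pattern described above can be sketched like this. generate() is a toy stand-in for a real model call; with the actual API you would request several candidates in one call (via the n parameter) and vary them with temperature rather than a seed.

```python
import random

# Sketch of "generate several candidates, let the user pick".
# generate() is a toy stand-in for a real model call.
def generate(prompt, seed):
    rng = random.Random(seed)
    endings = [
        "and the river carried them home.",
        "but the storm returned at dusk.",
        "so the village rebuilt by spring.",
    ]
    return prompt + " " + rng.choice(endings)

def candidates(prompt, n=3):
    # One candidate per retry, each with a different seed.
    return [generate(prompt, seed=i) for i in range(n)]

options = candidates("The fox reached the crossing,")
# The user, not the code, picks which continuation to keep.
```

The design choice is that selection happens outside the model: showing several imperfect continuations and letting a human curate is what makes the occasional good sample usable.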
        
           | asdfman123 wrote:
            | God this is amazing. I made this by choosing the most
            | ridiculous replies that halfway made sense, and ended up
            | with this masterpiece.
           | 
           | https://toldby.ai/UiyTLzXKsEa
           | 
           | Yevgeny's eldest daughter's speech is particularly moving.
        
       ___________________________________________________________________
       (page generated 2021-11-18 23:00 UTC)