[HN Gopher] GPTs: Custom versions of ChatGPT
       ___________________________________________________________________
        
       GPTs: Custom versions of ChatGPT
        
       Author : davidbarker
       Score  : 293 points
       Date   : 2023-11-06 18:18 UTC (2 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | alvis wrote:
        | I have been utilising GPT to create my own app, and now OpenAI
       | wants to be the only app that matters. I'm not sure whether I
       | should be excited or not :/
        
         | topicseed wrote:
         | They will allow revenue sharing so then it's a matter of how
         | many customers they'd be able to offer you for your app; and
         | whether it makes sense for you to distribute through them or
         | bypass them and distribute yourself.
        
           | elforce002 wrote:
           | This business model will only serve them in the long run.
            | Luckily for us, open source LLMs are getting traction and we
           | won't depend on "open"AI to implement features on our apps.
        
         | mickdarling wrote:
         | Well, they tried to put a government sponsored moat in the way
         | of other people building AI companies that would be competing
         | with them. Thankfully, they mostly seem to have whiffed the
         | ball on that one. This plan of theirs to monetize the creation
         | of agents and other tools that take advantage of their
         | underlying infrastructure is a good secondary kind of moat.
         | Because, if your tool relies on their underlying
         | infrastructure, even if you could build something different,
         | the infrastructure is required. This may be a "less-evil" way
         | to keep them building things and making tools available without
         | completely locking out competition.
        
         | fatherzine wrote:
         | Every prompt you put in somebody else's LLM goes into the
         | training set of the next iteration of said LLM, with the
         | explicit purpose of replacing you as a cognitively, and
         | therefore economically, relevant entity. The only dignified
          | move is not to play, though it's a very difficult choice. It's
          | probably not a winning move, though at this point there are no
         | obvious winning moves -- you and I and all our loved ones will
         | be obsoleted and replaced by tech within the next few years.
         | Concretely, to not play means to stop feeding the machine data,
         | i.e. disconnecting from the digital world. Given how
          | digitalized society is becoming, possibly also from modern
          | society altogether. Godspeed.
        
           | oezi wrote:
           | I think all technological revolutions have caused similar
           | transformations which obsolete certain types of activities
           | and push novel activities to the forefront.
           | 
           | Not playing is certainly possible but could be a losing
           | strategy as well.
        
           | bbor wrote:
           | I like to think about it from the perspective of the far
           | future, looking back on me as a historical actor. I have no
           | idea what will happen exactly, of course, but I can't imagine
           | a moral/social crisis of the past where "cross your fingers
           | and hope it goes away" is a move I'd approve of...
           | 
           | That said, your worry is one I definitely share. I guess I
           | just hope more people think of ways they can try to
           | ride/shape this wave, rather than stop/weather it.
        
           | willsmith72 wrote:
           | > you and I and all our loved ones will be obsoleted and
           | replaced by tech within the next few years
           | 
           | What is this? When has this ever proved true, despite being
           | spouted throughout all history? It's such an easy, throwaway
           | and meaningless sentence.
           | 
           | Please explain further. Replaced how? By what? What's going
           | to happen to the humans in the next few years?
        
             | Aerbil313 wrote:
             | What is your job? Chances are, it's nothing an AGI (based
             | on LLMs) can't do, and an AGI is possible, today. People
             | are building these things today, check out GitHub. And if
              | you don't believe GPT-4 can do your job cheaper than
             | you, just wait for GPT-N, which will be able to.
        
               | willsmith72 wrote:
               | Progress isn't linear. Point me to an AGI on GitHub. Our
               | definition of work changes based on the greatest possible
               | technology at any given moment.
        
         | zeroCalories wrote:
         | When your startup is a repackaging of another company's tech I
         | don't see how you can be surprised when a big player swoops in
         | to kill you off.
        
         | cornholio wrote:
         | What did you expect? Surely it was just an MVP, and you
         | expected OpenAI to commoditize its complements? Right?
         | 
          | In the long run, if your idea or app can be expressed as a
         | flavor of a general GPT, you will not be able to compete with
         | the AI gorillas. The space for AI startups is with custom,
         | highly niched data or capabilities, that cannot be found in a
         | general corpus, or that you can uniquely generate or control.
        
         | ahmedhosssam wrote:
          | I thought about the same thing. I've seen a lot of apps that
          | have similar ideas, like "ChatGPT chatbot for your data or your
          | website"; I don't know how they will deal with it.
        
       | awfulneutral wrote:
       | The icon for the game explainer one is a die with two 5s on it. I
       | wonder if they use ChatGPT to write their blog articles as
       | well...
        
         | timdiggerm wrote:
         | Definitely a very funny example of their own product's
         | shortcomings
        
         | 1970-01-01 wrote:
         | If it was any good, the "Negotiation" GPT would quickly get you
         | paying extra for its services.
        
       | fudged71 wrote:
       | Poe has done a great job in this space, quite a large marketplace
       | of existing bots. I'm excited to see what it can do with the
        | extra vision, DALL·E, and Code Interpreter models.
        
       | ahmedfromtunis wrote:
        | I am excited for any new tech that has the potential to advance
        | the human species. And this is one of those.
       | 
       | The challenge is how to adapt to create new opportunities.
        
       | colesantiago wrote:
       | Awesome, I've been waiting for something like this.
       | 
        | It looks like we are moving away from apps to web GPTs; the
        | chatbot interface is here to stay, and 'AI' is now the default
        | interface that is to be expected.
       | 
       | I also don't need to spend lots of money on a developer to test
       | my ideas out, this is great for product validation, I look
       | forward to playing with GPTs.
       | 
       | I can see writing, travel and other bots being more enhanced and
       | more powerful, hopefully existing startups will adapt to this
       | change.
       | 
       | Exciting and interesting times!
        
       | atleastoptimal wrote:
       | I wonder how much money I could make making "GPTs" full time.
        | Barrier to entry is nonexistent, so if this becomes a serious
        | thing people use, I imagine the highest-revenue ones will be
        | advertised externally or have some proprietary info/access.
        
         | siva7 wrote:
          | I'm not sure I understand?
        
           | atleastoptimal wrote:
            | The GPTs making the most money will be made by larger
           | companies who advertise use of it and maybe make it a funnel
           | to their in-app integration, or GPTs which are made effective
           | by information that is proprietary.
        
             | ca_tech wrote:
              | This makes me think of the Alexa skills ecosystem, which is
              | full of low-quality skills, many of which have poor data
              | practices abstracted away behind the scenes. How long until
             | "Chat with your favorite character from [Intellectual
             | Property]?" which is simply made to promote a new film or
             | collect data.
             | 
              | A good paper on the state of Alexa skills, BTW:
             | 
             | SkillVet: Automated Traceability Analysis of Amazon Alexa
             | Skills https://ieeexplore.ieee.org/document/9619970
        
           | bbor wrote:
           | Part of the announcement is that they'll open up an app store
           | w/ revenue sharing of some kind. "In the coming months" or
            | something.
        
         | stevesearer wrote:
         | I bet if you combined GPT creation with Zapier integrations you
         | could help a lot of people and companies.
        
           | toomuchtodo wrote:
           | Thoughts on Zapier trying to become OpenAI faster than
            | OpenAI can become Zapier? There will always be a long tail
           | of APIs that folks want integrating, but the most popular
           | APIs are perhaps only a few hundred in number (Google
           | Calendar and Slack, for example).
        
             | jimmyl02 wrote:
             | I feel like history has shown that those who own the
             | platform end up winning and in this case OpenAI's platform
             | of models seems much harder to recreate. My guess is this
             | would lead to Zapier using OpenAI as a platform and
             | eventually OpenAI would re-create Zapier's integrations
             | before the other way around.
        
               | toomuchtodo wrote:
               | I think this perspective is fair and historically
               | accurate wrt platform risk, but I also strongly believe
               | Zapier has substantial value beyond what historically has
               | been acting as a conduit between APIs. Customers don't
               | want a pipe between their services, they want to automate
               | their mundane work with a robot.
        
           | JCharante wrote:
           | So many zapier integrations are half baked. A lot of them are
           | good for reacting to events but not for searching for data
            | (e.g. you can use Zapier to react to a Jira ticket change but
            | can't use Zapier to query Jira for ticket info)
        
         | bbor wrote:
         | I think something could be said for "virality" as well - could
         | easily see some entertainment or lifehack themed templates
         | blowing up on TikTok. No one wants to post the output of the
         | _lame, less popular_ template on their story!
        
         | oezi wrote:
         | How many people made any money from the plugins for ChatGPT?
        
           | minimaxir wrote:
           | Plugins failing was more due to lack of visibility, which a
           | GPT Store does solve.
        
             | manojlds wrote:
             | Depends on features. How much data can I give it and how
             | much can I customize it?
        
             | singularity2001 wrote:
             | How does the store solve visibility? In the demo it looked
             | like there was a select list of Custom Assistants in the
             | left panel which he manually had to click, so not much
             | different from plug-ins?
             | 
              | Right, he said something about promoting the best, but what
              | about discoverability?
        
           | avarun wrote:
           | ChatWithPDF made a TON of money, for one.
        
             | euazOn wrote:
             | Source?
        
         | colesantiago wrote:
         | I agree.
         | 
          | Anyone with factual data (proprietary or not) is now just an
          | input away from AI / GPTs.
         | 
         | Data (or a new foundational model) is now the moat.
        
           | altdataseller wrote:
           | What sort of data do you think will be the most valuable to
           | input to AI?
        
           | j7ake wrote:
            | Data has always been the moat, no?
        
         | jatins wrote:
         | > I wonder how much money ...
         | 
         | I saw that announcement and my immediate thought was "God yet
         | another thing passive income youtubers will be shilling soon"
         | 
         | In general, I was a little confused by this. Sam's demo of
          | creating a GPT didn't seem particularly exciting either.
        
           | atleastoptimal wrote:
           | OpenAI in a weird way has mediocre marketing. The examples
            | they use for DALL·E 3 are way worse than the average ones I
           | see people cooking up on Twitter/Reddit. They only seem to
           | demo the most vaguely generic implementations of their app.
           | Even their DevDay logo is just 4 lines of text.
        
             | RobertDeNiro wrote:
             | Just the fact that they decided to stay with what is
              | essentially a highly technical acronym, i.e. GPT, as their
             | major product line says a lot.
        
               | imdsm wrote:
               | And yet in a way I find this refreshing
        
               | falcor84 wrote:
               | I think that's actually crucial in that they want to
               | trademark this otherwise generic term
        
               | Aerbil313 wrote:
               | Omg... Thinking about their push for regulation with
               | this... Are they after something like keeping advanced
               | generative pretrained transformer LLM model technology to
                | themselves, prohibiting others, at least in the American
                | economy where regulations can be applied?
        
               | andrewmunsell wrote:
               | To be fair, the name "ChatGPT" has quite a bit of
               | mindshare and I've found many non-technical folks
               | referring to _any_ generative AI product as  "ChatGPT" or
               | "GPT". Yet, if you asked any single one of them what
               | "GPT" stood for, they'd have no clue.
        
               | JCharante wrote:
               | To be fair, I'm a dev who uses chatgpt on an hourly basis
               | and I had no idea what GPT stood for until I googled it
               | just now. I think it's kinda smart to make people
               | strongly associate GPT with OpenAI
        
               | toomuchtodo wrote:
               | "Generative Pre-trained Transformers" for those who don't
               | want to leave the page.
        
               | singularity2001 wrote:
                | I've heard all permutations: "GPT", "GPP", "GTP", etc.
        
               | block_dagger wrote:
               | I think it might be the ubiquity of the term "GPT" as it
               | relates to OpenAI from a public branding perspective.
        
               | manojlds wrote:
               | Better that than do a silly rename ala X.
        
               | shawnc wrote:
                | I don't recall which interviews I saw it stated in, but I
                | believe Sam said in one or two of his world tour stops
                | that they deliberately went with a technical name instead
                | of a human name to help remind those using it that it's
                | not a person. So I think that, coupled with the mindshare
                | it already holds (as others have stated), it makes a lot
                | of sense to stick with it.
        
             | minimaxir wrote:
             | Sam's "all our marketing is from word-of-mouth" was
             | refreshingly honest.
        
               | toomuchtodo wrote:
               | Would be somewhat humorous to plug OpenAI into ad
               | platforms, give it budget, and say "go market yourself as
               | effectively as possible."
        
             | JackFr wrote:
             | Need some Boston Dynamics flair.
        
           | conradev wrote:
           | I would pay money for a GPT that was incredible at naming
           | types in Swift
           | 
           | I'd pay for one that was good at programming rubber ducking
           | 
           | There are specific sub-tasks that everyone would pay for to
           | make their lives easier. This marketplace is trying to make
           | that efficient
        
             | NiagaraThistle wrote:
             | I find NORMAL ChatGPT a good programming Rubber Duck and
             | use it often.
        
         | bilsbie wrote:
          | I'm not clear on who the market is. Why would someone buy one?
        
           | imdsm wrote:
           | Imagine a "GPT" that could generate websites and provide you
           | with a live deployment as you change it using natural
           | language. A website builder GPT that is primed to output and
           | design in a decent way, that has all the prep beforehand to
           | use particular libraries, and integrations with something
           | like Render.
           | 
           | People would pay for that.
           | 
           | Sh--... I better build it!
        
             | user_7832 wrote:
             | All fun and games until someone sues because the
             | "hallucinated" product description didn't match the
             | product...
        
             | block_dagger wrote:
             | One can come up with all sorts of ideas like this, but
             | building it will be a matter of slow iterations at prompt
             | engineering in a mixture of natural language and data
             | structures and will be at the whim of changing APIs,
             | including the backing ChatGPT model. Sounds messy, hard to
             | manage, hard to test...or am I missing what the actual
             | process will be for creating one of these?
        
               | mirekrusin wrote:
               | Worry not, surely there will be GPT for creating GPTs.
        
         | CSMastermind wrote:
         | I'm more confused how the revenue share works. Do they get part
         | of my ChatGPT subscription fee? Am I paying extra? Per bot? Per
         | amount of time I consult with the bot?
        
           | zwily wrote:
           | Yeah he didn't explain at all how GPTs would be monetized.
        
         | torginus wrote:
         | And seeing how OpenAI is moving up the value chain, what's the
         | guarantee they won't come up with an in-house competitor to the
         | bot that was built on their platform?
        
         | Uehreka wrote:
         | > I wonder how much money I could make making "GPTs" full time
         | 
         | I don't get why people are thinking along these lines at all.
         | Like, if you don't own and control the LLM yourself, what makes
         | you so sure OpenAI will allow you to make money at all? They
         | could make advertising externally or hosting external
         | marketplaces against the TOS. They could copy your GPT and put
         | their "official" version at the top of the store page. Just
         | because a technology is powerful does not necessarily mean you
         | can make money off of it.
        
           | michaelmior wrote:
           | > what makes you so sure OpenAI will allow you to make money
           | at all?
           | 
           | They might not. But if they do, I'd imagine there are a lot
           | of people who will try. And as long as you're not dependent
           | on the income stream they provide, you don't have much to
           | worry about if it gets shut off.
        
           | jes5199 wrote:
           | ChatGPT is the new iPhone. Dealing with a walled-garden app
           | store is never a great experience, but we'll do it because
           | that's where the users are
        
           | stale2002 wrote:
           | The answer is because the bot market is a creator economy
           | market.
           | 
           | It takes significant effort to come up with good use cases,
           | build the prompts, and advertise the bots.
           | 
           | So a company can get a lot of value by going after this up
           | and coming type of "content creator".
        
         | ren_engineer wrote:
         | not much based on what OpenAI has been doing lately, using
         | their own customers as product research and then copying the
         | best ideas. OpenAI pretty much has to keep a huge lead in model
         | capabilities or developers are going to stop using them for
         | this reason
         | 
         | basically copying the Microsoft strategy of Embrace, Extend,
         | Extinguish. Makes sense they took so much funding from
         | Microsoft
        
         | ilaksh wrote:
         | Right now, zero. They didn't say when it was rolling out or
         | what percentage they would share. It might be something like
         | 10%. Lol.
        
           | atleastoptimal wrote:
           | I will make a bot that makes GPTs, and a bot that makes other
           | bots that make GPTs
        
       | truakon89 wrote:
       | But where is the option/link to create a GPT? I can't find it
        
         | imdsm wrote:
         | 1 pm PST -- path /create
        
       | bbor wrote:
       | I think I speak for a few of us AI Doomers here when I say that
       | this makes me excited and terribly anxious at the same time.
       | So... well done OpenAI :). Great news, and a great feature!
       | 
       | I have no doubt that this will immensely increase uptake among
       | the less technically literate, since it will allow the techy
       | people in their life (or on the app store) to introduce them with
       | much less friction. It'll be like the little examples you can
       | find on the "New Chat" screen of every chatbot, but 1000x more
       | engaging
        
         | cubefox wrote:
         | You clearly aren't much of a doomer yet. There is nothing
         | exciting about a looming catastrophe.
        
           | pphysch wrote:
           | *laughs in Accelerationism*
           | 
           | https://en.wikipedia.org/wiki/Accelerationism
        
           | MeImCounting wrote:
           | If the catastrophe is mega-corps getting to monopolize a
           | valuable technology because people saw The Terminator and
           | thought it was a documentary then any announcement from
           | OpenAI is bad news.
        
             | cubefox wrote:
             | The catastrophe is humanity going extinct from
             | superintelligent AI. Like a native species going extinct
             | after an invasive species arrives. Mentioning Terminator is
             | like saying the Earth is flat because Hitler said it is
             | round.
        
               | MeImCounting wrote:
                | This reminds me of the Eliezer Yudkowsky tweet saying that
               | AI was going to hack our DNA and use our bodies to mine
               | bitcoin or something. Ridiculous fearmongering.
               | 
               | I have probably read more sci-fi than the average HN user
               | but the whole "superintelligent AI is going to kill us
               | all" hysteria is among the more ridiculous ideas I have
               | ever heard.
               | 
               | Really though I have entertained all the doomers
               | propositions and none of them seem any more likely than
               | the plot of the Matrix. The ideas that prop these fears
                | up are based on layers of ever more far-fetched
                | hypotheses about things that do not exist. If you have a
               | novel reason why AI poses an x-risk I am more than
               | interested in hearing it.
               | 
               | Here is a really interesting quote that I think might go
               | against some of the misanthropic tendencies of doomers
               | and the tech crowd in general but it really is more
               | relevant than ever: "There was also the Argument of
               | Increasing Decency, which basically held that cruelty was
               | linked to stupidity and that the link between
               | intelligence, imagination, empathy and good-behaviour-as-
               | it-was-generally-understood - i.e. not being cruel to
               | others - was as profound as these matters ever got."
        
               | cubefox wrote:
               | A species doesn't automatically get more altruistic
               | towards other species once it gets smarter. Look at how
               | many species humanity drove to extinction.
        
               | MeImCounting wrote:
                | True, humans have been remarkably ignorant throughout our
                | short history. Though you might notice that most folks
                | don't go around abusing animals or hurting other
               | people on purpose. Take from that what you will.
               | 
                | Give this essay a read if you're interested in good-faith
               | arguments about the danger of AI at the current state of
                | development.
                | https://1a3orn.com/sub/essays-propaganda-or-science.html
               | 
               | Maybe together as a species we can avoid hellish
               | cyberpunk dystopias brought on by regulatory capture of
                | the most powerful technology created by humans thus far. I
               | can only hope.
        
       | siva7 wrote:
       | So i guess the next wave of startups has been killed. I'm not
       | even sure what kind of startup still makes sense as a gpt thin
       | wrapper?
        
         | ethanbond wrote:
         | None of them, but here's the thing: they _never_ made any
         | sense.
        
           | RobertDeNiro wrote:
           | Yeah none of them ever did, and that was always very obvious.
           | If you can make it by wrapping a few API calls, you have no
           | moat and anyone can steal your idea/customers.
        
           | constantly wrote:
           | They never made sense long term, of course. But, plenty of
           | first movers made a bundle of money making chatgpt wrappers
           | and marketing the hell out of them. In that context, they
           | probably made sense for a small subset of people for a small
           | slice of time.
        
         | colesantiago wrote:
         | This meme is getting very old and tired.
         | 
          | What kinds have been 'killed'? I don't see this anywhere.
        
           | Tankenstein wrote:
           | Many startups have started over the past few years, trying to
           | build infrastructure (shovels) for companies to integrate
           | LLMs, or specific chat copilots trying to cater to a specific
           | usecase. Most are dead in the water once OpenAI subsumes
           | their feature set.
        
             | BoorishBears wrote:
             | ... is what people who don't understand positioning will
             | parrot time and time again.
             | 
              | Jasper isn't having a good time, but you'd think the fact
              | that anyone can now produce, for $20 a month, better output
              | than they did after spending millions of dollars on GPT-3
              | based pipelines would mean they're dead dead.
             | 
             | But instead they went and changed their positioning,
             | changed who their target market is, adjusted the UX, the
             | messaging, and the feature set, and now it's a product that
             | has a place even if OpenAI can give all of your marketers
             | an internal ChatGPT (unless your plan is to have 100
             | different "GPTs" for every marketing task in your company)
             | 
             | tl;dr: People fail to realize that OpenAI can offer your
             | startup's core value proposition tomorrow morning, and it
             | doesn't matter if they don't offer it in a format that
             | resonates with your target users.
             | 
             | You could have a cure to cancer and you'd still have to
             | market it correctly.
        
         | sergiotapia wrote:
         | If your startup is just a thin wrapper for GPT, it's DOA
         | regardless. Trivial moat is always a trivial moat.
         | 
         | You must use LLMs as a launching point to something else, some
         | kind of 10x in a vertical.
         | 
         | Being able to "chat" is table stakes and worthless to you as a
         | company by itself.
        
         | dragonwriter wrote:
         | > I'm not even sure what kind of startup still makes sense as a
         | gpt thin wrapper?
         | 
          | Startups don't make sense as thin wrappers around another
          | company's product when that product is a core offering that
          | the vendor is aggressively working to provide as an integrated
          | solution for as many markets as possible, which very much
          | applies to OpenAI offerings in general, and its chat models
          | especially.
         | 
         | A wrapper that also leverages some exclusive special sauce
         | data, algorithm, etc., for which you have a real moat as a key
         | component, that makes some sense. But just a thin wrapper
         | around GPT? That's just asking to have your market eaten by
         | OpenAI.
         | 
         | It might make some sense for products where the vendor is a
         | stable, steady-state infrastructure supplier for many markets
         | without any evident interest in entering the same market as the
         | startup, where the uncertainty across markets that they would
         | create by specifically targeting your startups market would
         | hurt them more with their established customers than they would
         | gain from your niche, but even that is risky because it
         | requires lots of potentially-erroneous assessments of how the
         | vendor would expect their other customers to react to them
         | acting in your market.
        
         | api wrote:
         | Thin wrappers around anything never make much sense. They're
         | trivially replaceable.
        
           | audiala wrote:
           | Even if they create a great UX/UI? GPT would be like the
            | engine of the car; there is still all the structure to build
           | around it.
        
         | levmiseri wrote:
         | Some use cases are still valid I hope. E.g. content generation
         | on a massive scale. An example of that that I'm tinkering with:
         | https://meoweler.com
        
           | anigbrowl wrote:
           | Can I ask what the goal is here? It's cool to be able to spin
           | up a nicely designed website (and it is nicely designed and
           | has a good aesthetic), but isn't the content going to be
           | semantically empty? I don't want to be negative but it feels
           | like cheap plastic imitation of a real thing, and the web is
           | already full of spammy low quality content. Aren't you in
           | danger of your products having a very short life cycle and
           | ending up as digital landfill, so to speak?
        
             | levmiseri wrote:
             | This specific manifestation will likely have the fate of a
             | digital landfill, but I'm generally excited about the
             | prospect of mass content generation in some specific domain
             | use case (as long as it's a field where the quality either
             | doesn't matter all that much or what GPT4 spits out is
             | sufficient).
             | 
             | This particular project is mostly messing around and
             | learning how to use the API, but even here I was surprised
             | about the overall quality of the generated content.
        
         | tacone wrote:
          | It's just commoditization: hard things are now becoming a lot
          | easier, and the value proposition moves up elsewhere.
         | 
          | It happened with hardware, operating systems, web tools such as
          | maps, etc.
        
       | Implicated wrote:
       | > Your access to custom GPTs isn't ready yet. We're rolling this
       | feature out over the coming days. Check back soon.
       | 
       | > Go to ChatGPT
       | 
       | Sad. Have a real-world use case ready to go and a plus account.
        
       | empath-nirvana wrote:
       | Is this basically them deploying fine tuned models? It wouldn't
       | be very interesting to just be using custom prompts.
        
         | bbor wrote:
         | I don't know for sure but I'd bet BIG money that these do not
         | include automatic fine-tuning, though I still understand them
         | to be a bit more powerful than just "custom prompts" -- think
         | templates, or sets of custom prompts for specific
         | (sub-)situations.
         | 
         | This is the kind of feature that will prove to be a minor
         | improvement for anyone on this forum, and a complete paradigm
         | shift for the less technically-inclined. IMO.
        
         | dragonwriter wrote:
         | > Is this basically them deploying fine tuned models?
         | 
          | From the description of the existing outside practice that it
          | is marketed as bringing into OpenAI's offering, it sounds more
          | like custom _prompts_, not fine-tuned models.
        
           | bearjaws wrote:
            | More specifically, it seems more like a RAG (retrieval
            | augmented generation) system than fine tuning.
        
       | minimaxir wrote:
       | The GPT Store will prove to be an interesting moderation and
       | quality control experiment for OpenAI. Apple/Google have spent a
       | lot of time and money on both of those things and they still have
       | issues, and that's not even accounting for the fact that AI
       | growth hackers will be the primary creators of GPTs. And a
       | revenue sharing agreement will provide even more incentive to do
        | the traditional App Store marketing shenanigans.
        
       | teabee89 wrote:
       | "Example GPTs are available today for ChatGPT Plus and Enterprise
       | users to try out including Canva and Zapier AI Actions." and yet
       | as a paying ChatGPT Plus customer, neither the Canva nor the
       | Zapier AI Actions link work for me, I get a "GPT inaccessible or
       | not found" error for Canva or Zapier.
        
         | vinni2 wrote:
         | I have the same issue, my guess is they are still rolling out.
         | 
         | Edit: here is the message I get:
         | 
         | Your access to custom GPTs isn't ready yet. We're rolling this
         | feature out over the coming days. Check back soon.
        
         | nomel wrote:
         | OpenAI is a sane company. They do rollouts for new features.
        
           | joshstrange wrote:
           | No, they just lie in their marketing
           | 
           | > Example GPTs are available today for ChatGPT Plus
           | 
           | or
           | 
           | > Starting today, no more hopping between models; everything
           | you need is in one place.
           | 
           | Neither of which are true. I'm a paying user and I have
           | access to neither. They do this _all the time_. They announce
           | something "available immediately" and it trickles out a week
           | or more later. If they want to do gradual rollouts (which is
           | smart) then they should say as much.
        
       | duxup wrote:
       | Is this just the role and content type data being set as you
       | might with their dev tools?
        
       | gzer0 wrote:
       | Posting from another comment in a different thread, everything
        | that is new from OpenAI developer day:
        | 
        |   - Context length extended to 128k (~300 pages).
        |   - Better memory retrieval across a longer span of time.
        |   - 4 new APIs: DALL·E 3, GPT-4 Vision, TTS (speech synthesis),
        |     and Whisper V3 (speech recognition).
        |   - GPT-4 Turbo, a more intelligent iteration, confirmed as
        |     superior to GPT-4.
        |   - GPT-4 Turbo pricing significantly reduced, about 3 times less
        |     expensive than GPT-4. Input and output tokens are respectively
        |     3x and 2x less expensive than GPT-4. It's available now to all
        |     developers in preview.
        |   - Improved JSON handling (via JSON mode) and function invocation
        |     for more sophisticated control.
        |   - Doubled rate limits with the option to request increases in
        |     account settings.
        |   - Built-in retrieval-augmented generation (RAG) and knowledge
        |     current as of April 2023.
        |   - Whisper V3 to be open-sourced and added to the API suite.
        |   - Copyright Shield initiative to cover legal fees for
        |     copyright-related issues.
        |   - Ability to create your own, custom "GPTs".
        |   - Assistants API and new tools (Retrieval, Code Interpreter).
        |   - 3.5 Turbo 16k now cheaper than old 4k. 0.003c per 1k in /
        |     0.004c per 1k out.
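        | 
        | For the curious, a minimal sketch of what calling GPT-4 Turbo with
        | the new JSON mode might look like via the v1 Python SDK (the model
        | name and details are assumed from the announcement, untested):
        | 
        |     # pip install --upgrade openai  (v1.x SDK)
        |     from openai import OpenAI
        | 
        |     client = OpenAI()  # reads OPENAI_API_KEY from the environment
        |     resp = client.chat.completions.create(
        |         model="gpt-4-1106-preview",               # assumed GPT-4 Turbo preview name
        |         response_format={"type": "json_object"},  # new JSON mode
        |         messages=[
        |             {"role": "system", "content": "Reply only with a JSON object."},
        |             {"role": "user", "content": "List three primary colors."},
        |         ],
        |     )
        |     print(resp.choices[0].message.content)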
        
         | pvg wrote:
         | https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
        
       | ilaksh wrote:
       | I can't access the GPTs stuff. I haven't actually got the last
       | update either with the combined models or anything.
        
       | runjake wrote:
       | https://archive.ph/fEp7m
       | 
       | For others like me that are getting errors accessing the page.
        
       | ilaksh wrote:
       | https://chat.openai.com/gpts/editor "You do not currently have
       | access to this feature"
        
         | Roritharr wrote:
          | Probably a staged rollout, once again leaving people outside of
          | the US waiting.
        
           | ilaksh wrote:
            | I mean, I am in the US and am still waiting for the last
            | rollout..
        
           | judge2020 wrote:
           | I'm in the US and still don't have it.
        
           | ccakes wrote:
           | Ahh.. is that why I've been on the waitlist since day 1 for
           | ChatGPT Enterprise?
           | 
           | C'mon OpenAI, we're /trying/ to give you money here!
        
           | chabad360 wrote:
           | Nah, I'm in the US (with a US based account) and I'm still
           | getting the message it's rolling out over the next few days
           | (you have to open a sample to see that message).
        
       | littlestymaar wrote:
       | > The best GPTs will be invented by the community
       | 
       | > We believe the most incredible GPTs will come from builders in
       | the community. Whether you're an educator, coach, or just someone
       | who loves to build helpful tools, you don't need to know coding
       | to make one and share your expertise.
       | 
       | "Please work for us for free, while we keep all the product of
       | your work for ourselves like we did with the content we scraped
       | on the internet."
       | 
       | OpenAI really is next level parasitism.
        
         | leobg wrote:
         | Isn't that the game they all play? Amazon Marketplace. Apple
         | App Store. Let the guinea pigs run. See which one gets the
         | furthest. Then take away its lunch.
        
       | glitchc wrote:
        | The cynic in me thinks calling these "GPTs" is a ploy
        | to trademark the term "GPT".
        
         | leobg wrote:
         | Thought the same thing.
        
       | heavyshark wrote:
       | Any word on when the GPTs will be available?
        
         | imdsm wrote:
         | 1 pm PST
        
       | Y_Y wrote:
       | Fuck that, release your models and let the "community" (of unpaid
       | volunteers) freely use and own what they create
        
       | rickcarlino wrote:
       | I'm very excited about all the new developments but must say that
       | they really dropped the ball on marketing this one. The feature
       | name is ambiguous and unsearchable.
        
       | bparsons wrote:
       | Hasn't this been around for a while?
        
       | cafxx wrote:
       | Would be nice also if they fixed the ubiquitous "network errors"
       | that happen approximately every single time...
        
       | mg wrote:
       | So these "GPTs" are the combination of predefined prompts and
       | custom API access? Not customly trained LLMs?
       | 
       | If so, I guess you can make such a "GPT" on your own server and
       | independent from a specific LLM service by using a prompt like
       | ...you have available an API "WEATHER_API". If you need the
       | weather for a given city for a given day, please say
       | WEATHER_API('berlin', '2022-11-24')         and I will give you a
       | new prompt including the data you         asked for...
       | 
       | Or is there some magic done by OpenAI which goes beyond this?
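        | 
        | A rough, untested sketch of that loop, with hypothetical
        | model_complete and get_weather helpers standing in for whichever
        | LLM service and weather API you happen to use:
        | 
        |     import re
        | 
        |     SYSTEM = ("You have available an API WEATHER_API. If you need the "
        |               "weather for a given city on a given day, reply with only "
        |               "WEATHER_API('<city>', '<YYYY-MM-DD>') and wait for the result.")
        | 
        |     def chat(user_msg, model_complete, get_weather):
        |         history = [("system", SYSTEM), ("user", user_msg)]
        |         reply = model_complete(history)          # any hosted or local LLM
        |         m = re.match(r"WEATHER_API\('(.+?)', '(.+?)'\)", reply.strip())
        |         if m:
        |             # the model asked for data: call the real API and hand it back
        |             data = get_weather(m.group(1), m.group(2))
        |             history += [("assistant", reply),
        |                         ("user", f"WEATHER_API result: {data}")]
        |             reply = model_complete(history)
        |         return reply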
        
         | BoorishBears wrote:
         | If you want to be independent for academic/personal reasons,
         | sure you can.
         | 
         | If you want reasoning capabilities that Open Source hasn't
          | matched before today, and I'm guessing just got blown out of
          | the water again today... there's no reason to bother.
        
           | mg wrote:
           | You don't need to use an open source LLM for the approach I
           | described. You can still send the prompts to OpenAI's GPT-4
           | or any other LLM which is available as a service.
        
             | BoorishBears wrote:
             | What other LLM will compete with GPT-4 Turbo (+ V)? At most
             | you're hedging that Anthropic releases a "Claude 2 Turbo (+
             | V)": is complicating your setup to such a ridiculous degree
             | vs "zero effort" worth it for that?
             | 
             | If things change down the line the fact you invested 5
             | minutes into writing a prompt isn't going to be such a huge
             | loss anyways, absolutely no reason to roll your own on
             | this.
        
               | dragonwriter wrote:
               | > If things change down the line the fact you invested 5
               | minutes into writing a prompt isn't going to be such a
               | huge loss anyways
               | 
               | If things change down the road such that your tool (or a
               | major potential downstream market for your tool) is
               | outside of OpenAI's usage policies, the fact that you
               | invested even a few developer-weeks into combining the
               | existing open source tooling to support running your
               | workload either against OpenAI's models or a toolchain
               | consisting of one or more open source models (including a
               | multimodal toolchain tied into image generation models,
               | if that's your thing) with RAG, etc., is going to be a
               | win.
               | 
               | If it doesn't, maybe its a wasted-effort loss, but
               | there's lots of other ways it could be a win, too.
        
           | dragonwriter wrote:
           | > If you want to be independent for academic/personal
           | reasons,
           | 
           | Or OpenAI's usage policy limits reasons (either because of
           | your direct use, or because of the potential scope of use you
           | want to support for downstream users of your GPT, etc.) Yes,
           | OpenAI's model is the most powerful around. Yes, it would be
           | foolish not to take advantage of it, if you can, in your
           | custom tools that depend on _some_ LLM. Depending on your
           | use, it may not make sense to be fully and exclusively
           | _dependent_ on OpenAI, though.
        
         | burningion wrote:
         | That's the thing, we don't know.
         | 
         | The lack of transparency for how the product works behind the
         | scenes will most likely make it difficult to build something
         | effectively.
        
         | jatins wrote:
          | Yeah I don't _think_ there is a lot of "magic" in custom GPTs.
         | 
         | However creating something like this previously required a
         | Jupyter notebook but now just...asking for it. Makes it
         | accessible to 10x more people
        
       | ChrisArchitect wrote:
       | [dupe]
       | 
       | More discussion over here:
       | https://news.ycombinator.com/item?id=38166420
        
         | minimaxir wrote:
         | HN generally allows multiple related announcements for keynotes
         | such as this.
        
       | tracerbulletx wrote:
       | Can anyone explain what "extra knowledge" means specifically? Is
       | that like fine tuning? How much data can I give it to learn? How
       | much can it retain? Can it be updated over time?
        
         | minimaxir wrote:
         | The keynote used a .txt file of a lecture that the user
         | uploaded as a data source the model can select from. From a
         | technical perspective, it's anoher data source for retrieval-
         | augmented generation (RAG) doing a vector search in the
         | background:
         | https://platform.openai.com/docs/assistants/tools/knowledge-...
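          | 
          | In other words, roughly this flow (an illustrative sketch only,
          | not necessarily what OpenAI runs internally; the embedding model,
          | chunking choices, and the lecture/question strings are
          | assumptions):
          | 
          |     import numpy as np
          |     from openai import OpenAI
          | 
          |     client = OpenAI()
          | 
          |     def embed(texts):
          |         resp = client.embeddings.create(
          |             model="text-embedding-ada-002", input=texts)
          |         return np.array([d.embedding for d in resp.data])
          | 
          |     # 1. chunk the uploaded lecture .txt and index the chunks
          |     chunks = [lecture[i:i + 1000] for i in range(0, len(lecture), 1000)]
          |     index = embed(chunks)
          | 
          |     # 2. at question time, retrieve the most similar chunks
          |     q = embed([question])[0]
          |     sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
          |     context = "\n".join(chunks[i] for i in np.argsort(sims)[-3:])
          | 
          |     # 3. stuff the retrieved text into the prompt
          |     answer = client.chat.completions.create(
          |         model="gpt-4-1106-preview",
          |         messages=[
          |             {"role": "system",
          |              "content": "Answer using this context:\n" + context},
          |             {"role": "user", "content": question},
          |         ],
          |     )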
        
           | tracerbulletx wrote:
           | Ah ok, thank you.
        
           | singularity2001 wrote:
           | "optimizes for quality by adding all relevant content to the
            | context of model calls." So, for their own profit, they
           | maximize recall and let GPT handle precision.
        
         | Racing0461 wrote:
          | I doubt it's fine tuning (actually changing the model
          | weights). It's more like an "I'm going to paste a text blob,
          | then in the following chats I will ask questions about it"
          | type of inner prompt.
        
           | singularity2001 wrote:
           | For longer documents it uses vector embeddings
        
         | ankit219 wrote:
         | They showed a demo where you could upload a file while creating
         | an agent. As others have answered. I think it's about
         | configuring an agent in a way that you give it material on some
         | specific topic (one file, multiple files) and it uses Retrieval
         | to augment the answer based on the source material.
         | 
         | Google launched Notebook LM[1] a while back which does a
         | similar thing conceptually. It allows you to import a Google
         | drive folder with docs of the stuff you would want to
         | understand and then just chat with it. It's a good product but
         | restrictive in the sense that it only allowed Google docs.
         | 
         | [1] https://blog.google/technology/ai/notebooklm-google-ai/
        
         | vunderba wrote:
         | The demo just showed a file dialog box where they could upload
         | a set of static files. What I'd really like to see is the
         | ability to sync a source integration (for example into a GitHub
         | repo or a notion account), and it would always pull relevant
         | information using a RAG architecture.
        
       | dang wrote:
       | Related ongoing threads:
       | 
       |  _New models and developer products_ -
       | https://news.ycombinator.com/item?id=38166420
       | 
       |  _OpenAI DevDay, Opening Keynote Livestream [video]_ -
       | https://news.ycombinator.com/item?id=38165090
        
       | Racing0461 wrote:
        | Barrier to entry for commercial or useful GPTs/Plugins/"Agents"
        | is almost non-existent since it's just a str.concat(hiddenprompt,
        | user_prompt); the secret sauce (i.e. the weights, chat timeout and
        | context length) is already generated/limited by OpenAI, and they
        | already have the content moderation/"HR dept" baked in at the
        | weights level. So even if one were to create a "story writer
        | helper" GPT, I don't see how it would be of any value generating
        | new, unique and interesting content beyond the prompt recipes
        | we already have on reddit/r/chatgpt (here's 1000 prompts for every
        | use case) that create Netflix-like plots (inclusively diverse
       | casting across ethnicities and orientations, socially conscious
       | storylines, modern jargon-filled dialogue, themes of empowerment,
       | progressive characters, and non-traditional relationship
       | dynamics).
       | 
        | This will most likely be like the Google Play Store, with 99% of
        | GPTs being repackaged public prompts.
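        | 
        | Literally, the whole "product" boils down to something like this
        | (call_llm and some_chat_completion_fn stand in for whatever
        | completion backend the prompt ultimately gets sent to):
        | 
        |     def make_gpt(hidden_prompt, call_llm):
        |         # a "GPT" is just a closure over a hidden prompt
        |         return lambda user_prompt: call_llm(hidden_prompt + "\n\n" + user_prompt)
        | 
        |     story_helper = make_gpt("You are a supportive story-writing coach.",
        |                             call_llm=some_chat_completion_fn)
        |     print(story_helper("Give me a heist plot set on Mars."))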
        
       | ofermend wrote:
        | Excited about GPT4-Turbo and longer sequence lengths. Looking
        | forward very much to faster inference. We just released
        | Vectara's "Hallucination Evaluation Model" (aka HEM) today
        | https://huggingface.co/vectara/hallucination_evaluation_mode...,
        | with a leaderboard:
        | https://github.com/vectara/hallucination-leaderboard
        | GPT-4 was already in the lead.
       | 
       | Looking forward to seeing GPT4-Turbo there soon.
        
       | rco8786 wrote:
       | Does anyone know if this is just a "native" RAG implementation?
       | Or if it's actually fine tuned models?
        
         | minimaxir wrote:
         | Native RAG, with likely some secret sauce to align the models a
         | bit better:
         | https://platform.openai.com/docs/assistants/tools/knowledge-...
         | 
         | > Retrieval augments the Assistant with knowledge from outside
         | its model, such as proprietary product information or documents
         | provided by your users. Once a file is uploaded and passed to
         | the Assistant, OpenAI will automatically chunk your documents,
         | index and store the embeddings, and implement vector search to
         | retrieve relevant content to answer user queries.
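          | 
          | On the API side, wiring that up looks roughly like the following
          | (a sketch based on the Assistants docs linked above; the file
          | name and instructions are placeholders, untested):
          | 
          |     from openai import OpenAI
          | 
          |     client = OpenAI()
          | 
          |     # upload a document for the assistant to retrieve from
          |     file = client.files.create(file=open("lecture.txt", "rb"),
          |                                purpose="assistants")
          | 
          |     assistant = client.beta.assistants.create(
          |         name="Lecture helper",
          |         instructions="Answer questions using the attached lecture notes.",
          |         model="gpt-4-1106-preview",
          |         tools=[{"type": "retrieval"}],  # OpenAI chunks, embeds, and searches the file
          |         file_ids=[file.id],
          |     )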
        
       | hospitalJail wrote:
       | This is sooo nice. What is everyone using for theirs? Here is
       | mine
       | 
       | "Be brief, give 10 answers, give probabilities that each answer
       | is the best/most correct"
        
       | Vfiorx wrote:
       | Make and train your own LLM and you can literally duplicate your
       | brain power and productivity. For almost free. Where's everyone's
       | digital doubles?
        
       | Vfiorx wrote:
       | I don't understand why more people aren't creating digital
       | doubles of their brain..? You train your own LLM for practically
       | free and have your own digital double to maximize any and all
       | productivity. Why is there not more of this?
        
       | bluecrab wrote:
       | Startup funeral.
        
       | nojvek wrote:
       | Googles: Custom versions of Google.
       | 
       | Anyone can easily build their own Google. No coding is required.
       | You can make them for yourself, just for your company's internal
       | use, or for everyone. Creating one is as easy as starting a
       | conversation, giving it instructions and extra knowledge, and
       | picking what it can do, like searching the web, making images or
       | analyzing data.
       | 
        | The whole point of ChatGPT was to go to one single place for all
        | your knowledge needs.
       | 
        | The whole point of Amazon is the largest collection of things
        | you can buy and have delivered to your doorstep in a few
        | days.
       | 
       | I don't want many GPTs. I want one GPT that can reliably digest
       | all information available on the internet, understand it,
       | organize it and allow me to do useful things with it.
       | 
        | It's the same enshittification that Meta is doing on WhatsApp with
        | celebrity AIs like the Snoop Dogg AI, which are gimmicks.
       | 
       | Please don't build gimmicky features. Leave that to the community
       | via integrations.
        
         | jes5199 wrote:
         | for a while, adding a custom google search to your website was
         | considered a great feature
        
         | BoorishBears wrote:
         | https://search.brave.com/goggles/create
         | 
         | https://developers.google.com/custom-search/v1/overview
         | 
         | https://learn.microsoft.com/en-us/bing/search-apis/bing-cust...
        
         | vunderba wrote:
          | They still have that - it's just regular GPT-4. One immediate
          | application of this is that it makes it trivial to
         | create a fine tuned version of GPT based on your data, where
         | you can upload a series of documents that can basically act
         | like a set of embeddings that augment the regular GPT trained
         | data.
        
           | nomel wrote:
           | > create a fine tuned version of GPT
           | 
           | No. It's unclear fine-tuning is happening. Many are guessing
           | it's RAG.
        
         | gustavopch wrote:
         | Being able to have multiple personae could be very useful.
         | 
         | One persona may not give you the answer you're looking for, but
         | another one may. I think maybe they should require the GPTs to
         | have human names though so people intuitively understand that.
         | 
         | Like, Paul can't help me with this task, so let me ask Monica
         | instead.
         | 
         | They could even interact.
        
           | nomel wrote:
           | A good example is creative/idea work vs fact work. You don't
           | want creative facts, and you don't want fact bounded
           | creativity. You either have a prompt ready to paste, to prime
           | the conversation/context, or you can use a personality.
           | 
           | One "uber" AI is great, but it requires guidance into the
           | context you're interested in, including yourself. For
           | example, the default ChatGPT will assume you're uneducated
           | about any topic you ask about.
           | 
           | I think this all fits perfectly into what Sam Altman talked
           | about in the Lex Friedman podcast: people want an AI that
           | fits their own worldview/context. Custom instructions, and
           | "about yourself" are good starts, but sometimes you want to
           | talk to a chef, and sometimes a scientist.
        
         | gfodor wrote:
         | > The whole point of ChatGPT was go to one single place for all
         | your knowledge needs.
         | 
         | That's just your own perception. OpenAI is trying to build AGI.
         | You entered into the storyline at a specific junction and
         | jumped to conclusions based on the limits of your own
         | imagination, or something.
        
           | nojvek wrote:
           | Right. All I'm sayin is I want the God level AGI, the king of
           | kings of AGI.
           | 
           | Allowing me to customize ChatGPT is the same itty bitty
           | intelligence, like putting a new mask on ChatGPT.
           | 
           | OpenAI changed their values to 'AGI Focus'. This seems like
           | OpenAI losing focus.
        
       | jes5199 wrote:
       | they mentioned revenue sharing in the keynote, and I'm eager to
       | find out how that is going to work. There isn't much money in the
       | $20/month subscription to go around to very many other developers
        
         | ilaksh wrote:
         | What I was thinking for my own agent hub thing was to sell
         | universal credits and charge per use or token. Then agent
         | developers could specify what they want to charge.
        
           | jes5199 wrote:
           | that's kinda interesting but I'm not sure it maps well to the
           | value added by a GPT app. Like, I'm imagining that I'll do
            | old-fashioned API work and GPT will be the UI layer - sure, the
           | tokens are the most expensive part, but the value for the
           | customer comes from easy access to whatever is on the backend
        
       | Dowwie wrote:
        | Does this summarize to pre-defined custom instructions and
       | workflows? What categories of fine tuning are associated with
       | this work?
        
       | dongobread wrote:
       | I don't think the target market for this is people looking for
       | extremely knowledgeable LLMs that can handle deep technical
       | tasks, given that you can't even finetune these models.
       | 
       | I'd guess this is more of an attempt to poach the market of
       | companies like character.ai. The market for models with a
       | distinct character/personality is absolutely massive right now
       | (see: app store rankings) and users are willing to spend insane
       | amounts of money on it (in part because of the "digital
       | girlfriend" appeal).
        
         | crooked-v wrote:
         | > digital girlfriend
         | 
         | The ban on "adult themes" is part of the reason people use
         | services other than OpenAI for that kind of thing in the first
         | place.
        
       | JCharante wrote:
       | > ChatGPT Plus now includes fresh information up to April 2023
       | 
       | I'm so happy; I can finally ask questions about expo and trpc and
       | get fresh answers. I confirmed this by asking chatgpt about the
       | superbowl winners in 2022 & 2023.
        
       | singularity2001 wrote:
       | "you can now create custom versions of ChatGPT"
       | 
       | how? login opens unrelated tab.
       | 
       | Found it:
       | 
       | https://chat.openai.com/gpts/discovery -> create own ->
       | 
       | https://chat.openai.com/gpts/editor
        
       | anonymouse008 wrote:
       | Is this scary? They said revenue share - it sounds like a
       | streaming platform software licensing model. That sounds like
       | getting paid much less than 70%.
        
       | m3kw9 wrote:
       | So basically selling prompts but openai keeps the prompt a
       | secret. Is that it?
        
       | anonu wrote:
       | A couple dozen startups just died.
        
       | joshstrange wrote:
       | > We've made ChatGPT Plus fresher and simpler to use
       | 
       | > Finally, ChatGPT Plus now includes fresh information up to
       | April 2023. We've also heard your feedback about how the model
       | picker is a pain. Starting today, no more hopping between models;
        | everything you need is in one place. You can access DALL·E,
       | browsing, and data analysis all without switching. You can also
       | attach files to let ChatGPT search PDFs and other document types.
       | Find us at chatgpt.com.
       | 
        | It's so annoying how they say this. I refresh and I still have to
        | hop between models. Just say "rolling out over the next week" if
       | that's what's happening. I even logged out and back in and still
       | the same old way of doing it.
        
       | thih9 wrote:
       | Is this going to be openai's moat?
       | 
       | E.g. one popular comment in the submission about twitter's new
       | "edgy ai" was that it could be reimplemented as a chatgpt
       | prompt[1]. Looks like this is even more relevant now.
       | 
       | [1]: https://news.ycombinator.com/item?id=38148845
        
       ___________________________________________________________________
       (page generated 2023-11-06 21:00 UTC)