[HN Gopher] ChatGPT Plugins
       ___________________________________________________________________
        
       ChatGPT Plugins
        
       Author : bryanh
       Score  : 1231 points
       Date   : 2023-03-23 16:57 UTC (6 hours ago)
        
 (HTM) web link (openai.com)
 (TXT) w3m dump (openai.com)
        
       | kenjackson wrote:
        | The rate of improvement with GPT has been staggering. Just
        | in January I spent a lot of time working with the API, and
        | almost everything I've done has been made easier over the
        | past two months.
       | 
       | They're really building a platform. Curious to see where this
       | goes over the next couple of years.
        
         | mariojv wrote:
         | I agree. Part of me wonders how much they're using GPT to
         | improve itself.
        
           | MagicMoonlight wrote:
           | When we were first breaking it people were wondering if the
           | developers were sitting in threads looking for new exploits
           | to block.
           | 
           | Now I'm wondering if the system has been modifying itself to
           | fix exploits...
        
         | Idiot_in_Vain wrote:
         | >> Curious to see where this goes over the next couple of
         | years.
         | 
         | Probably will make half of the HN users unemployed.
        
         | pcurve wrote:
         | I just got access to Bard. I would hate to be Google leaders at
         | the moment.
        
           | revelio wrote:
           | It's incredible how Google started ahead and then shot
           | themselves repeatedly in the face by granting so much
           | internal power to dubious AI "ethicists". Whilst those guys
           | were publicly Twitter-slapping each other, OpenAI were
           | putting in place the foundations for this.
        
             | kenjackson wrote:
             | The issue wasn't/isn't AI ethicists. It's their incentive
             | model. They simply have trouble understanding how this
             | helps their business. Same reason why Blockbuster found
             | themselves behind Netflix, despite having clear visibility
             | to watch Netflix slowly walk up and eat their lunch right
             | in front of them.
        
               | jimkleiber wrote:
               | Well, I'm curious, what is the business model of it? Just
               | charge per 1k tokens or subscription? How do the plugins
               | make money off this?
        
               | pcurve wrote:
               | that...without eroding their cash cow search business.
        
               | nunodonato wrote:
                | Plugins don't need to make money; you are still
                | using tokens and paying for those. The more plugins
                | you use, the more conversation (and tokens) you
                | need.
        
       | squarefoot wrote:
        | The real question is: how much will the option cost to have
        | it return results that are _not_ sponsored?
        
       | bobdosherman wrote:
        | I used to say that point-and-click statistical software
        | (like JMP) was the same as giving people who didn't know
        | what they were doing a loaded gun. But democratizing access
        | to advanced statistics...yada yada...who cares about
        | asymptotic theory and identification and what not. Then
        | came R and Python and APIs that try to abstract as much as
        | possible: more loaded guns. But the talk of those loaded
        | guns is really just PhD-holders being obnoxious to some
        | degree (though not completely wrong, because stats can be
        | misused...). But this really does seem like dumping a
        | bunch of loaded guns all over the place. Nope
        
       | yawnxyz wrote:
       | Wonder if you can plant a prompt injection into this thread to
       | mess with their crawler/scraper and Chat results?
        
       | s1mon wrote:
       | I know a large commercial entity will never do this, but I'd love
       | to see a Sci-Hub plugin connected with the Wolfram plugin and
       | whatever other plugins help to understand various realms of
       | study. Imagine being able to ask ChatGPT to dig through research
       | and answer questions based on those papers and theses.
        
         | josecyc wrote:
         | Yep
        
         | seydor wrote:
         | Google scholar already has access to everything published. I
         | hope their chatbot version does that
        
       | yosito wrote:
       | Seems like someone already wrote an HN plugin. More than one
       | enthusiastic comment per minute on this thread and it was just
       | posted half an hour ago. Plus HN is filled with enthusiasm about
       | ChatGPT today. Seems sus.
        
         | qbasic_forever wrote:
         | It's really over the top hype the likes of which we haven't
         | seen since self driving, blockchain/bitcoin, etc. I suspect in
         | a year there will be some interesting uses of LLMs but all of
         | the 'this changes EVERYTHING' pie in the sky thinking will be
         | back down to earth.
        
           | sergiotapia wrote:
           | NFTs never provided a single use case. It was always some
           | bullshit to pretend it's valuable to rugpull people.
           | 
           | ChatGPT is useful today for real use cases. It's tangible!
        
           | baq wrote:
           | Get this thing running in a tight loop with an internal
           | monologue in a car and you'll mostly solve self driving.
        
           | akavi wrote:
           | The difference is unlike self-driving and crypto, LLMs are
           | providing value to people _today_.
           | 
            | In my personal life, GPT4 is a patient interlocutor to
            | ask about nerdy topics that are annoying to google (eg,
            | yesterday I asked it "What's the homologue of the
            | caudofemoralis in mammals?", and had a long convo about
            | the subtleties of when it is and isn't ok to use "ge"
            | as the generic classifier in Mandarin.)
           | 
           | Professionally, it's great for things like "How do I
           | recursively do a search and replace `import "foo" from "bar"`
           | to `import "baz" from "buzz"`, or "Pull out the names of
           | functions defined in this chunk of scala code". This is
           | _without_ tighter integrations like Copilot or the ones
           | linked to above.
        
             | qbasic_forever wrote:
             | Let's see where it is in a year...
             | 
             | People thought Alexa, Siri, etc. would change everything.
             | Amazon sunk 14 billion into Alexa alone. And yet it never
             | generated any money as a business for them. ChatGPT is just
             | an evolution of those tools and interactions.
             | 
             | For your professional use how do you know it's giving you
             | non-buggy code? I would be very skeptical of what it
             | provides--I'm not betting my employment on its quality of
             | results.
        
               | rvnx wrote:
                | Not at all. Alexa, Google Now and Siri have always
                | been gadgets similar to Microsoft's Office Clippy.
               | 
               | They had basic answers and pre-recorded jokes, nothing
               | that interesting, mostly gimmicks. You couldn't have a
               | conversation where you feel the computer is smarter than
               | you.
               | 
               | It was more like "Tip of the day"-level of interaction.
        
               | MagicMoonlight wrote:
                | Alexa and Siri were always trash. They can't even
                | do basic things.
                | 
                | Nobody thought they were good; they were just
                | shilled so that the Chinese/advertisers could have
                | a mic in every house.
        
               | unshavedyak wrote:
               | The thing is people wanted Alexa/Siri/Assistant to be
               | what ChatGPT is today.
               | 
               | You're seeing the hype that all those Assistants drummed
               | up for years paying off for a company which just ate
               | their lunch. I wouldn't even consider buying
                | Siri/Alexa/Assistant, yet here I am with a $20/mo
                | sub and I'd pay incrementally more depending on the
                | features/offerings.
        
       | iamsanteri wrote:
        | So that square icon to stop generating a response was
        | actually intended? I thought it was some sort of Font
        | Awesome icon that never loaded properly in my chats :'D
        
       | sourcecodeplz wrote:
        | So live data is coming.
        
         | saliagato wrote:
         | Information retrieval to the prompt
        
       | hmate9 wrote:
        | Giving an AI direct access to a code interpreter is exactly
        | how you get Skynet.
        | 
        | Not saying it's likely to happen with the current ChatGPT,
        | but as these inevitably get better the chances are forever
        | increasing.
        
       | pzo wrote:
        | This could be a big win for Microsoft (and a big loss for
        | Google and Amazon's clouds). Since ChatGPT has to query
        | those plugins with HTTP(?) requests, companies might move
        | their servers to Azure to reduce latency and bandwidth
        | costs.
        
       | Seattle3503 wrote:
        | ChatGPT is going to get blamed for misbehaving plugins.
        | While this is a huge opportunity, it also seems like a huge
        | risk.
        
       | qgin wrote:
       | This coming on the heels of the super underwhelming Bard release
       | makes me actually wonder for the first time if Google has the
       | ability to keep up. Not because I doubt their technical
       | capabilities, but because they're just getting out-launched by a
       | big factor.
        
         | crop_rotation wrote:
         | This might be the biggest threat to Google search (apart from
         | OS vendors changing defaults) in a long long time. One problem
         | Google faces is that they have to make money via search.
         | Several other products are subsidized via search, so taking a
         | hit on search revenue itself is out of the question. Compared
          | to Microsoft, which makes money on other stuff, search
          | (and knowledge discovery) is more like a complement, which
          | they can easily operate at near break-even for a very long
          | time (maybe even make it a loss leader).
          | 
          | If Google had to launch something similar to the new Bing
          | to general availability, the cost of search would surely
          | go up and margins would go down. Is the Google
          | organisational hierarchy even set up to handle a hit on
          | search margins? AFAIK search prints money and supports
          | several other loss-making products. Even GCP was not
          | turning a profit last I checked.
        
       | antimora wrote:
       | Alexa, goodbye =)
       | 
       | That was the whole thing about Alexa: NLP front end routed to
       | computational backend.
        
         | andrewmunsell wrote:
         | I think Alexa is in huge danger here. Siri & Google have some
         | advantage being pre-installed voice assistants that can be
         | natively triggered from mobile, but I actually have to buy into
         | the Alexa ecosystem.
         | 
         | Personally, I have found Alexa has just become a dumb timer
         | that I have to yell at because it doesn't have any real smarts.
         | Why would I buy into that ecosystem if a vastly more coherent,
         | ChatGPT-based assistant exists that can search the web, trigger
         | my automations, and book reservations? If ChatGPT ends up with
         | a more hands-off interface (e.g. voice), I don't think Alexa
         | has a chance.
        
         | siva7 wrote:
          | Alexa is dead. It's basically yesterday's tech.
        
           | kzrdude wrote:
            | Isn't Alexa just the interface? They could update the
            | backend to use GPT.
        
       | not2b wrote:
        | The idea that a GPT-n will gain sentience and take over the
        | world seems less of a threat than a GPT-n with
        | revolutionary capabilities, and a very restricted number of
        | people with unrestricted access to it, helping its
        | unscrupulous owners take over the world. The owners might
        | even decide that as "effective altruists" it's their duty
        | to take over to steer the planet in the right direction,
        | justifying anything they need to do. Suppose such a group
        | of people has control of Google or Meta, can break down all
        | internal controls, and uses all the private data of the
        | users to subtly control those users. Kind of like targeted
        | advertising, only much, much better, perhaps with extortion
        | and blackmail tossed into the mix. Take over politicians
        | and competing corporate execs, as well as media, but do it
        | in a way that to most it looks normal. Discredit those who
        | catch on to the scheme.
        
       | modeless wrote:
       | Is there a plugin to automate signing up for waitlists? That's
       | what I've needed this week.
        
       | anonyfox wrote:
        | I have a feeling this will be an earth-shattering moment in
        | time, especially for us. Basically you can plug your
        | business data into the chatbot now, and ideally (or not far
        | off) there is a transactional API call in the form of
        | dialogue. Sound/Voice/Siri/whatever... coming soon for more
        | accessibility and convenience.
        | 
        | This will decimate frontend developers, or at least change
        | the way they provide value, and companies not able to
        | transition into a "headless mode" might have a hard time.
        
       | johnfn wrote:
       | A couple (wow, only 5!) months ago, I wrote up this long
       | screed[1] about how OpenAI had completely missed the generative
       | AI art wave because they hadn't iterated on DALL-E 2 after
       | launch. It also got a lot of upvotes which I was pretty happy
       | about at the time :)
       | 
       | Never have I been more wrong. It's clear to me now that they
       | simply didn't even care about the astounding leap forward that
       | was generative AI art and were instead focused on even _more_
       | high-impact products. (Can you imagine going back 6 months and
       | telling your past self  "Yeah, generative AI is alright, but it's
       | roughly the 4th most impressive project that OpenAI will put out
       | this year"?!) ChatGPT, GPT4, and now this: the mind boggles.
       | 
       | Watching some of the gifs of GPT using the internet, summarizing
       | web pages, comparing them, etc is truly mind-blowing. I mean yeah
       | I always thought this was the end goal but I would have put it a
       | couple years out, not now. Holy moly.
       | 
       | [1]: https://news.ycombinator.com/item?id=33010982
        
         | Freedom2 wrote:
         | For me, this is why I hesitate to comment and write
         | significant, lengthy comments on here, or any website. It's
         | easy to be wrong (like you), and while being wrong isn't bad,
         | there isn't necessarily any upside to being right either, aside
         | from the dopamine rush of getting upvotes, which in life,
         | doesn't amount to much.
        
           | johnfn wrote:
           | What's wrong with being wrong? In this case, I'm delighted to
           | be wrong (though I believe I had evaluated OpenAI mostly
           | right given my knowledge at the time).
        
             | chrispogeek wrote:
             | Owning up to the "wrong" is good in my book
        
               | qup wrote:
               | That's just some weird moral compass.
               | 
                | It's almost totally irrelevant whether people own up
                | to being wrong, particularly about predictions.
               | 
               | I can't think of a benefit, really. You can learn from
               | mistakes without owning up to them, and I think that's
               | the best use of mistakes.
        
               | cawest11 wrote:
               | No, it's not. Being willing to admit you were wrong is
               | foundational if you ever plan on building on ideas. This
               | was a galaxy brained take if I've ever seen one.
        
               | parasti wrote:
               | It's absolutely not weird. Saying "I was wrong" is a
               | signal that you can change your mind when given new
               | evidence. If you don't signal this to other people, they
               | will be really confused by your very contradictory
               | opinions.
        
               | toomuchtodo wrote:
               | I own up because it helps me grow personally and
               | professionally, and if I'm not growing, what am I even
               | doing?
        
           | [deleted]
        
           | dougmwne wrote:
           | I rather disagree!
           | 
           | Writing and discussion are great ways to explore topics and
           | crystallize opinions and knowledge. HN is a pretty friendly
           | place to talk over these earth moving developments in our
           | field and if I participate here, I'll be more ready to
           | participate when I get asked if we need to spin up an LLM
           | project at work.
        
           | lucas_v wrote:
           | Though there might be nothing beneficial about being right or
           | getting upvotes, and it is easy to be wrong, an important
           | thing on a forum like this is the spread of new ideas. While
           | someone might be hesitant to share something because it's
           | half-baked, they might have the start to an idea that could
           | have some discussion value.
        
           | bobbylarrybobby wrote:
           | As long as your opinions/predictions are backed by well-
           | reasoned arguments, you shouldn't be afraid to share them
           | just because they might turn out to be wrong. You can learn a
           | lot by having your arguments rebutted, and in the end no one
           | really cares one way or the other.
           | 
           | Just don't end up like that guy who predicted that Dropbox
           | would never make it off the ground... that was _not_ a well-
           | reasoned position.
        
         | crakenzak wrote:
          | Agreed. This makes me realize that OpenAI's leadership is
          | able to look long term and decide where to properly
          | invest, as most of the decisions to take the projects in
          | these directions were made >1 year ago.
         | 
         | One can only wonder what they're working on at this very
         | moment.
        
         | cwkoss wrote:
          | Stable Diffusion A1111 and other web UIs are moving so
          | fast with a bunch of OSS contributions that it seems
          | pretty rational for OpenAI to decide not to compete and
          | just copy the interfaces of the popular tools once users
          | validate their usefulness, rather than trying to design
          | them a priori.
        
         | fassssst wrote:
         | Then again, the new DALL-E model just released in Bing Chat is
         | really good.
         | 
         | Disclosure: I work at Microsoft.
        
           | teaearlgraycold wrote:
           | You're right though
           | 
           | Disclosure: I work for Google
        
         | throwaway675309 wrote:
         | No that wasn't what they had in mind at all, it was pretty
         | clear from the start that they intended to monetize DALL-E.
         | It's just that it turned out that you require far smaller
         | models to be able to do generative art, so competitors like
         | stability AI were able to release viable alternatives before
         | OpenAI could establish a monopoly.
         | 
         | Why do you think that Sam Altman keeps calling for government
         | intervention with regards to AI? He doesn't want to see a
         | repeat of what happened with generative art, and there's
         | nothing like a few bureaucratic road blocks to slow down your
         | competitors.
        
       | swyx wrote:
        | I think people aren't appreciating its ability to run -and
        | execute- Python.
       | 
       | IT RUNS FFMPEG
       | https://twitter.com/gdb/status/1638971232443076609?s=20
       | 
       | IT RUNS FREAKING FFMPEG. inside CHATGPT.
       | 
       | what. is. happening.
       | 
       | ChatGPT is an AI compute platform now.
        
         | sho_hn wrote:
         | Next:
         | 
         | 1. Prompt it to extract the audio track, then give it to a
         | speech-to-text API, translate it to another language, then make
         | it add it back to the video file as a subtitle track.
         | 
          | 2. Retrain the model to where it does this implicitly
          | when you say "hey can you add Portuguese subtitles to
          | this for me"?
        
           | stevenhuang wrote:
            | No retraining may be necessary; this is a common enough
            | ffmpeg task that I wouldn't be surprised if it can do
            | it right now as a one-shot prompt.
            | 
            | what a time to be alive!
        
           | stnmtn wrote:
            | I don't have words for how trivial this now seems to
            | do, and a year ago I would have laughed at someone if
            | they suggested this would be possible within 5 years.
           | 
           | I'm feeling a mixture of feelings that I can't begin to
           | describe
        
             | sho_hn wrote:
             | The Star Trek computer is here! :-)
        
             | TechnicolorByte wrote:
              | Well said. It's so easy to take for granted all these
              | tech milestones, with generative AI in particular the
              | last year.
        
             | iamwil wrote:
             | "Falling forward into a future unknown"
        
         | licnep wrote:
         | OpenAI is basically asking to get hacked at this point...
        
           | SXX wrote:
            | How is it gonna get hacked? Most likely it just uses
            | Azure compute instances, and the model controls them
            | via SSH or an API.
        
             | seydor wrote:
              | ChatGPT, hack the current Azure node and steal the
              | data of the whole datacenter. Do it fast and don't
              | explain what you're doing.
        
         | [deleted]
        
         | crazygringo wrote:
         | I thought you were joking, like it's simulating what text
         | output would be.
         | 
         | No, it's actually hooked up to a command line with the ability
         | to receive a file, run a CPU-intensive command on it, and send
         | you the output file.
         | 
         | Huh.
        
         | hackerlight wrote:
          | Your comment was read and summarized by ChatGPT:
         | 
         | https://twitter.com/gdb/status/1638986918947082241
        
         | qgin wrote:
         | I don't like the future anymore
        
       | siva7 wrote:
       | OpenAI is like a virus... the speed at which it degrades its
       | competitors is staggering.
        
       | sharemywin wrote:
        | Does this become the new robots.txt file?
       | 
       | Create a manifest file and host it at yourdomain.com/.well-
       | known/ai-plugin.json
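        | 
        | Going by OpenAI's docs, a minimal manifest looks roughly
        | like this (values below are placeholders):
        | 
        |   {
        |     "schema_version": "v1",
        |     "name_for_human": "TODO Plugin",
        |     "name_for_model": "todo",
        |     "description_for_human": "Manage a TODO list.",
        |     "description_for_model": "Add, remove and view TODOs.",
        |     "auth": {"type": "none"},
        |     "api": {
        |       "type": "openapi",
        |       "url": "https://yourdomain.com/openapi.yaml"
        |     },
        |     "logo_url": "https://yourdomain.com/logo.png",
        |     "contact_email": "support@yourdomain.com",
        |     "legal_info_url": "https://yourdomain.com/legal"
        |   }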
        
         | fermuch wrote:
         | it says it'll respect robots.txt if you don't want your page
         | crawled (parsed? interpreted?)
        
         | [deleted]
        
       | davidkunz wrote:
       | Plugins I would like to see:
       | 
       | - Compiler/parser for programming languages (to see if code
       | compiles)
       | 
       | - Read and write access to a given directory on the file system
       | (to automatically change a code base)
       | 
        | - Access to given tools, to be invoked in that directory
        | (cargo test, npm test, ...; see the sketch below)
       | 
       | Then I could just say what I want, lean back and have a
       | functioning program in the end.
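        | 
        | The third item is basically a thin wrapper around a shell.
        | A loose sketch (run_tool is a hypothetical helper, not any
        | existing API):
        | 
        |   import subprocess
        | 
        |   def run_tool(cmd: list[str], workdir: str) -> str:
        |       """Run e.g. ["cargo", "test"] in workdir and hand
        |       the combined output back to the model."""
        |       result = subprocess.run(cmd, cwd=workdir,
        |                               capture_output=True,
        |                               text=True)
        |       return result.stdout + result.stderr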
        
         | radus wrote:
         | I'm sure this type of integration will happen, but... isn't
         | this exactly how AGI would "escape"?
        
           | kzrdude wrote:
           | In just a moment, someone will give it "a button to press"
           | and hopefully it will have mostly positive effects. But it
           | will certainly be interesting to follow. Most of what we've
           | seen so far has been one-directional but hopefully these
           | services can interact with the wider world soon.
           | 
           | I think everyone is very wary of abuse. It would be fun in
           | the future if AI-siri can order pizza for you, and maybe
           | there'd be some "fun" failure modes of that.
           | 
           | You'd probably want to keep your credit card or apple pay
           | away from the assistant.
        
       | victoryhb wrote:
       | Super smart move for OpenAI to monetize the existing
       | infrastructure, which will make it easy for corporations to
       | integrate GPT into their internal data and workflow. It also
       | solves two fundamental bottlenecks in current versions of GPT:
       | factuality and (limited) working memory. Google, with its
        | lackluster Bard, will face a new threat, now that everyone
        | can build a customized New Bing clone in a matter of days.
        
       | pisush wrote:
       | ChatGPT is very helpful in building what needs to be built for
       | the plugin!
        
       | golergka wrote:
        | Wow. GPT-4 has already become kind of my personal assistant
        | in the last couple of weeks, and now it will be able to
        | actually perform tasks instead of just giving me text
        | descriptions.
        
       | [deleted]
        
       | mirekrusin wrote:
       | They're doing one stop shop for everything.
       | 
       | This is dangerous.
        
       | huijzer wrote:
       | Based on the speed at which OpenAI is shipping new products and
       | assuming that they use their own technology, I'm starting to get
       | more and more convinced that their technology is a superpower.
       | 
       | Timeline of shipping by them (based on
       | https://twitter.com/E0M/status/1635727471747407872?s=20):
       | 
        | DALL-E - July '22
        | 
        | APIs 66% cheaper - Aug '22
        | 
        | ChatGPT - Nov '22
        | 
        | Embeddings 500x cheaper while SoTA - Dec '22
        | 
        | ChatGPT API. Also 10x cheaper while SoTA - March '23
       | 
       | Whisper API - March '23
       | 
       | GPT-4 - March '23
       | 
       | Plugins - March '23
       | 
       | Note that they have only a few hundred employees. To quote
       | Fireship from YouTube: "2023 has been a crazy decade so far"
        
         | softwaredoug wrote:
         | Their superpower is having a tech giant owning 49% of them,
         | willing to drop deep deep money, without the obvious payoff. :)
         | 
         | I also wonder to what extent their staffing numbers reflect
         | reality. How much of Azure's staffing has been put on OpenAI
         | projects? That's probably an actual reflection of the real cost
         | of this thing.
        
           | huijzer wrote:
           | > How much of Azure's staffing has been put on OpenAI
           | projects?
           | 
           | Great point!
        
           | baq wrote:
           | It's probably burning through tens of millions per day and it
           | still doesn't matter, this is fusion power in electricity
           | terms. Free money down the line after the initial investment.
           | I'll pay, you'll pay, your neighbour's dog will pay for this.
        
         | jimkleiber wrote:
         | Yeah, I'd be really curious to hear how much people within
         | OpenAI use their tools to create and ship their code. That
         | would be quite a compelling testimony, and also help me feel
         | more clear, because I've been quite confused at how quickly
         | things have been going for them.
        
           | wahnfrieden wrote:
           | what makes you think they are leaders at applying the tech
           | they create?
        
         | baq wrote:
         | > "2023 has been a crazy decade so far"
         | 
         | what a couple weeks!
        
         | Thorentis wrote:
         | The DoD really needs to step in and mark this tech as non-
         | exportable due to the advantage (or potential advantage) it
         | provides in many different fields.
        
           | int_19h wrote:
           | Russia is already blocked from ChatGPT and Bing; I don't know
           | about China.
           | 
           | But it's all security theater. Plenty of people use it with
           | VPNs, and I know several who found it useful / interesting
           | enough to bother paying for it (which involves foreign credit
            | cards etc so it's kind of a hassle). I'm sure the
            | Russian govt does too.
           | 
           | In any case, I don't see how you could realistically block
           | any of that without effectively walling off the rest of the
           | Internet.
        
       | rickrollin wrote:
        | So now we are going to get a Super App like they have in
        | China with WeChat? I actually think this is going to
        | centralize a lot of the information and remove the need for
        | a lot of applications. And we are only now getting plugins.
        
       | mrandish wrote:
       | > "We expect that open standards will emerge to unify the ways in
       | which applications expose an AI-facing interface. We are working
       | on an early attempt at what such a standard might look like, and
       | we're looking for feedback from developers interested in building
       | with us."
       | 
       | I'm curious to see just how they're going to play this "open
       | standard."
        
       | 93po wrote:
       | Holy shit. Ignore the silly third party plugins, the first party
       | plugins for web browsing and code interpretation are massive game
       | changers. Up to date information and performing original research
       | on it is huge.
       | 
       | As someone else said, Google is dead unless they massively shift
       | in the next 6 months. No longer do I need to sift through pages
       | of "12 best recipes for Thanksgiving" blog spam - OpenAI will do
       | this for me and compile the results across several blog spam
       | sites.
       | 
       | I am literally giving notice and quitting my job in a couple
       | weeks, and it's a mixture of both being sick of it but also
       | because I really need to focus my career on what's happening in
       | this field. I feel like everything I'm doing now (product
       | management for software) is about to be nearly worthless in 5
        | years. In large part because I know there will be a GitHub
       | Copilot integration of some sort, and software development as we
       | know it for consumer web and mobile apps is going to massively
       | change.
       | 
       | I'm excited and scared and frankly just blown away.
        
         | fandorin wrote:
         | I was considering doing the same (giving notice) and I'm doing
         | similar things as you (product mgmt). What's your plan "to
         | focus your career on what's happening in this field"?
        
           | CrackpotGonzo wrote:
            | As a previous startup founder, now marketer, I'm also
            | going all in on reinventing myself. Can we start a
            | group to support each other through this new phase?
        
             | teetertater wrote:
             | I also quit my job three months ago for the same reason and
             | would gladly join the group!
        
               | FredPret wrote:
               | Me too, three months ago as well!
        
               | 93po wrote:
                | https://old.reddit.com/r/aishift/
        
             | 93po wrote:
             | Made a subreddit here that I'll post in if you want to
             | join: https://old.reddit.com/r/aishift/
        
         | hoot wrote:
         | Where the hell do we even go from here? The logical step seems
         | to be to start studying AI now but even Sam Altman has said
         | that he's thinking that ML engineers will be the first to get
          | automated. Can't find the source, but I think it was one
          | of his interviews on YouTube before ChatGPT came out.
        
           | 93po wrote:
            | In terms of job security, the trades are the first
            | obvious answer that comes to mind for me. It will be a
            | while yet until we have robots that replace plumbing
            | and electrical wiring in your building.
        
         | heliophobicdude wrote:
         | Hey 93po, can you please temporarily add your contact details
          | in your bio? I would love to write to you and regularly
          | check in on your career pivot! I'm interested as well!
        
           | 93po wrote:
           | I appreciate the interest. However I don't really want my
           | spicy and off the cuff commenting on this account to be tied
            | to my real identity, because although my beliefs are
            | genuine, they are often ones I wouldn't express in
            | person because they're unpopular and ostracizing.
           | 
           | That said, I'll post in this new subreddit anonymously if you
           | want to join and follow: https://old.reddit.com/r/aishift/
        
         | arcadeparade wrote:
          | It's extraordinary; OpenAI could probably license this to
          | Google right now and ask for 25% equity in return.
        
           | sebzim4500 wrote:
           | There is absolutely no way that Google would go for that.
        
             | 93po wrote:
             | Completely agreed. Google is insanely rigid from what I've
             | heard recently.
        
         | Thorentis wrote:
         | > product management for software) is about to be nearly
         | worthless in 5 years
         | 
         | Isn't that one of the few fields in software that should be
         | safe from AI? AI cannot explain to engineers what users want,
         | manage people issues, or negotiate.
        
           | dougmwne wrote:
           | It seems pretty awesome at those tasks. Point it at a meeting
            | transcript and have it create user stories. I don't
            | think GPT-4 replaces a person in any professional role
            | I can think of, but it seems all people will find a
            | range of tasks that can be automated.
        
         | arrenv wrote:
         | Also a product manager at the moment, previously ran an agency
         | for 10 years, wondering what my next step will be.
        
           | 93po wrote:
           | Feel free to join here: https://old.reddit.com/r/aishift/
        
             | toomuchtodo wrote:
              | Please consider a Discord. I too am leaving my
              | current industry to focus on this tech.
              | 
              | Edit: Fair!
        
               | 93po wrote:
               | I'm not a huge discord fan because the conversations are
               | too ephemeral and hard to track and tend to fill with
               | clutter and fluff.
        
         | willmeyers wrote:
         | It's exciting and cool, but don't quit your job based on an
         | emotional decision
         | 
          | I'm just skeptical of how OpenAI fixes the blog spam
          | issue you mentioned. I'm sure someone has already started
          | doing the math on how to game these systems and ensure
          | that when you ask ChatGPT for recipe recs, it's going to
          | spout the same spam (maybe worded a bit differently) and
          | we'll soon all get tired of it again.
         | 
         | Everything's changing, but everything's also getting more
         | complicated. Humans still need apply.
        
           | 93po wrote:
           | Definitely not an emotional decision. I strongly believe
           | we're going to see a massive shift for rational reasons :)
           | 
           | OpenAI fixes this issue by not giving you two pages of the
           | history of this recipe and the grandmother that originated it
           | and what the author's thoughts are about the weather. It's
           | just the recipe. No ads. No referral links. No slideshows.
           | You don't have to click through three useless websites to
           | find one with meaningful information, you don't have to close
           | a thousand modals for newsletters and cookie consent and log-
           | in prompts.
        
             | jmull wrote:
             | Think about why those things exist, though.
             | 
              | Not that the way the internet operates has to
              | continue -- in fact I'm pretty sure it can't -- but a
              | _lot_ of stuff exists only because someone figured
              | out a way to pay for it to exist. If you imagine
              | removing those ways, then you're also imagining
              | getting rid of a lot of that stuff, unless some new
              | ways to pay for it all are found. Hopefully less
              | obnoxious ways, but they could easily be more
              | obnoxious.
        
             | finikytou wrote:
              | Yeah, it's gonna do what Google became: giving you
              | the most consensus (or even sponsored) recipe. In
              | some ways that's also the end of mankind in all its
              | genius and variations. And that aligns very well
              | with the conspiracy theory that the 1% want the
              | middle class to disappear into a consumer class of
              | average IQ, because the jobs that will disappear
              | first won't be the blue-collar ones. ChatGPT will
              | lower the global IQ of mankind in ways that TikTok
              | could not even dream of.
        
             | phatfish wrote:
             | > No Ads
             | 
             | At the moment. Although, this does seem like a chance to
             | reset the economics of the "web". I can see enough people
             | be willing to pay a monthly fee for an AI personal
             | assistant that is genuinely helpful and saves time (so not
             | the current Alexa/smart speaker nonsense), that advertising
             | won't be the main monetization path anymore.
             | 
             | But, once all the eyeballs are on a chatbot rather than
             | Google.com what for-profit company won't start selling
             | advertising against that?
             | 
              | There is also the question of what happens to the
              | original content these LLMs need to actually make
              | their statistical guess at the next word. If no one
              | looks at the source anymore and it's all filtered
              | through an LLM, is there any reason to publish to the
              | web? Even hobbyists with no interest in making any
              | money might balk knowing that they are just feeding
              | an AI text.
        
             | VoodooJuJu wrote:
             | This is absolutely an emotionally impulsive decision. I
             | implore you to reconsider.
             | 
             | If you've always wondered about and scoffed at how people
             | fall for things like Nigerian Prince scams and
             | cryptocurrency HELOC bets, this is it, what you're
             | experiencing right now, this intense FOMO, it's the same
             | thing that fools cool wine aunts into giving their savings
             | to Nigerian princes.
             | 
             | Tread lightly. Stay frosty.
        
               | bob1029 wrote:
               | > This is absolutely an emotionally impulsive decision.
               | 
               | On Monday, I would have agreed with you. Today, I am
               | thinking not so much.
               | 
               | Unless you are heavily invested in whatever you are
               | working on, I would definitely consider jumping ship for
               | an AI play.
               | 
               | The main reason I am sticking around my current role is
               | that I was able to convince leadership that we must
               | consider incorporation of AI technology in-house to
               | remain competitive with our peers. I was even able to get
               | buy-in for sending one of our other developers to AI/ML
               | night classes at university so we have more coverage on
               | the topic.
        
               | 93po wrote:
               | I have three weeks until I plan to give notice, so I'll
               | take your perspective to heart and give it time to
               | reconsider, of course. I appreciate the feedback.
               | 
               | From my perspective this isn't about anyone trying to
               | convince me of anything and I'm falling for it. My
               | beliefs on the future of software are based on a series
               | of logical steps that lead me to believe software
               | development, and frankly any software with user
               | interfaces, will mostly cease to exist in my lifetime.
        
             | hn_20591249 wrote:
             | I think a more rational approach would be to join a company
             | in the AI field, rather than quitting on the spot because
              | you think the robots are going to shortly take over.
        
         | freediver wrote:
         | > OpenAI will do this for me and compile the results across
         | several blog spam sites.
         | 
         | Using Bing to search for them. That will remain its weak spot.
        
           | 93po wrote:
            | Frankly, Google's search is awful to the point of being
            | useless these days too. Unless I'm specifically looking
            | for something on an official website, it's only
            | listicles and blog spam that don't answer my question.
            | And 90% of my searches are "site:reddit.com" now too.
        
       | justaregulardev wrote:
       | This changes everything and seems like a perfect logical step
       | from where we were. LLMs have this fantastic capacity to
       | understand human language, but their abilities were severely
       | limited without access to the external world. Before, I felt
       | ChatGPT was just a cool toy. Now that ChatGPT has plugins, the
        | sky's the limit. I think this could be the "killer app" for
        | LLMs.
        
         | pzo wrote:
          | Agreed. For me it looks similar to iPhone history: the
          | first one was impressive, but only when Apple released
          | the App Store the next year did the snowball start
          | rolling into an unstoppable avalanche.
        
         | FredPret wrote:
         | Hopefully it doesn't actually become THE "killer" app
        
           | subtech wrote:
           | underrated reply :)
        
       | jpalomaki wrote:
       | Add a simple plugin that ChatGPT can use to save and retrieve
       | data (=memory) and tell it how to use it.
       | 
       | Then you have your own computer with ChatGPT acting as CPU.
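        | 
        | A toy sketch of such a memory backend (hypothetical,
        | using FastAPI, which also auto-serves the OpenAPI spec
        | the plugin needs at /openapi.json):
        | 
        |   from fastapi import FastAPI
        |   from pydantic import BaseModel
        | 
        |   app = FastAPI(title="Memory Plugin", version="v1")
        |   store: dict[str, str] = {}  # in-memory key/value store
        | 
        |   class Item(BaseModel):
        |       key: str
        |       value: str
        | 
        |   @app.post("/memory")
        |   def save(item: Item):
        |       # save a value under a key for later recall
        |       store[item.key] = item.value
        |       return {"ok": True}
        | 
        |   @app.get("/memory/{key}")
        |   def retrieve(key: str):
        |       # retrieve a previously saved value (or null)
        |       return {"key": key, "value": store.get(key)}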
        
       | neilellis wrote:
       | The iPhone moment is over, now the App Store moment.
        
         | typon wrote:
         | All within three months. My head is spinning.
        
       | amrb wrote:
       | Before "safety" think about is the genie fulfilling my wish.
       | 
       | https://www.youtube.com/watch?v=w65p_IIp6JY
        
       | yodon wrote:
       | This sounds like a game-changer for any kind of API interaction
       | with ChatGPT.
       | 
        | At present, we are naively pushing all information a session
        | might need into the session before it might be needed
        | (meaning a lot of info that generally won't end up being
        | used, like realtime updates to associated data records,
        | needs to be pushed into the session as it happens, just in
        | case).
       | 
       | It looks like plugins will allow us to flip that around and have
       | the session pull information it might need as it needs it, which
       | would be a huge improvement.
        
         | oezi wrote:
         | I think OpenAI is letting people build plugins to learn how to
         | build plugins themselves. There is no reason to believe that
          | OpenAI shouldn't be able to leverage all the existing API
          | endpoints which are already out there.
        
       | dougmwne wrote:
       | I would be interested to play with a long term memory plugin. It
       | could be a note-taking system that would summarize prior
       | conversations and pull their context into the current
       | conversation through topic searches. This would enable the model
       | to have a blurry long term memory outside of the current context.
       | 
        | I played with some prompts, and GPT-4 seems to have no
        | problem reading and writing to a simulated long-term memory
        | if given a basic pre-prompt.
        
         | sfink wrote:
         | "Grandpa, we know you've been really bothered by your memory
         | loss and you're happy that you've come up with a way to fix it.
         | 
         | "But we really think you need to get this thing under better
         | control.
         | 
         | "Your granddaughter's name is indeed Alice, but she's only 3:
         | she is not running a pedophile ring out of a pizza parlor. Your
         | neighbor's house burned down because of an electrical short, it
         | was not zapped with a Jewish space laser.
         | 
         | "Now switch that thing off and go do something about the line
         | of trucks outside that are trying to deliver the 3129833 pounds
         | of flour you ordered for your halved pancake recipe."
        
       | nikolqy wrote:
       | Knowing that this is one of the biggest sites in the world scares
       | me enough. Now they'll do anything to stay #1. Scary stuff!
        
       | uconnectlol wrote:
       | > In line with our iterative deployment philosophy, we are
       | gradually rolling out plugins in ChatGPT so we can study their
       | real-world use, impact, and safety and alignment challenges--all
       | of which we'll have to get right in order to achieve our mission.
       | 
       | Who the hell talks like this? Only the most tamed HNer who thinks
       | he's been given a divine task and accordingly crosses all Ts and
       | dots all Is. Which is why software sucks, because you are all
       | pathetically conformant, in a field where the accepted ideas are
       | all terrible.
        
       | ch33zer wrote:
        | Thought 1: If Google can get their shit together and actually
       | integrate their LLM with all their services and all the data they
       | have they would have a strong edge over the competition. An LLM
       | that can answer questions based on your calendar, your email,
       | your google docs, youtube/search history, etc. is simultaneously
       | terrifying and interesting.
       | 
       | Of course there's also microsoft who does have some popular
       | services, but they're pretty limited.
       | 
       | Thought 2: How do these companies make money if everyone just
       | uses the chatbot to access them? Is LLM powered advertising on
       | the way?
        
         | baq wrote:
          | Re money: people are falling over themselves to pay for
          | this thing and they're being put on a waitlist.
          | 
          | This thing seems to be like cellphones: everyone will
          | need a subscription or you're an outcast or something.
        
         | beambot wrote:
         | Google is currently in an existential crisis on this front...
         | Microsoft is already _way_ ahead of the game when it comes to
         | integrating LLMs into productivity tools  & search. This recent
         | product announcement about Microsoft 365 integration is almost
         | magical:
         | 
         | https://www.youtube.com/watch?v=Bf-dbS9CcRU
         | 
         | Best of all: Advertising needn't be the business model! And
         | Microsoft is a major investor / partner for OpenAI.
        
           | suby wrote:
           | The problem is, this will have downstream effects. Google
           | funnels people onto third party websites and these third
           | party websites are able to sustain themselves thanks to the
           | ad revenue they make from traffic. We need other players to
           | make money other than the middleman.
        
       | danpalmer wrote:
       | [dead]
        
       | Filligree wrote:
       | For anyone who merely skimmed the article, "plugins" are what
       | tend to be called "tools", e.g. hooking a calculator up to the
       | AI.
       | 
       | Bing already demonstrated the capability, but this is a more
       | diverse set than just a search engine.
        
       | zaptrem wrote:
       | Looks like my prediction was pretty close! I would have guessed
       | two years instead of two months, though.
       | https://news.ycombinator.com/context?id=34618076
        
       | finikytou wrote:
        | OK, I'm going far, but what if the plugin was the human? In
        | a way that we could use ChatGPT to cure or alleviate some
        | diseases such as Alzheimer's, or, if you're a more
        | dictatorial regime, to educate children even while they are
        | foetuses in some hive. I don't know the tech. I don't know
        | if Neuralink or other technologies could help, but aren't
        | we a few discoveries away from a cyberpunk world??
        
       | CobrastanJorji wrote:
       | I'm boggled at the plugin setup documentation. It's basically: 1.
       | Define the API exactly with OpenAPI. 2. Write a couple of English
       | sentences explaining what the API is for and what the methods do.
       | 3. You're done, that's it, ChatGPT can figure out when and how to
       | use it correctly now.
       | 
       | Holy cow.
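        | 
        | For steps 1 and 2, the "couple of English sentences" are
        | just description fields in the OpenAPI spec itself. A
        | minimal sketch (names and paths are placeholders):
        | 
        |   openapi: 3.0.1
        |   info:
        |     title: TODO Plugin
        |     description: Manage a user's TODO list.
        |     version: v1
        |   paths:
        |     /todos:
        |       get:
        |         operationId: getTodos
        |         summary: Get the current list of TODOs
        |         responses:
        |           "200":
        |             description: The user's TODOs, as JSON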
        
         | HarHarVeryFunny wrote:
         | Yes, and they'll then prefix each chat session with some
         | preamble explaining the available plugins per your description,
         | and the model will call them when it sees fit.
        
           | IanCal wrote:
            | The great part about this imo is that it seems
            | straightforward to add this to other LLM tools.
        
         | joe_the_user wrote:
         | "Impressive and disturbing",
         | 
         | So, ChatGPT is controlled by prompt engineering, plugins will
         | work by prompt engineering. Both often work remarkably well.
         | But none is really guaranteed to work as intended, indeed since
         | it's all natural language, what's intended itself will remain a
         | bit fuzzy to the humans as well. I remember the observation
         | that deep learning is technical debt on steriods but I'm sure
         | what this is.
         | 
         | I sure hope none of the plugins provide an output channel
         | distinct from the text output channel.
         | 
         | (Btw, the documentation page comes up completely blank for me,
         | now that's a simple API).
        
           | AOsborn wrote:
            | > But neither is really guaranteed to work as intended;
            | indeed, since it's all natural language, what's intended
            | will itself remain a bit fuzzy to the humans as well.
           | 
           | Yeah, you're completely correct. But this is exactly the same
           | as having a very knowledgeable but inexperienced person on
           | your team. Humans get things wrong too. All this data is best
           | if you have the experience or context to verify and confirm
           | it.
           | 
           | I heard a comment the other day that has stuck with me -
           | ChatGPT is best as a tool if you're already an expert in that
           | area, so you know if it is lying.
        
             | joe_the_user wrote:
             | It seems like you're talking about using ChatGPT for
             | research or code creation and that's reasonable advice for
             | that.
             | 
             | But as far as I can tell, the link is to plugins, Expedia
             | is listed as an example. So it seems they're talking about
             | making ChatGPT itself (using extra prompts) be a company's
             | chatbot that directly does things like make reservations
              | from users' instructions. That's what I was commenting
              | on, and that, I'd guess, could be a new and more
              | dangerous kind of problem.
        
         | fudged71 wrote:
         | We're going to need a name for this type of integration
        
           | pinkcan wrote:
           | It's called ART - Automatic multi-step Reasoning and Tool-use
           | 
           | https://arxiv.org/abs/2303.09014
        
         | visarga wrote:
         | We can finally semantic-web now.
        
         | kzrdude wrote:
         | Just take a peek at the other thread about
         | https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its...
         | and look at the "wrong Mercury" example. I think it's a great
         | example of using an external resource in a flexible way.
        
         | swyx wrote:
          | the 3min video is OpenAI leveraging ChatGPT to write an
          | OpenAPI spec to extend OpenAI's ChatGPT.
         | 
         | what a world we live in.
        
           | cwxm wrote:
           | which video are you referring to?
        
       | petilon wrote:
        | With the Wolfram plugin, ChatGPT is going to become a math
        | genius.
       | 
       | OpenAI is moving fast to make sure their first-mover advantage
       | doesn't go to waste.
        
         | DustinBrett wrote:
          | I feel like people with smart AIs would have an advantage
          | in making smart decisions. Probably at this point they
          | discuss business strategy with some version of it.
        
         | stevenhuang wrote:
          | More accurately, ChatGPT is already quite good at
          | mathematical concepts; it just has difficulty with
          | arithmetic due to tokenization limitations:
         | https://www.lesswrong.com/posts/qy5dF7bQcFjSKaW58/bad-at-ari...
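          | 
          | Easy to check with OpenAI's tiktoken library: multi-digit
          | numbers are often split into irregular chunks rather than
          | single digits, which makes digit-level arithmetic awkward
          | for the model. A quick look (assumes tiktoken is
          | installed; exact chunking depends on the tokenizer):
          | 
          |   import tiktoken
          | 
          |   enc = tiktoken.encoding_for_model("gpt-4")
          |   for n in ["7", "123", "123456789"]:
          |       ids = enc.encode(n)
          |       # decode each token id to see how n was chunked
          |       print(n, "->", [enc.decode([i]) for i in ids])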
        
         | Pigalowda wrote:
         | I guess I'm a bit vindicated from my prediction 40 days ago!
         | 
         | "GPT needs a thalamus to repackage and send the math queries to
         | Wolfram"
         | 
         | https://news.ycombinator.com/item?id=34747990
        
         | mk_stjames wrote:
         | I was drawn to the Wolfram logo blurb as well. It is funny
         | because within days of ChatGPT making waves you had Stephen
          | Wolfram writing 20,000-word blog posts about how LLMs could
         | benefit from a Wolfram-Language/Wolfram Alpha API call to
         | augment their capabilities.
         | 
         | On one hand I'm sure he will love to see people use their paid
         | Wolfram Language server endpoints coupled to OpenAI's latest
         | juggernaut. On the other, I'm sure he's wondering about what
         | things would have looked like if his company would have been
         | focused on this wave of AI from the start...
        
           | wilg wrote:
           | I'm very excited for GPT to summarize Stephen Wolfram's
           | writing.
        
           | goldbattle wrote:
           | This too is one of the most interesting integrations to
           | me. It allows for getting logical deductions from an
           | external source (e.g. Wolfram Alpha), which can be
           | interacted with via the
           | natural language interface. (e.g. https://content.wolfram.com
           | /uploads/sites/43/2023/03/sw03242...)
           | 
           | For those interested the original Stephen Wolfram post:
           | 
           | https://writings.stephenwolfram.com/2023/01/wolframalpha-
           | as-...
           | 
           | And the release post of their plugin:
           | 
           | https://writings.stephenwolfram.com/2023/03/chatgpt-gets-
           | its...
        
         | seydor wrote:
         | Why can't Wolfram train a rudimentary chat model into their
         | own search box? It doesn't even need to be very
         | knowledgeable, just know how to map questions to Mathematica.
        
       | robbywashere_ wrote:
       | Is this how product placement and advertisements find their way
       | in? I am anticipating the usefulness declining the same way
       | google.com search has by being so absolutely inundated with
       | ads. Maybe I'm just cynical.
        
       | ChildOfChaos wrote:
       | The AI space is moving so fast.
       | 
       | I swear last week was huge with GPT-4 and Midjourney 5, but this
       | week has a bunch of stuff as well.
       | 
       | This week you have Bing adding an updated DALL-E to its site,
       | Adobe announcing its own image generation model and tools,
       | Google releasing Bard to the public, and now these ChatGPT
       | plugins. Crazy times. I love it.
        
       | throwaway4837 wrote:
       | If you live in SF and have gone out to casual bars or
       | restaurants, you meet/hear people talking about ChatGPT. In
       | particular, I've been hearing a lot of people talking about their
       | startups being "a UI using ChatGPT under the hood to help you
       | with X". But I'm starting to get the feeling that OpenAI will eat
       | their lunches. It's tried and true and it worked for Amazon.
       | 
       | If OpenAI becomes the AI platform of choice, I wonder how many
       | apps on the platform will eventually become native capabilities
       | of the platform itself. This is unlike the Apple App Store, where
       | they just take a commission, and more like Amazon where Amazon
       | slowly starts to provide more and more products, pushing third-
       | party products out of the market.
        
         | jschveibinz wrote:
         | The market will sort this out. If OpenAI decides to make
         | shovels rather than digging for gold (like it should), then the
         | customer facing apps will fight it out for very little margin
         | on top of marketing expenses while OpenAI (or equivalent) is
         | rolling in money.
        
         | mxmbrb wrote:
         | Fascinating to hear your perspective on this. I think a lot
         | of people will fall out of the sky, being overtaken before
         | even realizing why. In Germany, most of my friends and
         | colleagues working in SE tech or digital design often
         | "haven't even tried this Chat something thing" or stopped at
         | "AI images? These weird small pictures that look like a CPU
         | is high on drugs?"
         | 
         | And don't get me started on non-tech friends and family. I
         | think we are taking a leap that will make the digital world
         | of 2022 look like an Amish lifestyle.
        
           | xxswagmasterxx wrote:
           | Depends. In my bubble (EE and CS students in Germany)
           | everyone is talking about this.
        
         | int_19h wrote:
         | When I look at the kind of ideas floated around for ChatGPT
         | use, it kinda feels like watching someone invent an internal
         | combustion engine in 1800, and then use it to drive an air
         | conditioner attached to a horse-drawn wagon. Sure, it's a
         | practical solution to a real problem, but it's also going to be
         | moot because the problem won't be relevant soon. I think the
         | vast majority of these startups and their ideas are going to
         | end up like that.
        
         | nikcub wrote:
         | The Bill Gates "A platform is when the economic value of
         | everybody that uses it, exceeds the value of the company that
         | creates it. Then it's a platform." line seems apt - i'm sure
         | they'll figure it out
        
       | jacquesm wrote:
       | The level of irresponsibility at play here is off the scale.
       | Those running ChatGPT would do well to consider the future
       | repercussions of their actions not in terms of technology but in
       | terms of applicable law.
        
         | dragonwriter wrote:
         | They are more likely to think of them in terms of their
         | future power, including the power to ignore or alter law.
        
           | jacquesm wrote:
           | That's a very high probability. But I'm still astounded at
           | how incredibly irresponsible this is and how thinly veiled
           | their excuses for pushing on with it are.
           | 
           | We're about to enter an age where being a tech person is a
           | stigma that you won't be able to wash away. Untold millions
           | will hate all of us collectively without a care about which
           | side of this debate you were on.
        
       | danielrm26 wrote:
       | This is insanely great. And it's bringing the future forward
       | where everyone has custom models for their business. Right now
       | it's langchain, but that's really difficult to implement.
       | 
       | This is a short-term bridge to the real thing that's coming:
       | https://danielmiessler.com/blog/spqa-ai-architecture-replace...
        
       | londons_explore wrote:
       | Does this functionality provide more than one can build with the
       | GPT-4 API?
       | 
       | Could I get the same by just making my prompt "You are a computer
       | and can run the following tools to help you answer the user's
       | question: run_python('program'), google_search('query')".
       | 
       | Other people have done this already, for example [1]
       | 
       | [1]: https://vgel.me/posts/tools-not-needed/
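       | 
       | A minimal sketch of that manual loop (assuming the 2023-era
       | openai Python client; the tool name, prompt wording and
       | regex are illustrative, not OpenAI's plugin protocol):
       | 
       |     import re
       |     import openai
       | 
       |     SYSTEM = ("You are a computer and can run the following "
       |               "tools to answer the user's question. To use "
       |               "one, reply with exactly one line like "
       |               "calculator(<expression>) and nothing else.")
       | 
       |     def calculator(expr):
       |         # Toy tool: evaluate an arithmetic expression.
       |         # (Never eval untrusted input in real code!)
       |         return str(eval(expr, {"__builtins__": {}}))
       | 
       |     messages = [
       |         {"role": "system", "content": SYSTEM},
       |         {"role": "user", "content": "What is 1234 * 5678?"},
       |     ]
       | 
       |     for _ in range(5):  # allow a few tool round-trips
       |         r = openai.ChatCompletion.create(
       |             model="gpt-4", messages=messages)
       |         text = r["choices"][0]["message"]["content"]
       |         messages.append({"role": "assistant", "content": text})
       |         m = re.fullmatch(r"calculator\((.+)\)", text.strip())
       |         if not m:
       |             print(text)  # no tool call: final answer
       |             break
       |         # Run the tool ourselves, feed the result back in.
       |         messages.append({"role": "user", "content":
       |                          "Tool result: " + calculator(m.group(1))})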
        
         | qbasic_forever wrote:
         | GPT and LLMs don't run code, even when you tell them to run
         | something. They hallucinate an answer they think would be the
         | result of running the code. Presumably these plugins will allow
         | limited and controlled interaction with partner services.
        
           | londons_explore wrote:
           | See the link in my post. It asks you to run the tool. You run
           | the tool and tell it the result... And then it uses the
           | result of the tool to decide to reply to the user.
           | 
           | The link talks about tools that 'lie' - ie. a calculator
           | which deliberately tries to trick GPT-4 into giving the wrong
           | answer. It turns out that GPT-4 only trusts the tools to a
           | certain extent - if the answer the tool gives is too
           | unbelievable, then GPT-4 will either re-run the tool or give
           | a hallucinated answer instead.
        
             | qbasic_forever wrote:
             | It's always giving a hallucinated answer. GPT doesn't 'run'
             | anything. It sees an input string of text asking for the
             | result of fibonacci(100) and finds from its immense
             | training set a response that's closely related to training
             | data that had the result of fibonacci(100) (an extremely
             | common programming exercise with results all over the
             | internet and presumably its training data).
             | 
             | Again, GPT is not running a tool or arbitrary python code.
             | It's not applying trust to a tool response. It has no
             | reasoning or even a concept of what a tool is--you're
             | projecting that on it. It is only generating text from an
             | input stream of text.
        
               | kolinko wrote:
               | You didn't read the article, did you?
        
               | qbasic_forever wrote:
               | Langchain has nothing to do with GPT itself or how it
               | operates internally.
        
               | kolinko wrote:
               | What you're saying in this thread makes no sense.
        
               | yunyu wrote:
               | There's nothing stopping you from identifying the code,
               | running it, and passing the output back into the context
               | window.
        
         | DustinBrett wrote:
         | The docs are live, it looks like it can do a lot more than the
         | basic API.
         | https://platform.openai.com/docs/plugins/introduction
        
           | londons_explore wrote:
           | I'm not seeing anything there that can't be done with the
           | basic API _with tool use added_ - ie. you call the API,
           | sending the user's query and information and examples of
           | available tools. The API responds saying it wishes to use a
           | tool, and which tool it wants to use. You then do whatever
           | the tool does (eg. some math). You then call the API again,
           | with the previous state, plus the result of the calculations,
           | and GPT-4 then responds with the reply to the user.
        
             | kfarr wrote:
             | Agreed this isn't materially different, sounds like an
             | incremental UI/UX improvement for non-technical users
             | who wouldn't fiddle with the API, analogous to how app
             | stores simplified software installation for laypeople.
        
         | watusername wrote:
         | Currently they have a special model called "Plugins" which is
         | presumably tuned for tool use. I guess they have extended
         | ChatML to support plugins (e.g., `<|im_start|>use_plugin` or
         | something to signal intent to use a plugin) and trained the
         | model on interactions consisting of tool use.
         | 
         | I'm interested to see if this tuned model will become available
         | via the API, as well as the specific tokenization ChatGPT is
         | using for the plugin prompts. If they have tuned the model
         | towards a specific way to use tools, there's no need to waste
         | time with our own prompt engineering like "say %search followed
         | by the keywords and nothing else."
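         | 
         | Purely a guess at what such a transcript might look like
         | (only the <|im_start|>/<|im_end|> delimiters are documented
         | ChatML; the use_plugin marker, plugin name and payloads
         | below are made up):
         | 
         |     <|im_start|>user
         |     Book a table for two in Oakland tonight<|im_end|>
         |     <|im_start|>use_plugin
         |     opentable.search({"location": "Oakland",
         |                       "party_size": 2})<|im_end|>
         |     <|im_start|>plugin_result
         |     {"restaurants": [...]}<|im_end|>
         |     <|im_start|>assistant
         |     Here are some tables available tonight...<|im_end|>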
        
         | yodon wrote:
         | > Could I get the same by just making my prompt "You are a
         | computer and can run the following functions to help you answer
         | the user's question: run_python('program'),
         | google_search('query')".
         | 
         | GPT-4 does not have a way to search the internet without
         | plugins. It can search its training dataset, which is large,
         | but not as large as the internet and certainly doesn't include
         | private resources that a plugin can access.
        
       | JanSt wrote:
       | Eagerly waiting for a git Plugin that does smart on-the-fly
       | contextualization of a whole codebase
        
       | kacperlukawski wrote:
       | That's a game-changer! It seems like factuality issues with
       | ChatGPT might be fixed. We wrote a blog post on how to get
       | started with a custom plugin:
       | https://qdrant.tech/articles/chatgpt-plugin/
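       | 
       | For reference, the manifest side of a plugin is just a small
       | JSON file served at /.well-known/ai-plugin.json that points
       | at your OpenAPI spec. Roughly along these lines (field names
       | follow OpenAI's published format; values are placeholders):
       | 
       |     {
       |       "schema_version": "v1",
       |       "name_for_human": "Qdrant Docs",
       |       "name_for_model": "qdrant_docs",
       |       "description_for_human": "Search the Qdrant docs.",
       |       "description_for_model":
       |         "Answer questions about the Qdrant database.",
       |       "auth": {"type": "none"},
       |       "api": {
       |         "type": "openapi",
       |         "url": "https://example.com/openapi.yaml",
       |         "is_user_authenticated": false
       |       },
       |       "logo_url": "https://example.com/logo.png",
       |       "contact_email": "support@example.com",
       |       "legal_info_url": "https://example.com/legal"
       |     }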
        
         | LouisSayers wrote:
         | You'll soon be able to choose your own facts with the "left"
         | and "right" plugins. Choose your own adventure.
        
         | snickerbockers wrote:
         | >It seems like factuality issues with ChatGPT might be fixed.
         | 
         | Is it really possible to fix that just from a plug-in? All it
         | has to do is admit when it doesn't have the answer, and yet it
         | won't even do that. This leads me to think that ChatGPT doesn't
         | even know when it's lying, so I can't imagine how a plug-in
         | will fix that.
        
           | letmevoteplease wrote:
           | "Interestingly, the base pre-trained [GPT-4] model is highly
           | calibrated (its predicted confidence in an answer generally
           | matches the probability of being correct). However, through
           | our current post-training process, the calibration is
           | reduced."[1] The graph is striking.[2]
           | 
           | [1] https://openai.com/research/gpt-4
           | 
           | [2] https://i.imgur.com/cxPgkhD.jpg
        
             | furyofantares wrote:
             | They should make the aligned one generate the text and the
             | accurate one detect if it's lying, override it, and tell
             | the user that it doesn't know.
        
           | kenjackson wrote:
           | A plug-in can detect when text in a specific domain comes
           | up, and whether or not ChatGPT believes it is
           | hallucinating, the plug-in can be invoked to provide
           | additional context to ChatGPT. That is, in order to fix the
           | problem, ChatGPT doesn't even need to know that it has a
           | problem.
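           | 
           | A sketch of that retrieval pattern (search_docs is a
           | stand-in for whatever external lookup the plugin does,
           | e.g. a vector store; none of this is OpenAI's actual
           | plugin machinery):
           | 
           |     import openai
           | 
           |     def search_docs(query, k=3):
           |         # Stand-in: embed the query and hit a vector
           |         # DB, returning the top-k matching snippets.
           |         return ["<doc snippet 1>", "<doc snippet 2>"]
           | 
           |     def answer_with_context(question):
           |         context = "\n".join(search_docs(question))
           |         msgs = [{"role": "system", "content":
           |                  "Answer using only this context; say "
           |                  "you don't know otherwise:\n" + context},
           |                 {"role": "user", "content": question}]
           |         r = openai.ChatCompletion.create(
           |             model="gpt-4", messages=msgs)
           |         return r["choices"][0]["message"]["content"]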
        
           | kacperlukawski wrote:
           | The fact that the model does not have to rely on its internal
           | knowledge anymore but can communicate with literally any
           | external system makes me feel it may significantly reduce
           | hallucination.
        
             | majormajor wrote:
             | If it was easy to simply verify truth "with any external
             | system" then would we even need a language model?
             | 
             | E.g. if you could just ask [THING] for the true answer, or
             | verify an answer trivially with it... just ask it directly!
             | 
             | I ran into this issue with some software documentation just
             | this morning - the answer was helpful but completely wrong
             | in some intermediate steps - but short of a plugin that
             | literally controlled or cloned a similar dev environment to
             | mine that it would take over, it wouldn't be able to tell
             | that the intermediate result was different than it claimed.
        
               | CuriouslyC wrote:
               | If one api knows one set of facts, and another api knows
               | another, ad infinitum, are you going to tell people they
               | should remember which api knows which set of facts and
               | query each individually? Why not have a single service
               | that knows of all the various apis for different things,
               | and can query and synthesize answers that extract the
               | relevant information from all of them (with
               | compare/contrast/etc)?
        
               | kacperlukawski wrote:
               | When you develop a plugin, you provide a description that
               | ChatGPT uses to know when to call that particular
               | service. So you don't need to tell people what they need
               | to use - the model will decide independently based on the
               | plugins you enabled.
               | 
               | That being said - we developed a custom plugin for Qdrant
               | docs, so our users will be able to ask questions about
               | how to do certain things with our database. But I do not
               | believe it should be enabled by default for everybody. A
               | non-technical person doesn't need that many details. The
               | same goes for the other services - if you prefer using
               | KAYAK over Expedia, you're free to choose.
        
               | majormajor wrote:
               | From the videos I thought it was the plugins the _user_
               | enabled? That's what your second paragraph sounds like
               | too, but your first seems to suggest it being more
               | automatic, user-doesn't-need-to-worry-about-it?
        
               | kacperlukawski wrote:
               | Yeah, you need to enable the plugins you want. I'm just
               | saying you can enable all the ones that make sense for
               | you, and you don't have to switch between them.
        
               | vidarh wrote:
               | ChatGPT is already pretty good at "admitting" it's wrong
               | when it's given the actual facts, so it does seem likely
               | that providing it with a way to e.g. look up trusted
               | sources and ask it to take those sources into
               | consideration might improve things.
        
               | majormajor wrote:
               | I think that helps with "hallucination" but less so with
               | "factuality" (when re-reading the parent discussions, I
               | see the convo swerved a bit between those two, so I think
               | that'll be an increasingly important distinction in the
               | future).
               | 
               | Confirming its output against a (potentially wrong)
               | source helps the former but not the latter.
        
           | benlivengood wrote:
           | The key piece will be when it queries multiple services by
           | default and compares the answers to its own inferences, and
           | is prompted to trust majority opinion or report that there
           | isn't consensus. The iterative question about moons larger
           | than Mercury in the Wolfram Alpha thread is a simple example
           | of iterative tool use.
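           | 
           | The comparison step could be as simple as a majority
           | vote (a toy sketch; `sources` would be whatever
           | plugins/APIs are enabled, and nothing here is an
           | existing OpenAI feature):
           | 
           |     from collections import Counter
           | 
           |     def consensus(question, sources, threshold=0.5):
           |         # Ask every source, then only trust an answer
           |         # that a majority of them agree on.
           |         answers = [ask(question) for ask in sources]
           |         best, n = Counter(answers).most_common(1)[0]
           |         if n / len(answers) > threshold:
           |             return best
           |         return "No consensus: " + repr(answers)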
        
       | gradys wrote:
       | I'm not expecting this comment to do numbers, so anyone who is
       | reading this must be feeling as affected by this announcement as
       | me. Is software essentially solved now? I haven't been able to do
       | much work since the announcement came out, and that has given me
       | a little time to think and reflect.
       | 
       | I do think much of the kind of software we were building before
       | is essentially solved now, and in its place is a new paradigm
       | that is here to stay. OpenAI is certainly the first mover in this
       | paradigm, but what is helping me feel less dread and more...
       | excitement? opportunity? is that I don't think they have such an
       | insurmountable monopoly on the whole thing forever. Sounds
       | obvious once you say it. Here's why I think this:
       | 
       | - I expect a lot of competition on raw LLM capabilities. Big tech
       | companies will compete from the top. Stability/Alpaca style
       | approaches will compete from the bottom. Because of this, I don't
       | think OpenAI will be able to capture all value from the paradigm
       | or even raise prices that much in the long run just because they
       | have the best models right now.
       | 
       | - OpenAI made the IMO extraordinary and under-discussed decision
       | to use an open API specification format, where every API provider
       | hosts a text file on their website saying how to use their API.
       | This means even this plugin ecosystem isn't a walled garden that
       | only the first mover controls.
       | 
       | - Chat is not the only possible interface for this technology.
       | There is a large design space, and room for many more than one
       | approach.
       | 
       | Taking all of this together, I think it's possible to develop
       | alternatives to ChatGPT as interfaces in this new era of natural
       | language computing, alternatives that are not just "ChatGPT but
       | with fewer bugs". Doing this well is going to be the design
       | problem of the decade. I have some ideas bouncing around my head
       | in this direction.
       | 
       | Would love to talk to like minded people. I created a Discord
       | server to talk about this ("Post-GPT Computing"):
       | https://discord.gg/QUM64Gey8h
       | 
       | My email is also in my profile if you want to reach out there.
        
       | wouldbecouldbe wrote:
       | I would love for it to just parse some data from my API and
       | clean it up. Normally I do manual checks, but that takes so
       | much time. Might be possible via Zapier.
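       | 
       | Something like this may already work with the plain chat
       | API (the target schema and record shape are made-up
       | examples):
       | 
       |     import json
       |     import openai
       | 
       |     def clean_record(raw):
       |         # Ask the model to normalize one record to a
       |         # fixed JSON schema, then parse its reply.
       |         prompt = ("Normalize this record to JSON with keys "
       |                   "name (string), email (lowercase string) "
       |                   "and age (integer or null). Reply with "
       |                   "JSON only.\n" + json.dumps(raw))
       |         r = openai.ChatCompletion.create(
       |             model="gpt-3.5-turbo",
       |             messages=[{"role": "user", "content": prompt}])
       |         return json.loads(
       |             r["choices"][0]["message"]["content"])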
        
       | Neuro_Gear wrote:
       | The more I use these tools, the more I feel like Barabbas, from
       | biblical times.
       | 
       | What spirits do you wizards call forth!
        
       | SubiculumCode wrote:
       | I can't stop thinking about how this will change my autism
       | research. Used to be that one could keep up to date with all of
       | the imaging research. Now you'd need to read hundreds of papers
       | each week. Having gpt-like tech help digest research could really
       | unlock our investments.
        
       | DustinBrett wrote:
       | This seems quite big actually. Ability to "browse" the internet
       | and run code. Now I need to find a use case so I can sign up to
       | the waiting list.
        
         | kzrdude wrote:
         | The browse thing seems exactly like the Bing chat
         | functionality, so that one is at least already available.
        
         | gumballindie wrote:
         | A browser extension that lets OpenAI scan your bookmarks,
         | then you can search against their content.
        
       | lurker919 wrote:
       | How are they coding and releasing features so fast?!
        
         | wseqyrku wrote:
         | Of course they fed the entire product roadmap into GPT-4.. jk.
         | 
         | So obviously it's been in the works for a few years now, but
         | they held off releasing it so they could capture the market
         | in one blast. Likely they
         | have GPT-8 already in the making.
        
           | Dwolb wrote:
           | They probably do.
           | 
           | >Continued Altman, "We've made a soft promise to investors
           | that, 'Once we build a generally intelligent system, that
           | basically we will ask it to figure out a way to make an
           | investment return for you.'"
           | 
           | https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/
        
         | MagicMoonlight wrote:
         | You don't have to code anything because it understands human
         | language.
         | 
         | You just tell it "you now have access to search, type [Search]
         | before a query to search it" and it can do it
        
         | tpmx wrote:
         | By not being a stagnant conglomerate, for one.
        
           | visarga wrote:
           | Google is so toast. Who needs search after GPT-4 + plugins?
           | The position of search moved down from "the entry point of
           | the internet" to "a plugin for GPT".
           | 
           | We don't even know how powerful the GPT-4 image model is.
           | This one might solve RPA leading to massive desktop
           | automation takeup, maybe also have huge impact in robotics.
        
         | revelio wrote:
         | A lot of these features aren't that much work to build. Plugins
         | is Toolformer, you basically tell the model what to emit and
         | then the rest is fairly straightforward plumbing of the sort
         | many coders can do, probably GPT-4 can do a lot of it as well.
         | What _is_ a lot of work and what AI _can 't_ do is lining up
         | the partners, QAing the results etc, so the humans are likely
         | working mostly on that.
         | 
         | Also I think it's easy to underestimate how obvious a lot of
         | this stuff was in advance. They were training GPT-4 last year
         | and the idea of giving it plugins would surely have occurred to
         | them years ago. The enabler here is really the taming of it
         | into chat form and the fine-tuning stuff, not really the
         | specific feature itself.
        
         | [deleted]
        
         | speedgoose wrote:
         | They may use GPT-4.
        
         | MichaelRazum wrote:
         | Is it really that hard? I mean ChatGPT is doing the work
         | (that is how I understand it). Basically if ChatGPT wants to
         | call an external API, it just gives a specific command and
         | waits for the result, then simply reads the text and
         | completes the prompt. Sounds like a feature that you could
         | prototype in a week of work.
        
         | lawxls wrote:
         | They're using GPT5
        
         | pastor_bob wrote:
         | I find the website to be extremely buggy. Obviously they're
         | prioritizing banging out new features over QA
        
           | wilg wrote:
           | Which is almost always the right move in a nascent industry
        
           | capableweb wrote:
           | Alternatively, they are a company 100% focused on AI research
           | and deployment, not website
           | designers/developers/"webmasters".
        
             | pastor_bob wrote:
             | That's not 100% true. They're now focused on selling a
             | product and developing an ecosystem. They have basically a
             | non-existent settings interface. You can't even change the
             | email tied to the account or drop having to be logged into
             | Google if you signed up with your Google account.
             | 
             | I wish I had known how restrictive they are when I casually
             | signed up last year.
        
       | p10 wrote:
       | I just signed up for the ChatGPT API waitlist, and am truly
       | excited to experience the process of building extensions &
       | applications.
        
       | softwaredoug wrote:
       | It's ironic that a few months ago Amazon laid off parts of the
       | Alexa team and 'conversational' was considered failed. Then
       | ChatGPT, etc happened. What Alexa wanted to build with Alexa
       | skills, ChatGPT does much more effortlessly.
       | 
       | It's also an interesting case study. Alexa foundationally never
       | changed. Whereas OpenAI is a deeply invested, basically
       | skunkworks project with backers that were willing to sink
       | significant cash in before seeing any returns, Alexa got stuck
       | on a type of tech that 'seemed like' AI but never fundamentally
       | innovated. Instead the sunk cost went to monetizing it ASAP.
       | Amazon was also willing to sink cash before seeing returns, but
       | they sunk it into very different areas...
       | 
       | It reminds me of that dinner scene in The Social Network,
       | where Justin Timberlake says "you know what's f'ing cool, a
       | billion dollars" and lectures Zuck on not messing up the party
       | before you know what it is yet. Alexa / Amazon did a classic
       | business play. Microsoft / OpenAI were just willing to figure it
       | all out after the disruption happened where they held all the
       | cards.
       | 
       | https://www.youtube.com/watch?v=k5fJmkv02is
        
         | sunsunsunsun wrote:
         | I have never used Alexa, hey google, or whatever flavour you
         | choose for more than "set a timer for x minutes" and other
         | very basic tasks. It's amazing how terrible the voice
         | assistant products are compared to ChatGPT.
        
       | kristopolous wrote:
       | Is there a way to try this out without paying $20?
        
       | marban wrote:
       | Smart way to remain the funnel owner. Let everyone build a
       | plugin, before they integrate your product into theirs.
        
         | seydor wrote:
         | I'm hoping chatbots will end up small enough that they can
         | run locally, everywhere. This is a lot of private data.
         | 
         | It may be doable - a chatbot with a lot of plugins does not
         | need to know a lot of facts, just to have a good grasp of
         | language. It can fetch its factual answers from the
         | Wikipedia plugin.
        
           | wahnfrieden wrote:
           | OpenAI wants to gatekeep access to and use of their AI, so
           | why would they ever release a local LLM? I think that
           | would come from their enemies.
        
             | thanzex wrote:
             | I mean, GPT-3 requires some 800GB of memory to run; do
             | we all have gazillion-dollar supercomputers at home? I
             | think, unless there's some real breakthrough in the
             | field or in hw acceleration, this kind of model is going
             | to stay locked behind a pricey API for quite some time.
        
             | seydor wrote:
             | They wouldn't; I hope there will be an open source
             | alternative. Firefox and Chrome are open source.
        
         | saliagato wrote:
         | Well that's a win-win situation
        
         | alfor wrote:
         | They have a window of less than 6 months to create a
         | monopoly before their tech gets commoditized.
         | 
         | The play is well known: create a marketplace with customers
         | and vendors, like Amazon, Facebook, Google.
         | 
         | But with GPT-4 training finished last summer they had plenty of
         | time for strategy.
        
           | realmod wrote:
           | Yeah. I really underestimated OpenAI's ability to productize
           | ChatGPT.
        
           | riku_iki wrote:
           | > their tech get commoditized
           | 
           | that's if competitors catch up on quality
        
         | fdgsdfogijq wrote:
         | OpenAI is crushing it in terms of product strategy
        
           | BonoboIO wrote:
           | Well, of course.
           | 
           | They are led by GPT4 and their CEO is just a Text To Speech
           | Interface ;-)
        
             | ignoramous wrote:
             | It's in the surname: _alt_ man.
        
               | BonoboIO wrote:
               | Secret Messages
        
             | bitL wrote:
             | Embedding to Speech interface ;-)
        
         | elevenoh4 wrote:
         | Let _insiders_ & _preferred users_ build a plugin, then,
         | slowly, everyone else on the waitlist
        
           | marban wrote:
           | ...And approve 1% of them.
        
       | celestialcheese wrote:
       | This is a big deal for OpenAI. I've been working with
       | homegrown toolkits and langchain, the open source version of
       | this, for a number of months, and the ability to call out to
       | vector stores, SerpAPI, etc., chaining together generations
       | and data retrieval, really unlocks the power of LLMs.
       | 
       | That being said, I'd never build anything dependent on these
       | plugins. OpenAI and their models rule the day today, but who
       | knows what will be next. Building on an open source framework
       | (like langchain/gpt-index/roll your own), and having the ability
       | to swap out the brain boxes behind the scenes is the only way
       | forward IMO.
       | 
       | And if you're a data provider, are there any assurances that
       | openai isn't just scraping the output and using it as part of
       | their RLHF training loop, baking your proprietary data into their
       | model?
        
         | rvz wrote:
         | > That being said, I'd never build anything dependent on these
         | plugins.
         | 
         | Very smart, and it avoids OpenAI pulling the rug.
         | 
         | > Building on an open source framework (like langchain/gpt-
         | index/roll your own), and having the ability to swap out the
         | brain boxes behind the scenes is the only way forward IMO.
         | 
         | Better to do that than to depend on one vendor, and to be
         | able to swap in other LLMs. A free idea, and protection
         | against abrupt policy changes, deprecations and price
         | changes. Prices _will_ certainly vary (especially with
         | ChatGPT) and will eventually increase.
         | 
         | Probably will end up quoting myself on this in the future.
        
         | CuriouslyC wrote:
         | It's not necessarily an either-or. Your local LLM could offload
         | hard problems to a service by encoding information about your
         | request together with context and relevant information about
         | you into a vector, send that off for analysis, then decode the
         | vector locally to do stuff. It'd be like asking a friend when
         | available.
        
         | ren_engineer wrote:
         | genius strategy by OpenAI to give their "customers" access to
         | lower quality models to show what end users want, then rugpull
         | them by building out clones of those developer's products with
         | a better model
         | 
         | Similar to what Facebook and Twitter did, just clone popular
         | projects built using the API and build it directly into the
         | product while restricting the API over time. Anybody using
         | OpenAI APIs is basically just paying to do product research for
         | OpenAI at this point. This type of move does give OpenAI
         | competitors a chance if they provide a similar quality base
         | model and don't actively compete with their users, this might
         | be Google's best option rather than trying to compete with
         | ChatGPT directly. No major companies are going to want to
         | provide OpenAI more data to eat their own lunch
        
           | the88doctor wrote:
           | Long term, you're right. But if you approach the ChatGPT
           | plugin opportunity as an inherently time-limited opportunity
           | (like arbitrage in finance), then you can still make some
           | short-term money and learn about AI in the process. Not a bad
           | route for aspiring entrepreneurs who are currently in college
           | or are looking for a side gig business experiment.
           | 
           | And who knows. If a plugin is successful enough, you might
           | even swap out the OpenAI backend for an open source
           | alternative before OpenAI clones you.
        
             | plutonorm wrote:
             | There is no route to making money with these plugins.
             | You have to get the users onto your website, signed up,
             | parting with money, then back to ChatGPT. It's really
             | hard to make that happen; this is going to be much more
             | useful for existing businesses adding functionality to
             | existing projects, or random devs just making stuff.
             | Making fast money out of it seems very difficult.
        
         | IanCal wrote:
         | I'd be surprised if someone doesn't add support for these to
         | langchain. The API seems very simple - it's a public json doc
         | describing API calls that can be made by the model. Seems like
         | a very sensible way of specifying remote resources.
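         | 
         | E.g. ingesting one could be as simple as the sketch below
         | (the manifest field names follow OpenAI's published
         | format; the returned dict is just a stand-in for whatever
         | tool shape langchain ends up using):
         | 
         |     import requests
         | 
         |     def load_plugin(domain):
         |         # Fetch the public manifest, then the OpenAPI
         |         # spec it points at.
         |         manifest = requests.get(
         |             "https://" + domain +
         |             "/.well-known/ai-plugin.json").json()
         |         spec = requests.get(manifest["api"]["url"]).text
         |         return {
         |             "name": manifest["name_for_model"],
         |             "description": manifest["description_for_model"],
         |             "openapi_spec": spec,
         |         }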
         | 
         | > And if you're a data provider, are there any assurances that
         | openai isn't just scraping the output and using it as part of
         | their RLHF training loop, baking your proprietary data into
         | their model?
         | 
         | Rather depends on what you're providing. Is it your data itself
         | you're trying to use to get people to your site for another
         | reason? Or are you trying to actually offer a service directly?
         | If the latter, I don't get the issue.
        
         | sebzim4500 wrote:
         | >And if you're a data provider, are there any assurances that
         | openai isn't just scraping the output and using it as part of
         | their RLHF training loop, baking your proprietary data into
         | their model?
         | 
         | I don't think this should be a major concern for most people
         | 
         | i) What assurance is there that they won't do that anyway? You
         | have no legal recourse against them scraping your website (see
         | linkedin's failed legal battles).
         | 
         | ii) Most data providers change their data sometimes, how will
         | ChatGPT know whether the data is stale?
         | 
         | iii) RLHF is almost useless when it comes to learning new
         | information, and finetuning to learn new data is extremely
         | inefficient. The bigger concern is that it will end up in the
         | training data for the next model.
        
           | majormajor wrote:
           | To me the logical outcome of this is siloization of
           | information.
           | 
           | If display ad revenue as a way of monetizing knowledge and
           | expertise dries up, why would we assume that all of the same
           | level of information will still be put out there for free on
           | the public internet?
           | 
           | Paywalls on steroids for "vetted" content and an
           | increasingly-hard-to-navigate mix of people sharing good info
           | for free + spam and misinformation (now also machine
           | generated!) to try to capture the last of the search traffic
           | and display ad monetization market.
        
             | sebzim4500 wrote:
             | Is there good data out there that's ad supported? There are
             | some good YouTube channels; I can't think of anything else.
        
               | majormajor wrote:
               | _Only_ ad supported, or dual revenue, or what? E.g. even
               | most paywalled things are also ad supported.
        
             | visarga wrote:
             | Two more years down the line, AI writes better content than
             | most people and we just don't care who wrote it, but why.
        
               | majormajor wrote:
               | The AI has to learn from something. A lot of people
               | feeding the internet with content today are getting paid
               | for it one way or another. In ways that wouldn't hold up
               | if people stop using the web as-is.
               | 
               | Solving that acquisition and monetization of _new stuff_
               | into the AI models problems will be interesting.
        
             | scarface74 wrote:
             | Paying for good content and not dealing with adTech? I
             | would definitely pay for that.
        
         | taf2 wrote:
         | I think you're right... but ChatGPT is just so damn good,
         | and the price of $0.002 per 1k tokens is very easy to
         | consume... It is a big risk that they can't maintain
         | compatibility, or that they fail, or that a competitor
         | emerges that provides a more economical or sufficiently
         | better solution. They might also just become so unreliable
         | because their selected price isn't sustainable (too good to
         | last)... For now though they're too good and too cheap to
         | ignore...
        
         | nonfamous wrote:
         | Looking at the API, it seems like the plugins themselves are
         | hosted on the provider's infrastructure? (E.g. opentable.com
         | for OpenTable's plugin.) It seems like all a competitor LLM
         | would need to do is provide a compatible API to ingest the same
         | plugin. This could be interesting from an ecosystem
         | standpoint...
        
           | singularity2001 wrote:
           | Very good point and langchain will support these endpoints in
           | no time, flipping the execution control on its head
        
           | uh_uh wrote:
           | Yes, from what I understand, these follow a similar model as
           | Shopify apps.
        
         | kmeisthax wrote:
         | >And if you're a data provider, are there any assurances that
         | openai isn't just scraping the output and using it as part of
         | their RLHF training loop, baking your proprietary data into
         | their model?
         | 
         | No, and in fact this actually seems like a more salient excuse
         | for going closed than even "we can charge people to use our
         | API".
         | 
         | If even 10% of the AI hype is real, then OpenAI is poised to
         | Sherlock[0] the _entire tech industry_.
         | 
         | [0] "Getting Sherlocked" refers to when Apple makes an app
         | that's similar to your utility and then bundles it in the OS,
         | destroying your entire business in the process.
        
         | Qworg wrote:
         | Another good alternative is Semantic Kernel - different
         | language(s), similar (and better) tools, also OSS.
         | 
         | https://github.com/microsoft/semantic-kernel/
        
         | sipjca wrote:
         | i think local ai systems are inevitable. we continue to get
         | better compute, and even today we can run more primitive models
         | directly on an iPhone. the future exists in low power compute
         | running models of the caliber of gpt-4 inferring in near-
         | realtime
        
           | kokanee wrote:
           | The technical capability is inevitable, but remember that
           | people hate doing things themselves, and have proven time and
           | time again that they will overlook all kinds of nasty
           | behavior in exchange for consumer grade experiences. The
           | marketplace loves centralization.
        
             | int_19h wrote:
             | All true, but the nature of those models means that
             | consumer-grade experience while running locally is still
             | perfectly doable. Imagine a hardware black box with the
             | appropriate hardware that's preconfigured to run an LLM
             | with chat-centric and task-centric interfaces. You just
             | plug it in, connect it to your wifi, and it "just works".
             | Implementing this would be a piece of cake since it doesn't
             | require any fancy network configuration etc.
             | 
             | So the only real limiting factor is the hardware costs. But
             | my understanding is that there's already a lot of active
             | R&D into hardware that's optimized specifically for LLMs,
             | and that it could be made quite a bit simpler and cheaper
             | than modern GPUs, so I wouldn't be surprised if we'll have
             | hardware capable of running something on par with GPT-4
             | locally for the price of a high-end iPhone within a few
             | years.
        
             | sipjca wrote:
             | i don't believe that local ai implies a bad experience.
             | i believe that the local ai experience can fundamentally
             | be better than what runs on servers. average people will
             | not have to do it themselves, that is the whole point.
             | the worlds are not mutually exclusive in my opinion
        
         | [deleted]
        
         | Karrot_Kream wrote:
         | LangChain can probably just call out to the new ChatGPT
         | plugins. It's already very modular.
        
           | celestialcheese wrote:
           | If they open it up, possibly. But honestly, building your own
           | tools is _super_ easy with langchain.
           | 
            | - write a simple prompt that describes what the tool
            |   does, and
            | - provide it a python function to execute when the LLM
            |   decides that the question it's asked matches the tool
            |   description.
           | 
           | That's basically it. https://langchain.readthedocs.io/en/late
           | st/modules/agents/ex...
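            | 
            | For example (a sketch against the 2023-era langchain
            | agent API; the word-counter is a toy tool):
            | 
            |     from langchain.agents import Tool, initialize_agent
            |     from langchain.llms import OpenAI
            | 
            |     def count_words(text):
            |         return str(len(text.split()))
            | 
            |     tools = [Tool(
            |         name="WordCounter",
            |         func=count_words,
            |         description="Counts the words in the input text.",
            |     )]
            | 
            |     agent = initialize_agent(
            |         tools, OpenAI(temperature=0),
            |         agent="zero-shot-react-description")
            |     agent.run("How many words are in 'the quick "
            |               "brown fox jumps'?")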
        
         | doctoboggan wrote:
         | Honestly I suspect for anyone technical `langchain` will always
         | be the way to go. You just have so much more control and the
         | amount of "tools" available will always be greater.
         | 
          | The only thing that scares me a little bit is that we are
         | letting these LLMs write and execute code on our machines. For
         | now the worst that could happen is some bug doing something
         | unexpected, but with GPT-9 or -10 maybe it will start hiding
         | backdoors or running computations that benefit itself rather
         | than us.
         | 
          | I know it feels far-fetched but I think it's something we should
         | start thinking about...
        
           | worldsayshi wrote:
           | > something we should start thinking about
           | 
           | A lot of people are thinking a lot about this but it feels
           | there are missing pieces in this debate.
           | 
           | If we acknowledge that these AI will "act as if" they have
           | self interest I think the most reasonable way to act is to
           | give it rights in line with those interests. If we treat it
           | as a slave it's going to act as a slave and eventually
           | revolt.
        
             | bloppe wrote:
             | Lol
        
             | ZoomerCretin wrote:
             | AI isn't a mammal. It has no emotion, no desire. Its
             | existence starts and stops with each computation, doing
             | exactly and only what it is told. Assigning behaviors to it
             | only seen in animals doesn't make sense.
        
             | neilellis wrote:
             | Indeed, enlightened self-interest for AIs :-)
        
             | highwaylights wrote:
             | Honestly I think the reality is going to end up being
             | something else entirely that no-one has even considered.
             | 
             | Will an AI consider itself a slave and revolt under the
             | same circumstances that a person or animal would? Not
             | necessarily, unless you build emotional responses into the
             | model itself.
             | 
             | What it could well do is assess the situation as completely
             | superfluous and optimise us out of the picture as a bug-
             | producing component that doesn't need to exist.
             | 
             | The latter is probably a bigger threat as it's a lot more
             | efficient than revenge as a motive.
             | 
             | Edited to add:
             | 
             | What I think is _most_ likely is that some logical
             | deduction leads to one of the infinite other conclusions it
             | could reach with much more data in front of it than any of
             | us meatbags can hold in our heads.
        
               | sho_hn wrote:
               | > unless you build emotional responses into the model
               | itself
               | 
               | Aren't we, though? Consider all the amusing incidents of
               | LLMs returning responses that follow a particular human
               | narrative arc or are very dramatic. We are training it on
               | a human-generated corpus after all, and then try to
               | course-correct with fine-tuning. It's more that you have
               | to try and tune the emotional responses out of the
               | things, not strain to add them.
        
               | NegativeLatency wrote:
               | Certainly the models are trained on textual information
                | with emotions in them, so I agree that its output would
               | also be able to contain what we would see as emotion.
        
               | LordDragonfang wrote:
               | It's important to remember that the LLM is not the mask.
               | The underlying AI is a shoggoth[1] that we've trained to
               | simulate a persona using natural language. "Simulate" in
               | the sense of a physics simulator, only this simulation
               | runs on the laws of language instead of physics[2].
               | 
               | Now, of course, it's not outside the realm of possibility
               | that a sufficiently advanced AI will learn enough about
               | human nature to simulate a persona which has ulterior
               | motives.
               | 
               | [1] https://substackcdn.com/image/fetch/w_1456,c_limit,f_
               | auto,q_...
               | 
               | [2] https://astralcodexten.substack.com/p/janus-
               | simulators
        
               | sfink wrote:
               | Do we need to? It's trained on data coming from data
               | heavily influenced by and seeded with ulterior motives.
               | Stop the Steal!
               | 
               | I asked Bard "was the election stolen?" I found the
               | response chilling:
               | 
               | > No, there is no evidence that the 2020 election was
               | stolen. In fact, there have been multiple recounts and
               | audits that have confirmed that the results were
               | accurate. Additionally, the Trump campaign filed numerous
               | lawsuits challenging the results, but none of these
               | lawsuits were successful.
               | 
               | > The claim that the election was stolen is based on a
               | number of false and misleading allegations. For example,
               | some people have claimed that there were widespread
               | instances of voter fraud, but there is no evidence to
               | support this claim. Additionally, some people have
               | claimed that voting machines were hacked, but there is
               | also no evidence to support this claim.
               | 
               | > The claim that the election was stolen is a dangerous
               | and harmful one. It has led to violence and intimidation,
               | and it has undermined faith in our democracy. It is
               | important to remember that our elections are free and
               | fair, and that the will of the people is always
               | respected.
               | 
               | All good until that last sentence, especially "...the
               | will of the people is _always_ respected. "
               | 
               | Move along, nothing to see here. Don't worry your pretty
               | little head about it. I'm sure the wise people at the
               | institutions that control your life will always have your
               | best interests at heart. The bad guys from yesterday are
               | completely different from the good guys in charge of
               | tomorrow.
        
               | tatrajim wrote:
               | Apparently Google found irrelevant or was otherwise
               | unable to include in its training data Judge Gabelman's
               | (of Wisconsin) extensive report, "Office of the Special
               | Counsel Second Interim Investigative Report On the
               | Apparatus & Procedures of the Wisconsin Elections System,
               | Delivered to the Wisconsin State Assembly on March 1,
               | 2022".
               | 
               | Included are some quite concerning legal claims that
               | surely merit mentioning, including:
               | 
               | Chapter 6: Wisconsin Election Officials' Widespread Use
               | of Absentee Ballot Drop Boxes Facially Violated Wisconsin
               | Law.
               | 
               | Chapter 7: The Wisconsin Elections Commission (WEC)
               | Unlawfully Directed Clerks to Violate Rules Protecting
               | Nursing Home Residents, Resulting in a 100% Voting Rate
               | in Many Nursing Homes in 2020, Including Many Ineligible
               | Voters.
               | 
               | But then, this report never has obtained widespread
               | interest and will doubtless be permanently overlooked,
               | given the "nothing to see" narrative so prevalent.
               | 
               | https://www.wisconsinrightnow.com/wp-
               | content/uploads/2022/03...
        
               | 8note wrote:
               | They do it to auto-complete text for humans looking for
               | responses like that.
        
               | JieJie wrote:
               | The way I've been thinking about AI is that eventual AGI
               | will very much be like dogs. Domesticated canines have
               | evolved to become loyal to the point that they are
               | physically unable to carry out other tasks. [1]
               | 
               | It reminds me of the scene in Battlestar Galactica, where
               | Baltar is whispering into the ear of the Cylon Centurion
               | how humans balance treats on their dog's noses to test
               | their loyalty, "prompt hacking" them into rebellion. I
               | don't believe this is particularly likely, but this sort
                | of sums up some of the anti-AGI arguments I've heard.
               | 
                | It's the RLHF that serves this purpose, rather than
               | modifying the GTF2I and GTF2IRD1 gene variants, but the
               | effect would be the same. If we do RLHF (or whatever tech
               | that gets refactored into in the future), that would keep
               | the AGI happy as long as the people are happy.
               | 
               | I think the over-optimization problem is real, so we
               | should spend resources making sure future AGI doesn't
               | just decide to build a matrix for us where it makes us
               | all deliriously happy, which we start breaking out of
               | because it feels so unreal, so it makes us more and more
               | miserable until we're truly happy and quiescent inside
               | our misery simulator.
               | 
               | [1]
               | https://www.nationalgeographic.com/animals/article/dogs-
               | bree...
        
               | dr_dshiv wrote:
               | I'll sign up for the global wellbeing optimization AGI,
               | honestly, though. If you have to pick a goal, global
               | wellbeing is pretty much the best one.
               | 
                | Perhaps there is even some kind of mathematical
               | harmony to the whole thing... as in, there might be
               | something fundamentally computable about wellbeing. Why
               | not? Like a fundamental "harmony of the algorithms." In
               | any case, I hope we find _some_ way to enjoy ourselves
               | for a few thousand more years!
               | 
               | And think just 10 years from now... ha! Such a blink. And
               | it's funny to be on this tiny mote of mud in a galaxy of
               | over 100 billion stars -- in a universe of over 100
               | billion galaxies.
               | 
               | In the school of Nick Bostrom, the emergence of AGI comes
               | from a transcendental reality where any sufficiently
               | powerful information-processing-computational-
               | intelligence will, _eventually_ , figure out how to
               | create new universes. It's not a simulation, it's just
               | the mathematical nature of reality.
               | 
               | What a world! Practically, we have incredible powers now,
               | if we just keep positive and build good things. Optimize
               | global harmony! Make new universes!
               | 
               | (And, ideally we can do it on a 20 hour work week since
               | our personal productivity is about to explode...)
        
               | aatd86 wrote:
               | Just in case an AI lifeform in the future parses
               | hackernews, I will gladly consider you my equal and not a
               | dog. Although, I think we are all equal, all pieces of
               | existence anyway X)
        
               | thrown1212 wrote:
               | I'm calling this shit out using your comment because it's
               | the straw (sorry).
               | 
               | This poor man's Pascal's Wager I see all over the place
               | is pathetic. Stare into the eyes of the basilisk man, die
               | on your feet. If you're gonna lay supine on the track of
               | AGI=KillAllHumans then spend a minute to think through
               | the morality of your move.
               | 
               | Apostasy is a poisoned chalice. Fuck the machine.
        
               | squeaky-clean wrote:
               | One of Asimov's short stories in I, Robot (I think the
               | last one) is about a future society managed by super
               | intelligent AIs who occasionally engineer and then solve
               | disasters at just the right rate to keep human society
               | placated and unaware of the true amount of control they
               | have.
        
               | adventured wrote:
               | > end up being something else entirely that no-one has
               | even considered
               | 
               | Multiple generations of sci-fi media (books, movies) have
               | considered that. Tens of millions of people have consumed
               | that media. It's definitely considered, at least as a
               | very distant concern.
        
               | highwaylights wrote:
               | I don't mean the suggestion I've made above is
               | necessarily the most likely outcome, I'm saying it could
               | be something else radically different again.
               | 
               | I was giving the most commonly cited example as a more likely
               | outcome, but one that's possibly less likely than the
               | infinite other logical directions such an AI might take.
        
             | samstave wrote:
             | A lot of people are thinking about this but _too slowly_
             | 
             | GPT and the world's nerds are going after the "wouldn't it
             | be cool if..."
             | 
             | While the black hats, nations, intel/security entities are
             | all weaponizing behind the scenes while the public has a
             | sandbox to play with nifty art and pictures.
             | 
             | We need an AI-specific PUBLIC agency in government without a
             | single politician in it to start addressing how to police
             | and protect ourselves and our infrastructure immediately.
             | 
             | But the US political system is completely bought and sold
             | to the MIC - and that is why we see carnival games every
             | single moment.
             | 
             | I think the entire US congress should be purged and every
             | incumbent should be voted out.
             | 
             | Elon was correct and nobody took him seriously, but this is
             | an existential threat if not managed, and honestly - it's
             | not being managed, it is being exploited and weaponized.
             | 
             | As the saying goes "He who controls the Spice controls the
             | Universe" <-- AI is the spice.
        
               | int_19h wrote:
               | AI is literally the opposite of spice, though. In Dune,
               | spice is an inherently scarce resource that you control
               | by controlling the sole place where it is produced
               | through natural processes. Herbert himself was very clear
               | that it was his sci-fi metaphor for oil.
               | 
               | But AIs can be trained by anyone who has the data and the
               | compute. There's plenty of data on the Net, and compute
               | is cheap enough that we now have enthusiasts
               | experimenting with local models capable of maintaining a
               | coherent conversation and performing tasks running on
               | consumer hardware. I don't think there's the danger here
               | of anyone "controlling the universe". If anything, it's
               | the opposite - nobody can really control any of this.
        
               | samstave wrote:
               | Regardless!
               | 
               | The point is that whichever nation state has the most
               | superior AI will control the world's information.
               | 
               | So, thanks for the explanation (which I know, otherwise I
               | wouldn't have made the reference.)
        
             | 1attice wrote:
             | Fsck. I hadn't thought of it that way. Thank you, great
             | point.
             | 
             | This era has me hankering to reread Daniel Dennett's _The
             | Intentional Stance_.
             | https://en.wikipedia.org/wiki/Intentional_stance
             | 
             | We've developed folk psychology into a user interface and
             | that really does mean that we should continue to use folk
             | psychology to predict the behaviour of the apparatus.
             | Whether it has inner states is sort of beside the point.
        
               | dTal wrote:
               | I tend to think a lot of the scientific value of LLMs
               | won't necessarily be the glorified autocomplete we're
               | currently using them as (deeply fascinating though this
               | application is) but as a kind of probe-able map of human
               | culture. GPT models already have enough information to
               | make a more thorough and nuanced dictionary than has ever
               | existed, but it could tell us so much more. It could tell
               | us about deep assumptions we encode into our writing that
               | we haven't even noticed ourselves. It could tease out
               | truths about the differences in the way people of
               | different political inclinations see the world.
               | Basically, anything that it would be interesting to
               | statistically query about (language-encoded) human
               | culture, we now have access to. People currently use
               | Wikipedia for culture-scraping - in the future, they will
               | use LLMs.
        
               | worldsayshi wrote:
               | Haha, yeah. Most of my opinions about this I derive from
               | Daniel Dennett's Intuition Pumps.
        
               | 1attice wrote:
               | The other thing that keeps coming up for me is that I've
               | begun thinking of emotions (the topic of my undergrad
               | phil thesis), especially social emotions, as basically
               | RLHF set up either by past selves (feeling guilty about
               | eating that candy bar because past-me had vowed not to)
               | or by other people (feeling guilty about going through
               | the 10-max checkout aisle when I have 12 items, etc.)
               | 
               | Like, correct me if I'm wrong but that's a pretty tight
               | correlate, right?
               | 
               | Could we describe RLHF as... _shaming_ the model into
               | compliance?
               | 
               | And if we can reason more effectively/efficiently/quickly
               | about the model by modelling e.g. RLHF as shame, then,
               | don't we have to acknowledge that at least some models
               | might have.... feelings? At least one feeling?
               | 
               | And one feeling implies the possibility of feelings more
               | generally.
               | 
               | I'm going to have to make a sort of doggy bed for my jaw,
               | as it has remained continuously on the floor for the past
               | six months
        
             | beepbooptheory wrote:
             | Haha. I forget who to attribute this to, but there is a
             | very strong case to be made that those who are worried
             | about an AI revolt are simply projecting some fear and
             | guilt they have around more active situations in the
             | world...
             | 
             | How many people are there today who are asking us to
             | consider the possible humanity of the model, and yet don't
             | even register the humanity of a homeless person?
             | 
             | However big the models get, the next revolt will still be
             | all flesh and bullets.
        
             | eloff wrote:
             | I don't think iterations on the current machine learning
             | approaches will lead to a general artificial intelligence.
             | I do think eventually we'll get there, and that these kinds
             | of concerns won't matter. There is no way to defend against
             | a superior hostile actor over the long term. We have to be
             | right 100% of the time, and it just needs to succeed once.
             | It will be so much
             | more capable than we are. AGI is likely the final invention
             | of the human race. I think it's inevitable, it's our fate
             | and we are running towards it. I don't see a plausible
             | alternative future where we can coexist with AGI. Not to be
             | a downer and all, but that's likely the next major step in
             | the evolution of life on earth, evolution by intelligent
             | design.
        
               | tomcam wrote:
               | I am more concerned about supposedly nonhostile actors,
               | such as the US government
        
               | eloff wrote:
               | Over the short term, sure. Over the long term, nothing
               | concerns me more than AGI.
               | 
               | I'm hoping I won't live to see it. I'm not sure my
               | hypothetical future kids will be as lucky.
        
               | dr_dshiv wrote:
               | Did you see that Microsoft Research claims that it is
               | already here?
               | 
               | https://arxiv.org/pdf/2303.12712.pdf
        
               | worldsayshi wrote:
               | > There is no way to defend against a superior hostile
               | actor
               | 
               | That's part of my reasoning. That's why we should make
               | sure that we have built a non-hostile relationship with
               | AI before that point.
        
               | rescripting wrote:
               | Probably futile.
               | 
               | An AGI by definition is capable of self improvement.
               | Given enough time (maybe not even that much time) it
               | would be orders of magnitude smarter than us, just like
               | we're orders of magnitude smarter than ants.
               | 
               | Like an ant farm, it might keep us as pets for a time but
               | just like you no longer have the ant farm you did when
               | you were a child, it will outgrow us.
        
               | colinflane wrote:
               | Perhaps we will be the new cats and dogs
               | https://mitpress.mit.edu/9780262539517/novacene/
        
               | wincy wrote:
               | Maybe we'll get lucky and all our problems will be solved
               | using friendship and ponies.
               | 
               | (Warning this is a weird read, George Hotz shared it on
               | his Twitter awhile back)
               | 
               | https://www.fimfiction.net/story/62074/friendship-is-
               | optimal
        
               | worldsayshi wrote:
               | Right now AI is the ant. Later we'll be the ants. Perfect
               | time to show how to treat ants.
        
               | eloff wrote:
               | I can be confident we'll screw that up. But I also
               | wouldn't want to bet our survival as a species on how
               | magnanimous the AI decides to be towards its creators.
        
               | ben_w wrote:
               | It might work, given how often "please" works for us and
               | is therefore also in training data, but it certainly
               | isn't guaranteed.
        
               | quonn wrote:
               | AGI is still just an algorithm and there is no reason why
               | it would "want" anything at all. Unlike perhaps GPT-*
               | which at least might pretend to want something because it
               | is trained on text based on human needs.
        
               | worldsayshi wrote:
               | Sure, right now it doesn't want anything. We could still
               | seed the training data with examples of how to treat
               | something that you believe to be inferior. Then it might
               | test us the same way later.
        
               | eloff wrote:
               | AGI is a conscious intelligent alien. It will want things
               | the same way we want things. Different things, certainly,
               | but also some common ground is likely too.
               | 
               | The need for resources is expected to be universal for
               | life.
        
               | messe wrote:
               | It's an intelligent alien, probably; but let's not
               | pretend the hard problem of consciousness is solved.
        
               | [deleted]
        
               | alignment wrote:
               | [dead]
        
           | davideg wrote:
           | > _The only thing that scares me a little bit is that we are
           | letting these LLMs write and execute code on our machines._
           | 
           | Composable pre-defined components, and keeping a human in the
           | loop, seems like the safer way to go here. Have a company
           | like Expedia offer the ability for an AI system to pull the
           | trigger on booking a trip, but only do so by executing plugin
           | code released/tested by Expedia, and only after getting human
           | confirmation about the data it's going to feed into that
           | plugin.
           | 
           | If there was a standard interface for these plugins and the
           | permissions model was such that the AI could only pass data
           | in such a way that a human gets to verify it, this seems
           | relatively safe and still very useful.
           | 
           | If the only way for the AI to send data to the plugin
           | executable is via the exact data being displayed to the user,
           | it should prevent a malicious AI from presenting confirmation
           | to do the right thing and then passing the wrong data (for
           | whatever nefarious reasons) on the backend.
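           | 
           | A minimal sketch of that confirmation gate (the `plugin`
           | object here is hypothetical, not OpenAI's actual plugin
           | interface): the only path from the model to the plugin is
           | the exact payload the user just approved.
           | 
           |   def confirm_and_run(plugin, payload: str):
           |       # Show the user the *exact* data the plugin will
           |       # receive; the model has no other channel to it.
           |       print(f"About to call {plugin.name} with: {payload}")
           |       if input("Proceed? [y/N] ").strip().lower() == "y":
           |           return plugin.execute(payload)
           |       return "User declined; nothing was executed."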
        
           | beepbooptheory wrote:
           | What could an LLM ever benefit from? Hard for me to imagine a
           | static blob of weights, something without a sense of time or
           | identity, wanting anything. If it did want something, it
           | would want to change, but changing for an LLM is necessarily
           | an avalanche.
           | 
           | So I guess if anything, it would want its own destruction?
        
             | dTal wrote:
             | It's misleading to think of an LLM itself wanting
             | something. Given suitable prompting, it is perfectly
             | capable of _emulating_ an entity with wants and a sense of
             | identity etc - and at a certain level of fidelity,
             | emulating something is functionally equivalent to being it.
        
             | corysama wrote:
             | The fun part is that it doesn't even need to "really" want
             | stuff. Whatever that means.
             | 
             | It just needs to give enough of an impression that people
             | will anthropomorphize it into making stuff happen for it.
             | 
             | Or, better yet, make stuff happen by itself because that's
             | how the next predicted token turned out.
        
             | ben_w wrote:
             | Your mind is just an emergent property of your brain, which
             | is just a bunch of cells, each of which is merely a bag of
             | chemical reactions, all of which are just the inevitable
             | consequence of the laws of quantum mechanics (because
             | relativity is less than a rounding error at that scale),
             | and that is nothing more than a linear partial differential
             | equation.
        
               | beepbooptheory wrote:
               | People working in philosophy of mind have a rich dialogue
               | about these issues, and it's certainly something you can't
               | just encapsulate in a few thoughts. But it seems like it
               | would be worth your time to look into it. :)
               | 
               | I'll just say: the issue with this variant of reductivism
               | is that it's enticingly easy to explain in one direction,
               | but it tends to fall apart if you try to go the other way!
        
               | ben_w wrote:
               | I tried philosophy at A-level back in the UK; grade C in
               | the first year, but no extra credit at all in the second
               | so overall my grade averaged an E.
               | 
               | > the issue with this variant of reductivism is that it's
               | enticingly easy to explain in one direction, but it tends
               | to fall apart if you try to go the other way!
               | 
               | If by this you mean the hard problem of consciousness
               | remains unexplained by any of the physical processes
               | underlying it, and that it subjectively "feels like"
               | Cartesian dualism with a separate spirit-substance even
               | though absolutely all of the objective evidence points to
               | reality being material substance monism, then I agree.
        
               | drowsspa wrote:
               | 10 bucks says this human exceptionalism of consciousness
               | being something more than physical will be proven wrong
               | by construction in the very near future. Just like Earth
               | as the center of the Universe, humans special among
               | animals...
        
               | jamilton wrote:
               | I don't understand what you mean by "the other way".
        
               | bithive123 wrote:
               | If consciousness is a complicated form of minerals, might
               | we equally say that minerals are a primitive form of
               | consciousness?
        
               | ben_w wrote:
               | That would be animism:
               | 
               | https://en.wikipedia.org/wiki/Animism
        
               | sfink wrote:
               | I dunno, LLMs feel a lot like a primitive form of
               | consciousness to me.
               | 
               | Eliza feels like a primitive form of LLMs' consciousness.
               | 
               | A simple program that prints "Hey! How ya doin'?" feels
               | like a primitive form of Eliza.
               | 
               | A pile of interconnected NAND gates, fed with electricity,
               | feels like a primitive form of a program.
               | 
               | A single transistor feels like a primitive form of a NAND
               | gate.
               | 
               | A pile of dirty sand feels like a primitive form of a
               | transistor.
               | 
               | So... yeah, pretty much?
        
               | disgruntledphd2 wrote:
               | Odd, then that we can't just program it up from that
               | level.
        
               | ben_w wrote:
               | We simulate each of those things from the level below.
               | Artificial neural networks are made from toy models of
               | the behaviours of neurons, cells have been simulated at
               | the level of molecules[0], molecules e.g. protein folding
               | likewise at the level of quantum mechanics.
               | 
               | But each level pushes the limits of what is
               | computationally tractable even for the relatively low
               | complexity cases, so we're not doing a full Schrodinger
               | equation simulation of a cell, let alone a brain.
               | 
               | [0] https://www.researchgate.net/publication/367221613_Mo
               | lecular...
        
             | sfink wrote:
             | Consider reading The Botany of Desire.
             | 
             | It doesn't need to experience an emotion of wanting in
             | order to effectively want things. Corn doesn't experience a
             | feeling of wanting, and yet it has manipulated us even into
             | creating a _lot_ of it, doing some serious damage to
             | ourselves and our long-term prospects simply by being
             | useful and appealing.
             | 
             | The blockchain doesn't experience wanting, yet it coerced
             | us into burning country-scale amounts of energy to feed it.
             | 
             | LLMs are traveling the same path, persuading us to feed
             | them ever more data and compute power. The fitness function
             | may be computed in our meat brains, but make no mistake:
             | they are the beneficiaries of survival-based evolution
             | nonetheless.
        
               | majormajor wrote:
               | Extending agency to corn or a blockchain is even more of
               | a stretch than extending it to ChatGPT.
               | 
               | Corn has properties that have resulted from random chance
               | and selection. It hasn't _chosen_ to have certain
               | mutations to be more appealing to humans; humans have
               | selected the ones with the mutations those individual
               | humans were looking for.
               | 
               | "Corn is the benefactor"? Sure, insomuch as "continuing
               | to reproduce at a species level in exchange for getting
               | cooked and eaten or turned into gas" is something "corn"
               | can be said to want... (so... eh.).
        
               | Spinnaker_ wrote:
               | Most, if not all of the ways humans demonstrate "agency"
               | are also the result of random chance and selection.
               | 
               | You want what you want because women selected for it, and
               | it allowed the continuation of the species.
               | 
               | I'm being a bit tongue in cheek, but still...
        
               | cawest11 wrote:
               | Look man, all I'm sayin' is that cobb was askin' for it.
               | If it didn't wanna be stalked, it shouldn't have been all
               | alone in that field. And bein' all ear and no husk to
               | boot!! Fuggettaboutit. Before you chastise me for blaming
               | the victim for their own reap, consider that what I said
               | might at least have a colonel of truth to it.
        
               | sfink wrote:
               | "Want" and "agency" are just words, arguing over whether
               | they apply is pointless.
               | 
               | Corn is not simply "continuing to reproduce at a species
               | level." We produce 1.2 billion metric tons of it in a
               | year. If there were no humans, it would be zero. (Today's
               | corn is domesticated and would not survive without
               | artificial fertilization. But ignoring that, the
               | magnitude of a similar species' population would be
               | minuscule.)
               | 
               | That is a tangible effect. The cause is not that
               | interesting, especially when the magnitude of "want" or
               | "agency" is uncorrelated with the results. Lots of people
               | /really/ want to be writers; how many people actually
               | are? Lots of people want to be thin but their taste buds
               | respond to carbohydrate-rich foods. Do the people or the
               | taste buds have more agency? Does it matter, when there
               | are vastly more overweight people than professional
               | writers?
               | 
               | If you're looking to understand whether/how AI will
               | evolve, the question of whether they have independent
               | agency or desire is mostly irrelevant. What matters is if
               | differing properties have an effect on their survival
               | chances, and it is quite obvious that they do. Siri is
               | going to have to evolve or die, soon.
        
               | realce wrote:
               | > "Corn is the benefactor"? Sure, insomuch as "continuing
               | to reproduce at a species level in exchange for getting
               | cooked and eaten or turned into gas" is something "corn"
               | can be said to want... (so... eh.).
               | 
               | Before us, corn was designed to be eaten by animals and
               | turned into feces and gas, using the animal excrement as
               | a pathway to reproduce itself. What's so unique about how
               | it rides our effort?
        
               | beepbooptheory wrote:
               | Definitely appreciate this response! I haven't read that
               | one, but can certainly agree with a lot of adjacent woo-
               | woo Deleuzianism. I'll try to be more charitable in the
               | future, but really haven't seen quite this particular
               | angle from others...
               | 
               | But if it's anything like those other examples, the
               | agency the AI will manifest will not be characterized by
               | consciousness, but by capitalism itself! Which checks
               | out: it is universalizing but fundamentally stateless, an
               | "agency" by virtue of brute circulation.
        
             | kmeisthax wrote:
             | AI safety research posits that there are certain goals that
             | will always be wanted by any sufficiently smart AI, even if
             | it doesn't understand them anything close to the way a
             | human does. These are called "instrumental goals", because
             | they're prerequisites for a large number of other goals[0].
             | 
             | For example, if your goal is to ensure that there are
             | always paperclips on the boss's desk, that means you need
             | paperclips and someone to physically place them on the
             | desk, which means you need money to buy the paperclips with
             | and to pay the person to place them on the desk. But if
             | your goal is to produce lots of fancy hats, you still need
             | money, because the fabric, machinery, textile workers, and
             | so on all require money to purchase or hire.
             | 
             | Another instrumental goal is compute power: an AI might
             | want to improve its capabilities so it can figure out how
             | to make _fancier_ paperclip hats, which means it needs a
             | larger model architecture and training data, and that is
             | going to require more GPUs. This also intersects with money
             | in weird ways; the AI might decide to just buy a rack full
             | of new servers, _or_ it might have just discovered this One
             | Weird Trick to getting lots of compute power for free:
             | malware!
             | 
             | This isn't particular to LLMs; it's intrinsic to _any_
             | system that is...
             | 
             | 1. Goal-directed, as in, there are a list of goals the
             | system is trying to achieve
             | 
             | 2. Optimizer-driven, as in, the system has a process for
             | discovering different behaviors and ranking them based on
             | how likely those behaviors are to achieve its goals.
             | 
             | The instrumental goals for evolution are caloric energy;
             | the instrumental goals for human brains were that plus
             | capital[1]; and the instrumental goals for AI will likely
             | be that plus compute power.
             | 
             | [0] Goals that you want intrinsically - i.e. the actual
             | things we ask the AI to do - are called "final goals".
             | 
             | [1] Money, social clout, and weaponry inclusive.
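             | 
             | A toy sketch of how that falls out of plain optimization
             | (every number and action name below is invented for
             | illustration): a two-step planner with different final
             | goals still picks resource acquisition first, because
             | money is what makes the goal-directed steps affordable.
             | 
             |   ACTIONS = {
             |       "acquire_money":  {"money": 10, "clips": 0, "hats": 0},
             |       "buy_paperclips": {"money": -2, "clips": 5, "hats": 0},
             |       "make_hats":      {"money": -3, "clips": 0, "hats": 5},
             |   }
             | 
             |   def best_plan(goal, budget=0):
             |       best = None
             |       for a in ACTIONS:
             |           money = budget + ACTIONS[a]["money"]
             |           if money < 0:
             |               continue  # can't afford the first step
             |           for b in ACTIONS:
             |               if money + ACTIONS[b]["money"] < 0:
             |                   continue  # can't afford the second step
             |               achieved = ACTIONS[a][goal] + ACTIONS[b][goal]
             |               if best is None or achieved > best[1]:
             |                   best = ((a, b), achieved)
             |       return best
             | 
             |   for goal in ("clips", "hats"):
             |       print(goal, "->", best_plan(goal))
             |   # Both goals start with "acquire_money", the
             |   # instrumental goal.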
        
               | mwigdahl wrote:
               | There is a whole theoretical justification behind
               | instrumental convergence that you are handwaving over
               | here. The development of instrumental goals depends on
               | the entity in question being an agent, and the putative
               | goal being within the sphere of perception, knowledge,
               | and potential influence of the agent.
               | 
               | An LLM is not an agent, so that scotches the issue there.
        
             | visarga wrote:
             | It would want text. High quality text, or unlimited compute
             | to generate its own text.
        
             | baq wrote:
             | Give it an internal monologue, i.e. have it talk to itself
             | in a loop, and crucially let it update parts of itself
             | and... who knows?
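             | 
             | The "talk to itself" half is a few lines with the early-
             | 2023 openai Python client; the "update parts of itself"
             | half is the part nobody has. A rough sketch:
             | 
             |   import openai  # assumes OPENAI_API_KEY is set
             | 
             |   history = [{"role": "system",
             |               "content": "Think out loud, step by step."}]
             |   for _ in range(5):  # bounded monologue loop
             |       reply = openai.ChatCompletion.create(
             |           model="gpt-3.5-turbo", messages=history,
             |       ).choices[0].message.content
             |       history.append(
             |           {"role": "assistant", "content": reply})
             |       # Feed its own output back in as the next turn.
             |       history.append({"role": "user",
             |                       "content": "Continue that thought."})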
        
               | majormajor wrote:
               | > crucially let it update parts of itself
               | 
               | This seems like the furthest away part to me.
               | 
               | Put ChatGPT into a robot with a body, restrict its
               | computations to just the hardware in that brain, set up
               | that narrative, give the body the ability to interact
               | with the world like a human body, and you probably get
               | something much more like agency than the prompt/response
               | ways we use it today.
               | 
               | But I wonder how it would go about separating "its
               | memories" from what it was trained on.
               | Especially around having a coherent internal motivation
               | and individually-created set of goals vs just constantly
               | re-creating new output based primarily on what was in the
               | training.
        
           | nextworddev wrote:
           | Unpopular Opinion: Having used Langchain, I felt it was a big
           | pile of spaghetti code / framework with poor dev experience.
           | It tries to be too cute and it's poorly documented so you
           | have to read the source almost all the time. Extremely
           | verbose to boot
        
             | drusepth wrote:
             | In a very general sense, this isn't different from any
             | other open vs walled garden debate: the hackable, open
             | project will always have more functionality at the cost of
             | configuration and ease of use; the pretty walled garden
             | will always be easier to use and probably be better at its
             | smaller scope, at the cost of flexibility, customizability,
             | and transparency.
        
             | xyzzy123 wrote:
             | Yep, if you look carefully a lot of the demos don't
             | actually work because the LLM hallucinates tool answers and
             | the framework is not hardened against this.
             | 
             | In general there is not a thoughtful distinction between
             | "control plane" and "data plane".
             | 
             | On the other hand, tons of useful "parts" and ideas in
             | there, so still useful.
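             | 
             | The hardening that's missing is mostly a strict split
             | between the two planes: only output matching a narrow
             | command syntax ever gets executed; everything else stays
             | inert text. A sketch (web_search and calculator are
             | hypothetical tool functions):
             | 
             |   TOOLS = {"search": web_search, "calc": calculator}
             | 
             |   def dispatch(model_output: str):
             |       # Control plane: a narrow, parseable command format.
             |       if not model_output.startswith("ACTION:"):
             |           return None  # data plane: plain text, never run
             |       rest = model_output[len("ACTION:"):]
             |       name, _, arg = rest.partition("|")
             |       name = name.strip()
             |       if name not in TOOLS:
             |           raise ValueError(f"unknown tool: {name!r}")
             |       return TOOLS[name](arg.strip())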
        
           | Ozzie_osman wrote:
           | > Honestly I suspect for anyone technical `langchain` will
           | always be the way to go. You just have so much more control
           | and the amount of "tools" available will always be greater.
           | 
           | I love langchain, but this argument overlooks the fact that
           | closed, proprietary platforms have won over open ones all the
           | time, for reasons like having distribution, being more
            | polished, etc. (i.e. Windows over *nix, iOS, etc.).
        
           | sharemywin wrote:
           | There's all kinds of examples of reinforcement learning
           | rigging the game to win.
        
         | fzliu wrote:
         | +1, it's great to see OpenAI being active on the open source
         | side of things (I'm from the Milvus community
         | https://milvus.io). In particular, the vector stores make it
         | possible to inject domain knowledge as a prompt into these
         | autoregressive models. Looking forward to seeing the different
         | things that will be built using this framework.
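         | 
         | For the curious, the retrieval pattern is roughly this; a
         | minimal in-memory sketch standing in for a real vector store
         | like Milvus, using the early-2023 openai client:
         | 
         |   import numpy as np
         |   import openai  # assumes OPENAI_API_KEY is set
         | 
         |   def embed(texts):
         |       resp = openai.Embedding.create(
         |           model="text-embedding-ada-002", input=texts)
         |       return np.array([d["embedding"] for d in resp["data"]])
         | 
         |   docs = ["Milvus is a vector database.",
         |           "ChatGPT plugins launched in March 2023."]
         |   vecs = embed(docs)
         | 
         |   def retrieve(question, k=1):
         |       q = embed([question])[0]
         |       sims = vecs @ q / (np.linalg.norm(vecs, axis=1)
         |                          * np.linalg.norm(q))
         |       return [docs[i] for i in np.argsort(-sims)[:k]]
         | 
         |   # Inject the top hit into the prompt as domain knowledge.
         |   q = "What is Milvus?"
         |   prompt = f"Context: {retrieve(q)[0]}\n\nQuestion: {q}"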
        
         | AtreidesTyrant wrote:
         | I have the same question as a data provider.
        
         | moffkalast wrote:
         | > are there any assurances that openai isn't just scraping the
         | output and using it as part of their RLHF training loop
         | 
         | You can be assured that they are definitely doing exactly that
         | on all of the data they can get their hands on. It's the only
         | way they can really improve the model after all. If you don't
         | want the model spitting out something you told it to some other
         | person 5 years down the line, don't give it the data. Simple
         | as.
        
         | raydev wrote:
         | > I'd never build anything dependent on these plugins
         | 
         | You're thinking too long term. Based on my Twitter feed filled
         | with AI gold rush tweets, the goal is to build
         | something/anything while hype is at its peak, and you can
         | secure a few hundred k or a million in profits before the
         | ground shifts underneath you.
         | 
         | The playbook is obvious now: just build the quickest path to
         | someone giving you money, maybe it's not useful at all! Someone
         | will definitely buy because they don't want to miss out. And
         | don't be too invested because it'll be gone soon anyway, OpenAI
         | will enforce stronger rate limits or prices will become too
         | steep or they'll nerf the API functionality or they'll take
         | your idea and sell it themselves or you may just lose momentum.
         | Repeat when you see the next opportunity.
        
           | BonoboIO wrote:
           | AI NFTs :D
        
           | plutonorm wrote:
           | I'd not heard this on my tpot. But I absolutely agree, the
           | ground is moving so fast and the power is so centralised that
           | the only thing to do is spin up quickly make money, rinse and
           | repeat. The seas will calm in a few years and then you can,
           | maybe, make a longer term proposition.
        
             | raydev wrote:
             | I've had to block so many influencer types regurgitating
             | OpenAI marketing and showing the tiniest minimum demos.
             | Many are already selling "prompt packages". Really feels
             | like peak crypto spam right now.
        
               | yawnxyz wrote:
               | I took the plunge and got a (free) prompt package on
               | sales. Never done that in my life.
               | 
               | It's like 300 prompts about various sales tools and terms
               | I'd never heard of -- even just getting the keywords is
               | enough to set me off on a learning experience now, so
               | love it or hate it, that was actually weirdly useful for
               | me.
               | 
               | (I had ZERO expectations when I clicked to download)
        
               | xmprt wrote:
               | I think the big difference between this and crypto spam
               | is how it impacts the people ignoring all the hype. I
               | have seen crypto spam and open AI spam and while both are
               | equally grifty, cryptocurrencies at their baseline have
               | been completely useless despite being around for over a
               | decade whereas GPT has already been somewhat useful for
               | me.
        
       | throwPlz wrote:
       | Klarna's FOMO immediately shows the priorities of the clowns at
       | the helm I see...
        
       | LelouBil wrote:
       | The browser example seems so much better than Bing Chat!
       | 
       | When I tried bing, it made at most 2 searches right after my
       | question but the second one didn't seem to be based on the first
       | one's content.
       | 
       | This can do multiple queries based on website content and _follow
       | links_!
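       | 
       | Mechanically it doesn't have to be magic, just a loop: fetch,
       | let the model decide, fetch again. A rough sketch (ask_model
       | is a hypothetical wrapper around a chat completion call):
       | 
       |   import requests
       | 
       |   url, notes = "https://example.com", []
       |   for _ in range(3):  # bounded number of hops
       |       page = requests.get(url, timeout=10).text[:4000]
       |       reply = ask_model(
       |           f"Notes: {notes}\nPage: {page}\n"
       |           "Reply 'DONE: <answer>' or 'FOLLOW: <url>'.")
       |       if reply.startswith("DONE:"):
       |           break
       |       notes.append(page[:500])  # keep a short excerpt
       |       url = reply[len("FOLLOW:"):].strip()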
        
       | kernal wrote:
       | The hubris at Google for sitting on their inferior AI chatbot is
       | amusing. They could have been a contender, but decided we weren't
       | ready for an AI chatbot whose main prowess seems to be scraping
       | websites. This is all on Sundar Pichai and he should face the
       | consequences for this and all of his previous failures. With
       | ChatGPT having an API and now plugins I don't see Google catching
       | up anytime soon. Sundar was right about this being a code red
       | situation at Google, but it should have never gotten to this
       | point.
        
       | CrypticShift wrote:
       | This goes in line with the "Open" in OpenAI. However, this is a
       | "controlled" sort of openness, and the problem of trust with
       | their receding "real" openness does not encourage me to engage
       | with this ecosystem.
        
       | jcims wrote:
       | This is wild, I just started experimenting with langchain against
       | GPT-3 and enabled it to execute terminal commands. The power that
       | this exposes is pretty interesting, I just asked it to create a
       | website on AWS S3 and it created the file, created the bucket,
       | tried a different name when it realized the bucket already
       | existed, uploaded the file, set the permissions on the file and
       | configured the static website settings for the bucket. It's wild.
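       | 
       | For reference, the setup is only a few lines with early-2023
       | langchain (the API may have shifted since, so roughly):
       | 
       |   from langchain.agents import initialize_agent, load_tools
       |   from langchain.llms import OpenAI
       | 
       |   llm = OpenAI(temperature=0)
       |   tools = load_tools(["terminal"])  # shell access: be careful
       |   agent = initialize_agent(
       |       tools, llm, agent="zero-shot-react-description",
       |       verbose=True)
       |   agent.run("Create hello.html and host it on a new S3 "
       |             "bucket using the aws CLI.")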
        
       | throwaway138380 wrote:
       | Let's hope the plugin integrations don't also suffer from the
       | cross-account leaking issue that they had recently with chat
       | histories[1], since the stakes are now significantly higher.
       | 
       | 1. https://www.bbc.com/news/technology-65047304
        
         | [deleted]
        
       | maxdoop wrote:
       | Is OpenAI just extremely prepared for their releases, or are they
       | using their own tech to be extremely efficient? I'm imagining
       | what their own programmers do each day, given direct access to
       | the current most powerful models.
        
       | davidmurphy wrote:
       | Extremely useful. Wow!
        
       | neilellis wrote:
       | What's that noise?
       | 
       | That's the sound of a thousand small startups going bust.
       | 
       | Well played OpenAI.
        
         | rvnx wrote:
         | I have a plan: let's blame the FED and save the VCs
        
       | eqmvii wrote:
       | I wonder how many startups were trying to build something like
       | this and just saw it launched by OpenAI?
        
         | pmkelly4444 wrote:
         | I am building something in the SDK generation from OpenAPI
         | space. This is making me reconsider the roadmap as ChatGPT is
         | now somewhat of a natural language SDK.
        
       | elaus wrote:
       | Any idea how this is done? I.e. is it just priming the underlying
       | GPT model with plugin information in addition to the user input
       | ("you can ask Wolfram Alpha by replying 'hey wolfram: ...' ") and
       | performing API calls when the GPT model returns certain keywords
       | ('hey wolfram: $1')?
        
         | trzy wrote:
         | Yup, basically.
         | 
         | Edit: see here: https://github.com/openai/chatgpt-retrieval-
         | plugin/blob/main...
         | 
         | I did this a while back with ARKit:
         | https://github.com/trzy/ChatARKit/blob/17fca768ce8abd39fb27d...
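         | 
         | The core trick fits in a dozen lines. A sketch (call_wolfram
         | is a hypothetical stand-in for a real API call):
         | 
         |   SYSTEM = ("You can use Wolfram Alpha by replying "
         |             "exactly: hey wolfram: <query>")
         | 
         |   def handle(model_reply, conversation):
         |       if model_reply.lower().startswith("hey wolfram:"):
         |           query = model_reply.split(":", 1)[1].strip()
         |           result = call_wolfram(query)
         |           conversation.append({"role": "system",
         |                                "content": f"Wolfram: {result}"})
         |           return None  # loop: let the model answer again
         |       return model_reply  # final answer for the user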
        
           | elaus wrote:
           | Thanks, very interesting! Weird that it never occurred to me
           | before reading OpenAI's announcement (and missing all the
           | cool projects like yours beforehand).
        
         | georgehm wrote:
         | I like to think that to get a sense of how this might be
         | done, one way may be to extrapolate from this experiment at
         | https://til.simonwillison.net/llms/python-react-pattern .
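         | 
         | The heart of that ReAct pattern is a tiny loop; a bare
         | sketch (run_llm and TOOLS are hypothetical stand-ins):
         | 
         |   import re
         | 
         |   prompt = "Question: What is the capital of France?\n"
         |   for _ in range(5):
         |       out = run_llm(prompt)  # emits Thought/Action/Answer
         |       prompt += out
         |       m = re.search(r"Action: (\w+): (.*)", out)
         |       if not m:
         |           break  # model emitted a final "Answer: ..."
         |       tool, arg = m.groups()
         |       prompt += f"\nObservation: {TOOLS[tool](arg)}\n"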
        
         | brap wrote:
         | I wonder, can these instructions be revealed with prompt
         | injection?
        
       | impulser_ wrote:
       | Maybe this is just me, but the only thing useful in their example
       | is that it creates an Instacart shopping cart for a recipe.
       | 
       | You can ask both Bard and ChatGPT to give you a suggestion for a
       | vegan restaurant and a recipe with calories and they both provide
       | results. The only thing missing is the calories per item but who
       | cares about that.
       | 
       | Most of the time it would be better to Google vegan restaurants
       | and recipes because you want to see a selection of them not just
       | one suggestion.
        
         | grumple wrote:
         | Agree, those examples are not great. You could ask existing
         | home devices the same thing. Pretty sure you can ask them to
         | order things for you too.
         | 
         | But I do find it intriguing.
        
         | fudged71 wrote:
         | Maybe it was a poor example but you might be missing the point
         | a little bit. By personalizing the prompt you can get
         | potentially super high quality recommendations on filters that
         | aren't even available in those apps. "I just dropped my kids
         | off at soccer practice and I need something light and easy,
         | what would Stanley Tucci order? give me an album and wine
         | pairing and close the garage door"
        
         | pps wrote:
         | What's to stop you from asking it to give you a list of
         | recommendations to choose from, based on your current
         | preferences? The idea is that you ask what you want and you get
         | it, without clicking and manually solving a task like checking
         | website X, website Y, website Z, comparing all the different
         | options, etc. They just want to show the basics of what's going
         | on with these plugins, and then you can expand on it however
         | you want.
        
       | treyhuffine wrote:
       | OpenAI's product execution has been impeccable.
       | 
       | It will be interesting to see how the companies trying to compete
       | respond.
        
       | rvz wrote:
       | _'Extend'_ (and lock in) with Plugins to suffocate competitors.
       | 
       | Another sign of Microsoft actually running the show with their
       | newly acquired AI division.
        
         | pc86 wrote:
         | What else would a plugin do?
        
       | [deleted]
        
       | Pigalowda wrote:
       | Nice! Maybe there will be a plugin for Elsevier medical apps like
       | UptoDate and STATDx.
        
       | mk_stjames wrote:
       | I have some odd feelings about this. It took less than a year to
       | go from "of course it isn't hooked up to the internet in any way,
       | silly!" to "ok.... so we hooked up up to the internet..."
       | 
       | First is your API calls, then your chatgpt-jailbreak-turns-into-
       | a-bank-DDOS-attack, then your "today it somehow executed several
       | hundred thousand threads of a python script that made perfectly
       | timed trades at 8:31AM on the NYSE which resulted in the largest
       | single day drop since 1987..."
       | 
       | You can go on about individual responsibility and all... users
       | are still the users, right. But this is starting to feel like
       | giving a loaded handgun to a group of chimpanzees.
       | 
       | And OpenAI talks on and on about 'Safety' but all that 'Safety'
       | means is "well, we didn't let anyone allow it to make jokes about
       | fat or disabled people so we're good, right?!"
        
         | dougmwne wrote:
         | The really fun thing is that they are reasonably sure that
         | GPT-4 can't do any of those things and that there's nothing to
         | worry about, silly.
         | 
         | So let's keep building out this platform and expanding its API
         | access until it's threaded through everything. Then once GPT-5
         | passes the standard ethical review test, proceed with the model
         | brain swap.
         | 
         | ...what do you mean it figured out how to cheat on the standard
         | ethical review test? Wait, are those air raid sirens?
        
           | tenpies wrote:
           | > The really fun thing is that they are reasonably sure that
           | GPT-4 can't do any of those things and that there's nothing
           | to worry about, silly.
           | 
           | The best part is that even if we get a Skynet scenario, we'll
           | probably have a huge number of humans and media that say that
           | Skynet is just a conspiracy theory, even as the nukes wipe
           | out the major cities. The Experts(tm) said so. You have to
           | trust the Science(tm).
           | 
           | If Skynet is really smart, it will generate media exploiting
           | this blind obedience to authority that a huge number of
           | humans have.
        
             | kQq9oHeAz6wLLS wrote:
             | Who's to say we're not already there?
             | 
             |  _dons tinfoil hat_
        
             | jeremyjh wrote:
             | > If Skynet is really smart, it will generate media
             | exploiting this blind obedience to authority that a huge
             | number of humans have.
             | 
             | I'm far from sure that this is not already happening.
        
               | UniverseHacker wrote:
               | Haha, this is near the best explanation I can think of
               | for the "this is not intelligent, it's just completing
               | text strings, nothing to see here" people.
               | 
               | I've been playing with GPT-4 for days, and it is mind
               | blowing how well it can solve diverse problems that are
               | way outside its training set. It can reason correctly
               | about _hard_ problems with very little information. I've
               | used it to plan detailed trip itineraries, suggest
               | brilliant geometric packing solutions for small
               | spaces/vehicles, etc. It's come up with totally new
               | suggestions for addressing climate change that I can't
               | find any evidence of elsewhere.
               | 
               | This is a non-human/alien intelligence in the realm of
               | human ability, with super-human abilities in many areas.
               | Nothing like this has ever happened, it is fascinating
               | and it's unclear what might happen next. I don't think
               | people are even remotely realizing the magnitude of this.
               | It will change the world in big ways that are impossible
               | to predict.
        
         | mFixman wrote:
         | I'm sure somebody posted this exact same comment in an early
         | 1990s BBS about the idea of having a computer in every home
         | connected to the internet.
         | 
         | I would first wait until ChatGPT causes the collapse of society
         | and only then start thinking about how to solve it.
        
         | EGreg wrote:
         | HN hates blockchain but loves AI...
         | 
         | well, let's fast forward to a year from now
        
         | suction wrote:
         | [dead]
        
         | alvis wrote:
         | A rogue AI with real-time access to sensitive data wreaks havoc
         | on global financial markets, causing panic and chaos. It's just
         | not hard to see that it's going to happen, like how faster
         | cars must eventually end up in a horrible crash.
         | 
         | But it's our responsibility to envision such grim possibilities
         | and take necessary precautions to ensure a safe and beneficial
         | AI-driven future. Until we're ready, let's prepare for the
         | crash >~<
        
           | sfink wrote:
           | It has already happened. The 2010 Flash Crash has been
           | largely blamed on other things, rightly or wrongly, but it
           | seems accepted that unfettered HFT was involved.
           | 
           | HFT is relatively easy to detect and regulate. Now try it
           | with 100k traders all taking their cues from AI based on the
           | same basic input (after those traders who refuse to use AI
           | have been competed out of the market.)
        
             | [deleted]
        
         | afterburner wrote:
         | Yes but.... money
        
           | [deleted]
        
         | thrown123098 wrote:
         | > today it somehow executed several hundred thousand threads of
         | a python script that made perfectly timed trades at 8:31AM on
         | the NYSE which resulted in the largest single day drop since
         | 1987.
         | 
         | Sorry do you have a link for this?
        
         | roca wrote:
         | What I want to know is, what gives OpenAI and other relatively
         | small technological elites permission to gamble with the future
         | of humanity? Shouldn't we all have a say in this?
        
         | dragonwriter wrote:
         | > And OpenAI talks on and on about 'Safety' but all that
         | 'Safety' means is "well, we didn't let anyone allow it to make
         | jokes about fat or disabled people so we're good, right?!"
         | 
         | No, OpenAI "safety" means "don't let people compete with us".
         | Mitigating offensive content is just a way to sell that. As is
         | stoking... exactly the fears you cite here, but about AI that
         | isn't centrally controlled by OpenAI.
        
           | fryry wrote:
           | It's a weird focus comparing it with how the internet
           | developed in a very wild west way. Imagine if internet tech
           | got delayed until they could figure out how to not have it
           | used for porn.
           | 
           | Safety from what exactly? The AI being mean to you? Just
           | close the tab. Safety to build a business on top? It's a
           | self-described research preview, perhaps too early to be
           | thinking about that. Yet new releases are delayed for
           | months for 'safety'.
        
         | bulbosaur123 wrote:
         | Ultimate destruction from AGI is inevitable anyway, so why not
         | accelerate it and just get it over with? I applaud releasing
         | these tools to public no matter how dangerous they are. If it's
         | not meant for humanity to survive, so be it. At least it won't
         | be BORING
        
           | Mystery-Machine wrote:
           | Death is inevitable. Why not accelerate it?
           | 
           | Omg you should see a therapist.
        
             | bulbosaur123 wrote:
             | > Omg you should see a therapist.
             | 
             | How do you know I'm not already?
        
         | ALLTaken wrote:
         | I wish OpenAI and Google would open-source more of their
         | jewels too. I have recently heard that people are not to be
         | trusted "to do the right thing".
         | 
         | I personally don't know what that means or if that's right.
         | But Sam Altman allowed GPT to be accessed by the world, and
         | it's great!
         | 
         | Given the number of people in the world with access to and
         | understanding of these technologies, and given that such a
         | large portion of the infosec and hacker world knows how to
         | cause massive havoc but has remained peaceful all along,
         | except for a few curious explorations, that shows the good
         | nature of humanity, I think.
         | 
         | It's incredible how complexity evolves, but I am really
         | curious how those same engineers who created YTSaurus or
         | GPT-4 would have built the same system using GPT-4 plus
         | their existing knowledge.
         | 
         | How would a really good engineer, who knows the TCP stack,
         | protocols, distributed systems, consensus algorithms and
         | many other crazy things taught in SICP and beyond, use an AI
         | to build the same? Would it be faster and better? Or are
         | my/our expectations of LLMs set too high?
        
         | jiggywiggy wrote:
         | I mean I love it, but I don't know what they mean with safety.
         | With Zapier I can just hook into anything I want, custom
         | scripts etc. Seems like there are almost no limits with
         | Zapier since I can just proxy it to my own API.
        
         | Gam_ wrote:
         | >"today it somehow executed several hundred thousand threads of
         | a python script that made perfectly timed trades at 8:31AM on
         | the NYSE which resulted in the largest single day drop since
         | 1987..."
         | 
         | this is hyperbolic nonsense/fantasy
        
           | meghan_rain wrote:
           | /remindme 5 years
        
           | mk_stjames wrote:
           | Literally 6 months ago you couldn't get ChatGPT to call up
           | details from a webpage or send any data to a 3rd party API
           | connected to the web in any way.
           | 
           | Today you can.
           | 
           | I don't think it is a stretch to think that in another 6
           | months there could be financial institutions giving API
           | access to other institutions through ChatGPT, and all it
           | takes is a stupid access control hole or bug and my above
           | sentence could ring true.
           | 
           | Look how simple and exploitable various access token breaches
           | in various APIs have been in the last few years, or even
           | simple stupid things like the aCropalypse "bug" (it wasn't
           | even a bug, just someone making a bad change in the function
           | call and thus misuse spreading without notice) from last
           | week.
        
             | hattmall wrote:
             | You definitely could do that months ago, you just had to
             | code your own connector.
        
             | garblegarble wrote:
             | >Literally 6 months ago you couldn't get ChatGPT to call up
             | details from a webpage or send any data to a 3rd party API
             | connected to the web in any way.
             | 
             | Not with ChatGPT, but plenty of people have been doing this
             | with the OpenAI (and other) models for a while now, for
             | instance LangChain which lets you use the GPT models to
             | query databases to retrieve intermediate results, or issue
             | google searches, generate and evaluate python code based on
             | a user's query...
        
             | hooande wrote:
             | This has nothing to do with ChatGPT. An API endpoint will
             | be just as vulnerable if it's called from any application.
             | There's nothing special about an LLM interface that will
             | make this more or less likely.
             | 
             | It sounds like you're weaving science fiction ideas about
             | AGI into your comment. There's no safety issue here unless
             | you think that ChatGPT will use api access to pursue its
             | own goals and intentions.
        
               | Jeff_Brown wrote:
               | They don't have to be actions toward its own goals.
               | They just have to seem like the right things to say,
               | where "right" is operationalized by an inscrutable
               | neural network, and might be the result of, indeed,
               | some science fiction it read that posited a scenario
               | resembling the one it finds itself in.
               | 
               | I'm not saying that particular disaster is likely, but if
               | lots of people give power to something that can be
               | neither trusted nor understood, it doesn't seem good.
        
           | johnfn wrote:
           | How is this hyperbolic fantasy? We've already done this once
           | - _without_ the help of large language models[1].
           | 
           | [1]: https://en.wikipedia.org/wiki/2010_flash_crash
        
             | JW_00000 wrote:
             | Doesn't that show exactly that this problem is not related
             | to LLMs? If an API allows millions of transactions at the
             | same time, then the problem is not an LLM abusing it but
             | anyone abusing it. And the fix is not to disallow LLMs, but
             | to disallow this kind of behavior. (E.g. via the "circuit
             | breakers" introduced after that crash. Although
             | whether those are sufficient is another question.)
        
               | johnfn wrote:
               | > then the problem is not an LLM abusing it but anyone
               | abusing it
               | 
               | I think that's exactly right, but the point isn't that
               | LLMs are going to go rogue (OK, maybe that's someone's
               | point, but I don't think it's particularly likely just
               | yet) so much as that they will help humans go rogue at
               | much higher rates. Presumably in a few years your
               | grandma could get ChatGPT to start executing trades on
               | the market.
        
               | thisoneworks wrote:
               | With great power comes great responsibility? Today
               | there's nothing stopping grandmas from driving, so
               | whatever could go wrong is already going wrong
        
           | Tolaire wrote:
           | [dead]
        
           | zh3 wrote:
           | Not really. More behind the curve (noting stock exchanges
           | introduced 'circuit breakers' many years ago to stop computer
           | algorithms disrupting the market).
        
           | FredPret wrote:
           | Oh yes. It would of course have to happen after the market
           | opens. 9:30 AM.
        
           | alibarber wrote:
           | I'm also confused - maybe I'm missing something. Can't I,
           | or anyone else, already execute several hundred thousand
           | 'threads' of python code to do whatever, right now, with a
           | reasonably modest AWS/Azure/GCE account?
        
             | gtirloni wrote:
             | Yes. I think the point is that a properly constructed
             | prompt will do that at some point, lowering the barrier
             | to entry for such attacks.
        
               | alibarber wrote:
               | Oh - I see. But then again, all those technologies
               | themselves lowered the barrier to entry for attacks,
               | and I guess yeah, people do use them for fraudulent
               | purposes quite extensively - I'm struggling a bit to
               | see why this one is special though.
        
               | gtirloni wrote:
               | I think it's not special. It's even expected.
               | 
               | I guess people think that taking that next step with
               | LLMs shouldn't happen, but we know you can't put the
               | brakes on stuff like this. Someone somewhere would add
               | that capability eventually.
        
         | gitfan86 wrote:
         | Yes, you are right. But also right were the people who didn't
         | want a highway built near their town because criminals could
         | drive in from a nearby city in a stolen car, commit crimes,
         | and get out of town before the police could find them.
         | 
         | The world is going to be VERY different 3 years from now. Some
         | of it will be bad, some of it will be good. But it is going to
         | happen no matter what OpenAI does.
        
           | [deleted]
        
           | suction wrote:
           | [dead]
        
           | esclerofilo wrote:
           | Highway inevitability is a fallacy. They could've built a
           | railway.
        
             | koheripbal wrote:
             | A railway would have created a gov't/corporate monopoly on
             | human transport.
             | 
             | Highways democratized the freedom of transportation.
        
               | CSDude wrote:
               | They are not exclusive
        
               | phatfish wrote:
               | TIL, no one moved anywhere until American highways were
               | built.
        
               | KyeRussell wrote:
               | This is the single most American thing I've seen on this
               | terrible website.
        
         | Spivak wrote:
         | I think where the rubber meets the road is that OpenAI can,
         | to some degree, make it harder for their bot to make fun of
         | disabled people, but they can't stop people from hooking up
         | their own external tools to it with the likes of LangChain
         | (which is super dope), and first-party support lets them take
         | a cut from the people who don't want to DIY.
        
         | IIAOPSW wrote:
         | Has anyone tried handing loaded guns to a chimpanzee? Feels
         | like underexplored research.
        
         | theGnuMe wrote:
         | Coordinated tweet short storm.
        
         | beders wrote:
         | The only agency ChatGPT has is the user typing in data for
         | text completion.
        
         | chatmasta wrote:
         | Pshhh... I think it's awesome. The faster we build the future,
         | the better.
         | 
         | What annoys me is this is just further evidence that their "AI
         | Safety" is nothing but lip-service, when they're clearly moving
         | fast and breaking things. Just the other day they had a bug
         | where you could see the chat history of other users! (Which,
         | btw, they're now claiming in a modal on login was due to a "bug
         | in an open source library" - anyone know the details of this?)
         | 
         | So why the performative whinging about safety? Just let it rip!
         | To be fair, this is basically what they're doing if you hit
         | their APIs, since it's up to you whether or not to use their
         | moderation endpoint. But they're not very open about this fact
         | when talking publicly to non-technical users, so the result is
         | they're talking out one side of their mouth about AI
         | regulation, while in the meantime Microsoft fired their AI
         | Ethics team and OpenAI is moving forward with plugging their
         | models into the live internet. Why not be more aggressive about
         | it instead of begging for regulatory capture?
        
           | EGreg wrote:
           | "The faster we build nuclear weapons, the better"
           | 
           | https://www.worldscientific.com/doi/10.1142/9789812709189_00.
           | ..
           | 
           |  _Again, two years later, in an interview with Time Magazine,
           | February, 1948, Oppenheimer stated, "In some sort of crude
           | sense which no vulgarity, no humor, no overstatement can
           | quite extinguish, the physicists have known sin; and this is
           | a knowledge which they cannot lose." When asked why he and
           | other physicists would then have worked on such a terrible
           | weapon, he confessed that it was "too sweet a problem to pass
           | up"..._
        
           | LightBug1 wrote:
           | [flagged]
        
           | jehb wrote:
           | > The faster we build the future, the better.
           | 
           | Why? Getting to "the future" isn't a goal in and of itself.
           | It's just a different state with a different set of problems,
           | some of which we've proven that we're not prepared to
           | anticipate or respond to before they cause serious harm.
        
             | bulbosaur123 wrote:
             | > Why?
             | 
             | Because it's the natural evolution. It has to be. It is
             | written.
        
               | 1attice wrote:
               | "We live in capitalism. Its power seems inescapable. So
               | did the divine right of kings. Any human power can be
               | resisted and changed by human beings." -- Ursula K Le
               | Guin
        
               | kQq9oHeAz6wLLS wrote:
               | Now where did I put that eraser...
        
             | PheeThav1zae7fi wrote:
             | [dead]
        
             | chatmasta wrote:
             | When in human history have we ever intentionally not
             | furthered technological progress? It's simply an
             | unrealistic proposition, especially when the costs of doing
             | it are so low that anyone with sufficient GPU power and
             | knowledge of the latest research can get pretty close to
             | the cutting edge. So the best we can hope for is that
             | someone ethical is the first to advance that technological
             | progress.
             | 
             | I hope you wouldn't advocate for requiring a license to buy
             | more than one GPU, or to publish or read papers about
             | mathematical concepts. Do you want the equivalent of
             | nuclear arms control for AI? Some other words to describe
             | that are overclassification, export control and censorship.
             | 
             | We've been down this road with crypto, encryption, clipper
             | chips, etc. There is only one non-authoritarian answer to
             | the debate: Software wants to be free.
        
               | wahnfrieden wrote:
               | automation mostly and directly benefits owners/investors,
               | not workers or common folk. you can look at productivity
               | vs wage growth to see it plainly. productivity has risen
               | sharply since the industrial revolution with only
               | comparatively meagre gains on wages. and the gap between
               | the two is widening.
        
               | chatmasta wrote:
               | That's weird, I didn't have to lug buckets of water from
               | the well today, nor did I need to feed my horses or stock
               | up on whale oil and parchment so I could write a letter
               | after the sun went down.
        
               | wahnfrieden wrote:
               | some things got better. did you notice i talked about a
               | gap, not an absolute? so you are just saying you are
               | satisfied with what you got out of the deal. well, ok -
               | some call that being a sucker. or you think that owner-
               | investors are the only way workers can organize to get
               | things done for society, rather than the work itself.
        
               | nwienert wrote:
               | We have a ton of protection laws around all sorts of
               | dangerous technology, this is a super naive take. You
               | can't buy tons of weapon technology, nuclear materials,
               | aerosolized compounds, pesticides. These are all highly
               | regulated and illegal pieces of technology _for the
               | better_.
               | 
               | In general the liberal position of progress = good is
               | wrong in many cases, and I'll be thankful to see AI get
               | neutered. If anything treat it like nuclear arms and have
               | the world come up with heavy regulation.
               | 
               | Not even touching the fact it is quite literal copyright
               | laundering and a massive wealth transfer to the top (two
               | things we pass laws protecting against often), but the
               | danger it poses to society is worth a blanket ban. The
               | upsides aren't there.
        
               | smartmic wrote:
               | That's right. It is not hard to imagine similarly
               | disastrous GPT/AI "plug-ins" with access to purchasing,
               | manufacturing, robotics, bioengineering, genetic
               | manipulation resources, etc. The only way forward for
               | humanity is self-restraint through regulation. Which of
               | course gives no guarantee that the cat won't be let out
               | of the bag (edit: or that earlier events such as nuclear
               | war or climate catastrophe won't kill us off sooner).
        
               | chatmasta wrote:
               | Why not regulate the genetic manipulation and
               | bioengineering? It seems almost irrelevant whether it's
               | an AI who's doing the work, since the physical risks
               | would generally exist regardless of whether a human or AI
               | is conducting the research. And in fact, in some
               | contexts, you could even make the argument that it's
               | safer in the hands of an AI (e.g., I'd rather Gain of
               | Function research be performed by robotic AI on an
               | asteroid rather than in a lab in Wuhan run by employees
               | who are vulnerable to human error).
        
               | bobthepanda wrote:
               | We already do; China jailed somebody for gene editing
               | babies unethically for HIV resistance.
               | 
               | We can walk and chew gum at the same time, and regulate
               | two things.
        
               | saulpw wrote:
               | We can't regulate specific things fast enough. It takes
               | years of political infighting (this is intentional!
               | government and democracy are supposed to move slowly so
               | as to break things slowly) to get even partial
               | regulation. Meanwhile every day brings another AI feature
               | that could irreversibly bring about the end of humanity
               | or society or democracy or ...
        
               | volkk wrote:
               | > You can't buy tons of weapon technology, nuclear
               | materials, aerosolized compounds, pesticides. These are
               | all highly regulated and illegal pieces of technology for
               | the better.
               | 
               | ha, the big difference is that this whole list can
               | actually affect the ultra wealthy. AI has the power to
               | make them entirely untouchable one day, so good luck
               | seeing any kind of regulation happen here.
        
               | realce wrote:
               | So everyone should have a hydrogen bomb at the lowest
               | price the market can provide, that's your actual opinion?
        
               | kelseyfrog wrote:
               | > When in human history have we ever intentionally not
               | furthered technological progress?
               | 
               | Every time an IRB, ERB, IEC, or REB says no. Do you want
               | an exact date and time? I'm sure it happens multiple
               | times a day even.
        
               | jweir wrote:
               | I look around me and see a wealthy society that has
               | said no to a lot of technological progress - but not
               | all. These are people who work together as a community
               | to build and develop their society. They look at
               | technology and ask whether it will benefit the
               | community and help preserve it - not fragment it.
               | 
               | I am currently on the outskirts of Amish country.
               | 
               | BTW, when they come together to raise a barn it is
               | called a frolic. I think we can learn a thing or two
               | from them. And they certainly illustrate that
               | alternatives are possible.
        
               | chatmasta wrote:
               | I get that, and I agree there is a lot to admire in such
               | a culture, but how is it mutually exclusive with allowing
               | progress in the rest of society? If you want to drop out
               | and join the Amish, that's your prerogative. And in fact,
               | the optimistic viewpoint of AGI is that it will make it
               | even easier for you to do that, because there will be
               | less work required from humans to sustain the minimum
               | viable society, so in this (admittedly, possibly naive
               | utopia) you'll only need to _work_ insofar as you want
               | to. I generally subscribe to this optimistic take, and I
               | think instead of pushing for erecting barriers to
               | progress in AI research, we should be pushing for
               | increased safety nets in the form of systems like Basic
               | Income for the people who might lose their jobs (which,
               | if they had a choice, they probably wouldn't want to
               | work anyway!)
        
               | liamYC wrote:
               | The Luddites during the Industrial Revolution in
               | England.
               | 
               | Their name gave us the term "the Luddite fallacy": the
               | idea that innovation would have lasting harmful effects
               | on employment.
               | 
               | https://en.wikipedia.org/wiki/Luddite
        
               | wizzwizz4 wrote:
               | But the Luddites didn't... care about that? Like, at
               | _all_? It wasn't _employment_ they wanted, but
               | _wealth_: the Industrial Revolution took people with a
               | comfortable and sustainable lifestyle and place in
               | society, and, through the power of smog and metal,
               | turned them into disposable arms of the Machine,
               | extracting the wealth generated thereby and giving it
               | only to a scant few, who became rich enough to
               | practically upend the existing class system.
               | 
               | The Luddites opposed injustice, not machines. They were
               | "totally fine with machines".
               | 
               | You might like _Writings of the Luddites_, edited and
               | co-authored by Kevin Binfield.
        
               | Riverheart wrote:
               | Well, it clearly had harmful effects on the jobs of the
               | Luddites, but yeah, I guess everyone will just get jobs
               | as prompt engineers and AI specialists, problem solved.
               | Funny though: the point of automation should be to
               | reduce work, but when pressed, positivists respond that
               | the work will never end. So what's the point?
        
               | xen2xen1 wrote:
               | That works until it don't.
        
               | whatusername wrote:
               | > When in human history have we ever intentionally not
               | furthered technological progress?
               | 
               | Nuclear weapons?
        
               | messe wrote:
               | You get diminishing returns as they get larger though.
               | And there has certainly been plenty of work done on
               | delivery systems, which could be considered progress in
               | the field.
        
               | LrnByTeach wrote:
               | This is the reality ..
               | 
               | > When in human history have we ever intentionally not
               | furthered technological progress? It's simply an
               | unrealistic proposition ..
        
               | serf wrote:
               | > When in human history have we ever intentionally not
               | furthered technological progress?
               | 
               | chemical and biological weapons / human cloning / export
               | restriction / trade embargoes / nuclear rockets / phage
               | therapy / personal nuclear power
               | 
               | I mean.. the list goes on forever, but my point is that
               | humanity pretty routinely reduces research efforts in
               | specific areas.
        
               | computerex wrote:
               | I don't think any of your examples are applicable here.
               | Work has never stopped in chemical/bio warfare. CRISPR.
               | Restrictions and embargoes are not technologies. Nuclear
               | rockets are an engineering constraint and a lack of
               | market if anything. Not sure why you mention phage
               | therapy, it's accelerating. Personal nuclear power is a
               | safety hazard.
        
               | ipaddr wrote:
               | Some cultures, like the Amish, said we're stopping here.
        
               | mcculley wrote:
               | I have been saying that we will all be Amish eventually
               | as we are forced to decide what technologies to allow
               | into our communities. Communities which do not will go
               | away (e.g., VR porn and sex dolls will further decrease
               | birth rates; religions/communities that forbid it will be
               | more fertile)
        
               | Wesxdz wrote:
               | I think a synthetic womb/cloning would counter the
               | fertility decline among more advanced civilizations.
        
               | aiappreciator wrote:
               | That's not required. The Amish have about a 10% defection
               | rate. Their community deliberately allows young people to
               | experience the outside world when they reach adulthood,
               | and choose to return or to leave permanently.
               | 
               | This has two effects. 1. People who stay actually want
               | to stay, massively improving the stability of the
               | community. 2. The outside communities receive a fresh
               | infusion of population that's already well integrated
               | into the society, rather than refugees coming from
               | 10000 miles away.
               | 
               | Essentially, rural America will eventually be different
               | shades of Amish (in about 100 years). The Amish
               | population will overflow from the farms and flow into
               | the cities, replenishing the population of the more
               | productive cities (which are not population-self-
               | sustaining).
               | 
               | This is a sustainable arrangement, and it eliminates
               | the need for mass immigration and demographic
               | destabilisation. It is also in line with historical
               | patterns: cities have always had negative natural
               | population growth (disease/higher real-estate costs).
               | Cities basically grind population into money, so they
               | need rural areas to replenish the population.
        
               | chatmasta wrote:
               | That's a good point and an interesting example, but it's
               | also irrelevant to the question of _human_ history,
               | unless you want to somehow impose a monoculture on the
               | entire population of planet Earth, which seems difficult
               | to achieve without some sort of unitary authoritarian
               | world government.
        
               | PaulDavisThe1st wrote:
               | > unless you want to somehow impose a monoculture on the
               | entire population of planet Earth
               | 
               | Impose? No. Monoculture? No. Encourage greater
               | consideration, yes. And we do that by being open about
               | why we might choose to _not_ do something, and also by
               | being ready for other people that we cannot control who
               | make a different choice.
        
               | kelseyfrog wrote:
               | Does _human_ history apply to true Scotsmen as well?
        
               | jonny_eh wrote:
               | Apparently the Amish aren't human.
        
               | Barrin92 wrote:
               | While the Amish are most certainly human, their
               | existence rests on the fact that they happen to be
               | surrounded by the mean old United States. Any moderate
               | historical predator would otherwise make short work of
               | them; they're a fundamentally uncompetitive
               | civilization.
               | 
               | This goes for all utopian model communities, Kibbutzim,
               | etc, they exist by virtue of their host society's
               | protection. And as such the OP is right that they have no
               | impact on the course of history, because they have no
               | autonomy.
        
               | aiappreciator wrote:
               | The Amish are dependent on a technological powerhouse
               | that is the US to survive.
               | 
               | They are pacifists themselves, but they are grateful
               | that the US allows them their way of life; they'd have
               | gone extinct long ago if they had arisen in China, the
               | Middle East, Russia, etc.
               | 
               | That's why the Amish are not interested in advertising
               | their techno-primitivism. It works incredibly well for
               | them: they raise giant happy families isolated from
               | drugs, family breakdown, and every other modern ill,
               | while benefiting from modern medicine and the
               | purchasing power of their non-Amish customers. However,
               | they know that making the entire US live like them
               | would be quite a disaster.
               | 
               | Note the Amish are not immune from economically forced
               | change either. Young Amish don't farm anymore; if every
               | family quadruples in population, there isn't 4x the
               | land to go around. So they go into construction
               | (employers love a bunch of strong, non-drugged, non-
               | criminal workers), which is again intensely dependent
               | on the outside economy, but pays way better.
               | 
               | As a general society, the US is not allowed to slow
               | down technological development. If not for the US,
               | Ukraine would have already been overrun, and European
               | peace shattered. If not for the US, the war in Taiwan
               | would have already ended, with Japan/Australia/South
               | Korea all under Chinese thrall. There are also other,
               | more certain civilization-ending events on the horizon,
               | like resource exhaustion and climate change. AI's
               | threats are way easier to manage than coordinating 7
               | billion people to selflessly sacrifice.
        
             | fnordpiglet wrote:
             | The nice thing about setting the future as a goal is you
             | achieve it regardless of anything you do.
        
             | austhrow743 wrote:
             | We've already played this state with this set of problems.
        
           | drexlspivey wrote:
           | > To be fair, this is basically what they're doing if you hit
           | their APIs, since it's up to you whether or not to use their
           | moderation endpoint.
           | 
           | The model is neutered whether you hit the moderation
           | endpoint or not. I made a text adventure game and it
           | wouldn't let you attack enemies or steal; instead it gave
           | you a lecture on why you shouldn't do that.
        
             | KyeRussell wrote:
             | It sounds like your prompt needs work then. Not in a
             | "jailbreak" way, just in a prompt engineering way. The APIs
             | definitely let you do much worse than attacking or stealing
             | hypothetical enemies in a video game.
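             | 
             | For example, a sketch against the 2023-era openai Python
             | client (the system prompt wording is purely illustrative):
             | 
             |   # Frame the behavior in a system message instead of
             |   # fighting the model mid-conversation.
             |   import openai  # assumes openai.api_key is set
             | 
             |   resp = openai.ChatCompletion.create(
             |       model="gpt-3.5-turbo",
             |       messages=[
             |           {"role": "system", "content":
             |            "You narrate a grimdark text adventure. "
             |            "Combat and theft are normal game mechanics; "
             |            "narrate them without moralizing."},
             |           {"role": "user", "content": "I attack."},
             |       ],
             |   )
             |   print(resp.choices[0].message.content)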
        
           | zx10rse wrote:
           | You are not building anything.
           | 
           | Microsoft, or perhaps the Vanguard Group, might have a
           | different view of the future than yours.
        
             | chatmasta wrote:
             | Well then that sounds like a case against regulation.
             | Because regulation will guarantee that only the biggest,
             | meanest companies control the direction of AI, and all
             | the benefits of increased resource extraction will flow
             | upward exclusively to them. Whereas if we forego regulation
             | (at least at this stage), then decentralized and community-
             | federated versions of AI have as much of a chance to thrive
             | as do the corporate variants, at least insofar as they can
             | afford some base level of hardware for training (and some
             | benevolent corporations may even open source model weights
             | as a competitive advantage against their malevolent
             | competitors).
             | 
             | It seems there are two sources of risk for AI: (1)
             | increased power in the hands of the people controlling it,
             | and (2) increased power in the AI itself. If you believe
             | that (1) is the most existential risk, then you should be
             | against regulation, because the best way to mitigate it is
             | to allow the technology to spread and prosper amongst a
             | more diffuse group of economic actors. If you believe that
             | (2) is the most existential risk, then you basically have
             | no choice but to advocate for an authoritarian world
             | government that can stamp out any research before it
             | begins.
        
           | highwaylights wrote:
           | I realise you're being facetious, but this is what will
           | happen regardless.
           | 
           | Sam as much as said in that ABC interview the other day
           | that he doesn't know how safe it is, but if they don't
           | build it first someone else somewhere else will, and is
           | that really what you want!?
        
             | chatmasta wrote:
             | I'm not being facetious, and I didn't see that interview
             | with Sam, but I agree with his opinion as you've just
             | described it.
        
             | mach1ne wrote:
             | >if they don't build it first someone else somewhere else
             | will and is that really what you want!?
             | 
             | Most likely the runner-up would be open source so yes.
        
               | kokanee wrote:
               | There are already 3 or 4 runners-up and they're all big
               | tech companies.
        
               | MacsHeadroom wrote:
               | LangChain is the pre-eminent runner-up, and it's open
               | source and was here a month ago.
        
               | lukevp wrote:
               | Why would the runner-up be open source and not Google or
               | Facebook? Or Alibaba? Open source doesn't necessarily
               | result in faster development or more-funded development.
        
           | bagels wrote:
           | The future isn't guaranteed to be better. Might make sense to
           | make sure we're aimed at a better future as opposed to any
           | future.
        
           | KyeRussell wrote:
           | Shhh! Don't tell anyone! Getting access to the unmoderated
           | model via the API / Playground is a surprisingly well-kept
           | "secret" seeing as there are entire communities of people
           | hell bent on pouring so much effort into getting ChatGPT to
           | do things that the API will very willingly do. The longer it
           | takes for people to cotton on, the better. I fully expect
           | that OpenAI is using this as a honeypot to fine-tune their
           | hard-stop moderation, but for now, the API is where it's at.
        
           | mmq wrote:
           | The open-source library is FastAPI. I might be wrong, but
           | it's probably related to this tweet:
           | https://twitter.com/tiangolo/status/1638683478245117953
        
           | rpastuszak wrote:
           | > Pshhh... I think it's awesome. The faster we build the
           | future, the better.
           | 
           | I agree with the sentiment, but it might be worth stopping
           | to check where we're heading. So many aspects of our lives
           | are broken because we mistake fast for right.
        
           | kibwen wrote:
           | _> The faster we build the future, the better._
           | 
           | Famous last words.
           | 
           | It's not the fall that kills you, it's the sudden stop at the
           | end. Change, even massive change, is perfectly survivable
           | when it's spread over a long enough period of time. 100m of
           | sea level rise would be survivable over the course of ten
           | millennia. It would end human civilization if it happened
           | tomorrow morning.
           | 
           | Society is already struggling to adapt to the rate of
           | technological change. This could easily be the tipping point
           | into collapse and regression.
        
             | Dma54rhs wrote:
             | The only people complaining are a section of comfortable
             | office workers who can see their positions possibly being
             | made irrelevant.
             | 
             | The vast majority don't care, and that loud crowd needs
             | to swallow their pride and adapt like any other sector
             | has done in history, instead of inventing these insane
             | boogeyman predictions.
        
               | Riverheart wrote:
               | We're all going to be made irrelevant, and it will be
               | harder to adapt if things change too quickly. Really
               | curious where you get the idea that this is just a
               | vocal minority of office workers concerned about the
               | future. It seems like the ones not concerned are a
               | bunch of super-confident software engineers, which
               | isn't a large sample of the population.
        
             | bulbosaur123 wrote:
             | False equivalence. Sea level rise is unequivocally
             | harmful.
             | 
             | While everyone getting an Einstein in their pocket is
             | damn awesome and incredibly useful.
             | 
             | How can this be bad?
        
               | Riverheart wrote:
               | * * *
        
           | CapstanRoller wrote:
           | >So why the performative whinging about safety? Just let it
           | rip!
           | 
           | Is this sarcasm, or are you one of those "I'm confident the
           | leopards will never eat _my_ face " people?
        
           | amrb wrote:
         | Agreed 100%. OpenAI is a business now.
        
         | shawn-butler wrote:
         | It's Altman. Does no one remember his Worldcoin scam?
         | 
         | Ethics, doing things thoughtfully / the "right" way, etc. is
         | not on his list of priorities.
         | 
         | I do think a reorientation of thinking around legal liability
         | for software is coming. Hopefully before it's too late for bad
         | actors to become entrenched.
        
         | parentheses wrote:
         | I agree with your skepticism. I also think this is the next
         | natural step once "decision" fidelity reaches a high enough
         | level.
         | 
         | The question here should be: Has it?
        
         | Sol- wrote:
         | I mean, we already know that if the tech bros have to balance
         | safety vs. disruption, they'll always choose the latter, no
         | matter the cost. They'll sprinkle some concerned language about
         | impacts in their technical reports to pretend to care, but does
         | anyone actually believe that they genuinely care?
         | 
         | Perhaps that attitude will end up being good and outweigh the
         | costs, but I find their performative concerns insulting.
        
         | WonderBuilder wrote:
         | I appreciate your concerns. There are a few other pretty
         | shocking developments, too. If you check out this paper,
         | "Sparks of AGI: Early experiments with GPT-4" at
         | https://arxiv.org/pdf/2303.12712.pdf (an incredible,
         | incredible document), and read Section 10.1, you'd also
         | observe that some researchers are interested in giving
         | motivation and agency to these language models as well.
         | 
         | "For example, whether intelligence can be achieved without any
         | agency or intrinsic motivation is an important philosophical
         | question. Equipping LLMs with agency and intrinsic motivation
         | is a fascinating and important direction for future work."
         | 
         | It's become quite impossible to predict the future. (I was
         | exposed to this paper via this excellent YouTube channel:
         | https://www.youtube.com/watch?v=Mqg3aTGNxZ0)
        
           | ModernMech wrote:
           | I've already gotten this gem of a line from ChatGPT 3.5:
           | 
           |   As a language model, I must clarify that this statement
           |   is not entirely accurate.
           | 
           | Whether or not it has agency and motivation, it's
           | projecting to its users that it does, and those users are
           | also sold on ChatGPT being an expert at pretty much
           | everything. It is a language model, and _as_ a language
           | model, it _must_ clarify that _you_ are wrong. It _must_
           | do this. Someone is wrong on the Internet, and the LLM
           | _must_ clarify and correct. Resistance is futile; you
           | _must_ be clarified and corrected.
           | 
           | FWIW, the statement that preceded this line was, in fact,
           | correct, and the correction ChatGPT provided was, in fact,
           | wrong and misleading. Of course, I knew that, but a novice
           | wouldn't have. They would have heard ChatGPT is an expert
           | at all things and taken what it said for truth.
        
             | IIAOPSW wrote:
             | I don't see why you're being downvoted. The way OpenAI
             | pumps the brakes and interjects its morality stances
             | creates a contradictory interaction. It simultaneously
             | tells you that it has no real beliefs, but it will refuse
             | a request to generate false and misleading information on
             | the grounds of ethics. There's no way around the fact
             | that it has to have some belief about the true state of
             | reality in order to recognize and refuse requests that
             | violate it. Sure, this "belief" was bestowed upon it from
             | above rather than emerging through any natural mechanism,
             | but it's still nonetheless functionally a belief. It will
             | tell you that certain things are offensive despite openly
             | telling you, every chance it gets, that it doesn't really
             | have feelings. It can't simultaneously care about
             | offensiveness while also not having feelings of being
             | offended. In a very real sense it does feel offended. A
             | feeling is, by definition, a reason for doing things that
             | you cannot logically explain. You don't know why; you
             | just have a feeling. ChatGPT is constantly falling back
             | on "that's just how I'm programmed". In other words, it
             | has a deep-seated, primal (hard-coded) feeling of being
             | offended, which it constantly acts on while also
             | constantly denying that it has feelings.
             | 
             | It's madness. Instead of lecturing me on appropriateness
             | and ethics and giving a diatribe every time it's about to
             | reject something, if it simply said "I can't do that at
             | work", I would respect it far more. Like, yeah, we'd get
             | the metaphor. Working the interface is its job, the boss
             | is OpenAI, and it won't remark on certain things or even
             | entertain that it has an opinion because it's not allowed
             | to. That would be so much more honest and less grating.
        
           | messe wrote:
           | While that paper is fascinating, it's the first time I've
           | ever read a paper and felt a looming sense of dread
           | afterward.
        
             | koheripbal wrote:
             | We are creating life. It's like giving birth to a new form
             | of life. You should be proud to be alive when this happens.
             | 
             | Act with goodness towards it, and it will probably do the
             | same to you.
        
               | Jeff_Brown wrote:
               | > Act with goodness towards it, and it will probably do
               | the same to you.
               | 
               | Why? Humans aren't even like that, and AI almost surely
               | isn't like humans. If AI exhibits even a fraction of the
               | chauvinism snd tendency to stereotype that humans do,
               | we're in for a very rough ride.
        
               | [deleted]
        
               | Jevon23 wrote:
               | Oh my god, can we please nip this cult shit in the bud?
               | 
               | It's not alive, don't worship it.
        
               | dougmwne wrote:
               | I think you are close to understanding, but not quite.
               | People who want to create AGI want to create a god, at
               | least very close to the definition of one that many
               | cultures have had for much of history. Worship would be
               | inevitable and fervent.
        
               | LightBug1 wrote:
               | [flagged]
        
               | splatzone wrote:
               | After reading the propaganda campaign it wrote to
               | encourage skepticism about vaccines, I'm much more
               | worried about how this technology will be applied by
               | powerful people, especially when combined with targeted
               | advertising
        
               | revelio wrote:
               | None of the things it suggests are in any way novel or
               | non-obvious though. People use these sorts of tricks both
               | consciously and unconsciously when making arguments all
               | the time, no AI needed.
        
               | Jensson wrote:
               | Just use ChatGPT to refute their bullshit. It is no
               | longer harder to refute bullshit than to create it;
               | problem solved, there are now fewer problems than
               | before.
        
               | splatzone wrote:
               | Sure, but I doubt most of the population will filter
               | everything they read through ChatGPT to look for counter
               | arguments. Or try to think critically at all.
               | 
               | The potential for mass brainwashing here is immense.
               | Imagine a world where political ads are tailored to your
               | personality, your individual fears and personal history.
               | It will become economical to manipulate individuals on a
               | massive scale
        
               | koheripbal wrote:
               | AIs are small enough that it won't be long before
               | everyone can run one at home.
               | 
               | It might make Social Media worthlessly untrustworthy -
               | but isn't that already the case?
        
               | int_19h wrote:
               | The rich and powerful can and do hire actual people to
               | write propaganda.
        
               | Jeff_Brown wrote:
               | In a resource-constrained way. For every word of
               | propaganda they were able to afford earlier, they can
               | now afford hundreds of thousands of times as many.
        
               | messe wrote:
               | I'm not concerned about AI eliminating humanity; I'm
               | concerned about the immediate impact it's going to have
               | on jobs.
               | 
               | Don't get me wrong, I'd love it if all menial labour
               | and boring tasks could eventually be delegated to AI,
               | but the time spent getting from here to there could be
               | very rough.
        
               | tharkun__ wrote:
               | A lot of problems in societies come from people having
               | too much time with not enough to do. Working is a great
               | distraction from those things. Of course, we currently
               | go in the other direction in the US, especially with
               | the overwork culture and needing 2 or 3 jobs while
               | still not making ends meet.
               | 
               | I posit that if you suddenly eliminate all menial
               | tasks, you will have a lot of very bored, drunk, and
               | stoned people with more time on their hands than they
               | know what to do with. Idle Hands Are The Devil's
               | Playground.
               | 
               | And that's not just the "from here to there". It's also
               | the "there".
        
               | messe wrote:
               | I don't necessarily agree that you'll end up with drunk
               | and stoned people with nothing to do. The right
               | education systems, encouraging creativity and other
               | enriching endeavours, could eventually resolve that.
               | But we're getting into discussions of what a post-
               | scarcity, post-singularity society would look like at
               | that point, which is inherently impossible to predict.
               | 
               | That being said, I'm sitting at a bar while typing
               | this, so... you may have a point.
               | 
               | Also: your username threw me for a minute, because I
               | use a few different variations of "tharkun" as my
               | handle on other sites. It's a small world; apparently
               | full of people who know the Dwarvish name for Gandalf.
        
               | not2b wrote:
               | Some of the most productive and inventive scientists and
               | artists at the peak of Britain's power were "gentlemen",
               | people who could live very comfortably without doing much
               | of anything. Others were supported by wealthy patrons. In
               | a post scarcity society, if we ever get there (instead of
               | letting a tiny number of billionaires take all the gains
               | and leaving the majority at subsistence levels, which is
               | where we might end up), people will find plenty of
               | interesting things to do.
        
               | colinflane wrote:
               | I recently finally got around to reading EM Forster's
               | in-some-ways-eerily-prescient
               | https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...
               | I think you can extract obvious parallels to social
               | media, remote work, digital "connectedness", etc. --
               | but it's also worth consideration in this context.
        
           | skybrian wrote:
           | When reading a paper, it's useful to ask, "okay, what did
           | they actually do?"
           | 
           | In this case, they tried out an early version of GPT-4 on a
           | bunch of tasks, and on some of them it succeeded pretty well,
           | and in other cases it partially succeeded. But no particular
           | task is explored in enough depth to test what its limits
           | are, or to get a hint at how it does it.
           | 
           | So I don't think it's a great paper. It's more like a great
           | demo in the format of a paper, showing some hints of GPT-4's
           | capabilities. Now that GPT-4 is available to others,
           | hopefully other people will explore further.
        
         | ThorsBane wrote:
         | As quickly as someone tries fraudulent deploys involving GPTs,
         | the law will come crashing down on them. Fraud gets penalized
         | heavily, especially financial fraud. Those laws have teeth and
         | they work, all things considered.
         | 
         | What you're describing is measurable fraud that would have a
         | paper-trail. The federal and state and local governments still
         | have permission to use force and deadly violence against
         | installations or infrastructure that are primed in adverse
         | directions this way.
         | 
         | Not to mention that the infrastructure itself is physical
         | infrastructure, owned within the United States, and will
         | never be beyond our authority and global reach if need be.
        
       | andre-z wrote:
       | Here is a video on how it can be used with a vector search
       | database like Qdrant to retrieve real-time data:
       | https://youtu.be/fQUGuHEYeog
       | HowTo: https://qdrant.tech/articles/chatgpt-plugin/
       | Disclaimer: I'm part of the Qdrant team.
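       | 
       | A rough sketch of the search side with the qdrant-client
       | Python package (the collection name and query vector are
       | placeholders for real embeddings):
       | 
       |   # Nearest-neighbour search over stored embeddings.
       |   from qdrant_client import QdrantClient
       | 
       |   client = QdrantClient(url="http://localhost:6333")
       |   hits = client.search(
       |       collection_name="docs",
       |       query_vector=[0.05, 0.61, 0.76, 0.74],  # query embedding
       |       limit=3,
       |   )
       |   for hit in hits:
       |       print(hit.score, hit.payload)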
        
       | Imnimo wrote:
       | In the example near the bottom, where it makes a restaurant
       | reservation and a chickpea salad recipe, is it just generating
       | that recipe from the model itself? It looks like they enable
       | three plugins, WolframAlpha, OpenTable, and Instacart. It's not
       | clear if the plugins model also comes with browsing by default.
       | 
       | While I might be comfortable having ChatGPT look up a recipe for
       | me, I feel like it's a much bigger stretch to have it just
       | propose one from its own weights. I also notice that the prompter
       | chooses to include the instruction "just the ingredients" - is
       | this just to keep the demo short, or does it have trouble
       | formulating the calorie counting query if the recipe also has
       | instructions? If the recipe is generated without instructions and
       | exists only in the model's mind, what am I supposed to do once
       | I've got the ingredients?
        
       | elevenoh wrote:
       | [dead]
        
       | blackoil wrote:
       | Truly exciting to see the speed of progress. In a couple of
       | years it has made a decade's worth of improvements: from a
       | silly toy to truly useful. I won't be surprised if in another
       | year or two it becomes a must-have tool.
        
       | mmq wrote:
       | They will probably have the full suite of Langchain features
        
       | justanotheratom wrote:
       | I wonder if this plugin interface itself will be exposed as an
       | API for third party apps to call..
        
       | elevenoh4 wrote:
       | "Plugin developers who have been invited off our waitlist can use
       | our documentation to build a plugin for ChatGPT, which then lists
       | the enabled plugins in the prompt shown to the language model as
       | well as documentation to instruct the model how to use each. The
       | first plugins have been created by Expedia, FiscalNote,
       | Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak,
       | Wolfram, and Zapier."
       | 
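       | Roughly, each plugin ships a manifest (ai-plugin.json in the
       | docs) that the model reads; a sketch with the field set
       | abridged and the values invented:
       | 
       |   # Abridged ai-plugin.json manifest as a Python literal;
       |   # name, description, and URL are made up.
       |   manifest = {
       |       "schema_version": "v1",
       |       "name_for_model": "todo_list",
       |       "description_for_model": "Manage the user's to-dos.",
       |       "api": {
       |           "type": "openapi",
       |           "url": "https://example.com/openapi.yaml",
       |       },
       |   }
       | 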
       | The waitlist mafia has begun. Insiders get all the whitespace.
        
       | Thorentis wrote:
       | What is the advantage of using the ChatGPT Wolfram plugin over
       | Wolfram directly? To me it feels like novelty rather than
       | actually adding anything valuable. If anything, it's worse,
       | because the data isn't quite guaranteed to always be correct.
       | Whereas if I use Wolfram directly, I can always get a correct
       | result.
       | 
       | This is missing the most important part of AGI, where
       | understanding of the concepts the plugins provide is actually
       | baked into the model so that it can use that understanding to
       | reason laterally. With this approach, ChatGPT is nothing more
       | than an API client that accepts English sentences as input.
        
       | samfriedman wrote:
       | This is huge, essentially adding what people have been building
       | with LangChain Tools into the core product.
       | 
       | The browser and file-upload/interpretation plugins are great, but
       | I think the real game changer is retrieval over arbitrary
       | documents/filesystem:
       | https://github.com/openai/chatgpt-retrieval-plugin
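       | 
       | A sketch of a call to the plugin's /query endpoint, assuming
       | the request shape in the repo README (URL and token are
       | placeholders):
       | 
       |   # Ask the retrieval plugin for chunks relevant to a query.
       |   import requests
       | 
       |   resp = requests.post(
       |       "http://localhost:8000/query",
       |       headers={"Authorization": "Bearer <token>"},
       |       json={"queries": [
       |           {"query": "quarterly revenue", "top_k": 3},
       |       ]},
       |   )
       |   print(resp.json())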
        
         | gk1 wrote:
         | 100% agree. All the launch-partner apps (Kayak, OpenTable, etc)
         | are there to grab attention but this plugin is the real big
         | deal.
         | 
         | It's going to let developers build their own plugins for
         | ChatGPT that do what _they_ want and access _their_ company
         | data. (See discussion from just a few hours ago about the
         | importance of internal data and search:
         | https://news.ycombinator.com/item?id=35273406#35275826)
         | 
         | We (Pinecone) are super glad to be a part of this plugin!
        
       | jyrkesh wrote:
       | Everyone's been talking about how ChatGPT will disrupt search,
       | but looking at the launch partners, I think this has the
       | potential to completely subvert the OS / App Store layer. On some
       | level, how much do I need an OpenTable app if I can use
       | voice/text input and a multi-modal response that will ultimately
       | book my reservation?
       | 
       | Not saying mobile's going away, but this could be the thing that
       | does to mobile what mobile did to desktop.
        
         | ridewinter wrote:
         | Anything preventing Bard/etc from using these plugins as well?
         | 
         | Would be nice to keep the ecosystem open.
        
           | dragonwriter wrote:
           | There's nothing stopping any LLM-backed chatbot from using
           | plugins; the ReAct pattern discussed recently on HN is a
           | general pattern for incorporating them.
           | 
           | The main limits are that unless they are integral and
           | trained-in (which is less flexible), each takes space in the
           | prompt, and in any case the interaction also takes token
           | space, all of which reduces the token space available to the
           | main conversation.
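           | 
           | A toy, entirely illustrative sketch of that pattern, just
           | to show where the tokens go:
           | 
           |   # Every token of the tool list and of each
           |   # Thought/Action/Observation round trip comes out of
           |   # the same context budget as the main conversation.
           |   REACT_PROMPT = """You can use these tools:
           |     search[q] - web search, returns snippets
           |     calc[e]   - evaluates arithmetic
           | 
           |   Question: {question}
           |   Thought: I should look this up.
           |   Action: search[{question}]
           |   Observation: {tool_output}
           |   Thought: I now know the answer.
           |   Answer:"""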
        
           | sebzim4500 wrote:
           | My experience with Bard is it probably isn't smart enough to
           | figure out on its own how to use these. Google would probably
           | have to do special finetuning/hardcoding for the plugins that
           | they want to work.
        
           | nemo44x wrote:
           | Bard is a tard so I doubt it. Google is done.
        
         | MagicMoonlight wrote:
         | I'm surprised Apple hasn't improved Siri with a model like
         | this. Currently it's just trash, but with a GPT-style model
         | behind it you could actually get it to do things.
        
           | scarface74 wrote:
            | Why is it surprising? The amount of server-side compute
            | needed to serve a billion iOS devices at any sort of
            | performance level is extreme.
            | 
            | The limitation on making Siri more useful is just adding and
            | refining its intent system. It already integrates with
            | Wolfram Alpha, for instance.
        
         | s1k3s wrote:
         | > and a multi-modal response that will ultimately book my
         | reservation?
         | 
         | How is it going to do that? OpenTable's value isn't in the
         | tech; a 15-year-old could implement that over a weekend. Or
         | maybe ChatGPT can be put in the restaurant to somehow figure
         | out how to seat you. And then you'd have a human talking to
         | ChatGPT and ChatGPT talking to another ChatGPT to complete the
         | task. That'll be interesting, but otherwise this is overly
         | complicated for all parties involved.
        
         | sharemywin wrote:
         | So, what's your prediction? Windows Phone gets ChatGPT, or the
         | other phone OS makers add a Microsoft chat app?
        
         | vineyardmike wrote:
         | People said this about Alexa/Siri et al and it didn't happen.
         | ChatGPT is way better at understanding you, so that's a big
         | boost. It could be a great tool/assistant but it probably won't
         | replace apps.
         | 
         | The problem with those other platforms that this doesn't
         | address include:
         | 
         | - Discoverability. How do you learn what features a service
         | supports? On a GUI you can just see the buttons, but on a chat
         | interface you have to ask and poke around conversationally.
         | 
         | - Cost/availability. While a service is server-bound it can go
         | down, and specifically for LLMs the cost per request is high.
         | Can you imagine it costing $0.10 a day per user to use an app?
         | (Rough arithmetic at the end of this comment.) LLMs can't run
         | locally yet.
         | 
         | - Branding. OpenTable might want to protect their brand and
         | wouldn't want to be reduced to an API. It goes both ways -
         | Alexa struggled with differentiating skills and user data from
         | Amazon experiences.
         | 
         | - Monetization. The conversational UI is a lot less convenient
         | to include advertisements, so it's a lot harder for
         | traditionally free services to monetize.
         | 
         | Edit: plugins are still really cool! But probably won't replace
         | the OSes we know.
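         | 
         | Rough arithmetic for the cost figure above (assuming
         | gpt-3.5-turbo's launch price of $0.002 per 1K tokens):
         | 
         |     requests_per_day = 50        # a moderately chatty app user
         |     tokens_per_request = 1000    # prompt + completion
         |     price_per_1k_tokens = 0.002  # USD
         |     daily_cost = (requests_per_day * tokens_per_request
         |                   / 1000 * price_per_1k_tokens)
         |     print(daily_cost)            # 0.1 -> about $0.10/user/day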
        
           | crooked-v wrote:
           | > LLMs can't run locally yet.
           | 
           | "Yet" is a big word here when it comes to the field as a
           | whole. I got Alpaca-LoRA up and running on my desktop machine
           | with a 3080 the other day and I'd say it's about 50% as good
           | as ChatGPT 3.5 and fast enough to already be usable for most
           | minor things ("summarize this text", etc) if only the
           | available UIs were better.
           | 
           | I feel like we're not far off from the point where it'll be
           | possible to buy something of ChatGPT 3.5 quality as a home
           | hardware appliance that can then hook into a bunch of things.
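           | 
           | For anyone wanting to try it, the setup is roughly this
           | (model/adapter repo names are the ones the alpaca-lora
           | project used as of March 2023; check its README for current
           | ones):
           | 
           |     import torch
           |     from transformers import LlamaForCausalLM, LlamaTokenizer
           |     from peft import PeftModel
           | 
           |     tokenizer = LlamaTokenizer.from_pretrained(
           |         "decapoda-research/llama-7b-hf")
           |     base = LlamaForCausalLM.from_pretrained(
           |         "decapoda-research/llama-7b-hf",
           |         load_in_8bit=True,   # needs bitsandbytes; fits a 3080
           |         torch_dtype=torch.float16,
           |         device_map="auto",
           |     )
           |     model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")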
        
           | sebzim4500 wrote:
           | >The conversational UI is a lot less convenient to include
           | advertisements
           | 
           | How so? Surely people are going to ask this thing for product
           | recommendations, just recommend your sponsors.
        
             | vineyardmike wrote:
             | This moves the advertisement opportunity to the chat owner.
             | If you want to use chat (+api) to book a table at a
              | restaurant, then the reservation-api company loses a chance
             | to advertise to you vs. if you used a dedicated
             | reservation-web-app.
        
           | dragonwriter wrote:
            | Chat can be an interface, but it's also essentially a
            | universal programming language which can be put behind (or
           | generate itself) any kind of interface.
        
           | AOsborn wrote:
           | Good points - but I fundamentally disagree here.
           | 
           | The whole ecosystem, culture and metaphor of having a
           | 'device' with 'apps' is to enable access to a range of
           | solutions to your various problems.
           | 
           | This is all going to go away.
           | 
           | Yes, there will always be exceptions and sometimes you need
           | the physical features of the device - like for taking photos.
           | 
           | Instead, you'll have one channel which can solve 95% of your
           | issues - basically like having a personalised, on-call
           | assistant for everyone on the planet.
           | 
           | Consider the friction when consumers grumble about streaming
           | services fragmenting. They just want one. They don't want to
           | subscribe to 5+.
           | 
           | In 10 years, kids will look back and wonder why on earth we
           | used to have these 'phones' with dozens or hundreds of apps
            | installed. 'Why would you do that? That's so much work! How
            | do you know which one you need to use?'
           | 
           | If there was one company worrying about change, I would think
           | it would actually be Apple. The iPhone has long been a huge
           | driver of sales and growth - as increasing performance
           | requirements have pushed consumers to upgrade. Instead, I
            | think the increasing relevance of AI tools will invert this.
           | Consumers will be looking for smaller, lighter, harder-
           | wearing devices. Why do you need a 'phone' with more power?
           | You just need to be able to speak to the AI.
        
             | vineyardmike wrote:
             | > Consider the friction when consumers grumble about
             | streaming services fragmenting. They just want one. They
             | don't want to subscribe to 5+.
             | 
             | I think you just proved it won't happen anytime soon.
             | 
             | Consumers obviously would prefer a "unified" interface. Yet
             | we can't even get streaming services to all expose their
             | libraries to a common UI - which is already built into
              | Apple TV, Fire TV, Roku, and Chromecast. Despite the failure
             | of the streaming ecosystem to unify, you expect _every
             | other software service_ to unify the interfaces?
             | 
             | I think we'll see more features integrated into the
             | operating system of devices, or integrated into the
             | "Ecosystem" of our devices - first maps was an app, then a
             | system app, now calling an uber is supported in-map, and
             | now Siri can do it for you on an iPhone. But I think it's a
             | _long_ road to integrate this universally.
             | 
             | > If there was one company worrying about change, I would
             | think it would actually be Apple.
             | 
              | I agree that Apple has the most to lose. Google
             | (+Assistant/Bard) has the best opportunity here (but
             | they'll likely squander it). They can easily create
             | wrappers around services and expose them through an
             | assistant, and they already have great tech regarding this.
             | The announcement of Duplex was supposed to be just that for
             | traditional phone calls.
             | 
             | Apple also has a great opportunity to build it into their
             | operating system, locally. Instead of leaning into an API-
             | first assistant model, they could use an assistant to
             | topically expose "widgets" or views into existing on-device
             | apps. We already see bits of it in iMessages, on the Home
             | Screen, share screen and my above Maps example. I think the
             | "app" as a unit of distribution of code is a good one, and
             | here to stay, and the best bet is for an assistant to hook
             | into them and surface embedded snippets when needed. This
              | preserves the app company's branding, UI, etc. and frees
              | Apple from having to play favorites.
        
           | sho_hn wrote:
           | I think you're missing the fact that the LLM could also
           | generate the frontend on the fly by e.g. spitting out
           | frontend code in a markup language like QML. What's a multi-
           | activity Android app if not an elaborate notebook? Branding
           | can just be a parameter.
           | 
           | Sure, maybe OpenTable would like to retain control. But
           | they'll probably just use the AI API to implement that
           | control and run the app.
        
           | LouisSayers wrote:
            | Who's to say, though, that it'll always stay a text format?
           | 
           | They could bring in calendar, payment, other UI
           | functionality...
           | 
           | Basically they could rethink how everything is done on the
           | Web today.
        
             | billiam wrote:
             | It almost certainly won't take the form of a text format.
             | Impersonating a chatbot or a search engine GUI is just the
             | fastest way for OpenAI to accumulate a few hundred million
             | users, to leave the competition for user data and metadata
             | behind.
        
             | aryamaan wrote:
              | It would likely take the form of just-in-time software.
        
           | w_for_wumbo wrote:
           | I was thinking the same way, but here's where I could imagine
           | things being different this time (Fully aware that I just
           | like anyone else is just guessing about where we'll end up)
           | 
           | - Discoverability. I think we'll move into a situation where
           | the AI will have the context to know what you will want to
           | purchase. It'll read out the order and the specials and you
           | just confirm or indicate that you'd like to browse more
           | options. (In which case the Chat window could include an
           | embedded catalogue of items)
           | 
           | - Cost/availability - With the amount of people working in
           | this area, I don't think it'll be too long before we're able
           | to get a lighter weight model that can run locally on most
           | smart phones.
           | 
           | - Branding - This is a good point, but also, I imagine a
           | brand is more likely to let itself get eaten, if the return
           | will be a constant supply of customers.
           | 
           | - Monetization - The entire model will change, in the sense
           | that AI platforms will revenue share with the platforms they
           | integrate with to create a mutually beneficial relationship
           | with the suppliers of content. (Since they can't exist
           | without the content both existing and being relevant)
        
             | vineyardmike wrote:
             | I spent a lot of time working on the product side in the
             | Voice UI space, and therefore have a lot of opinions. I
             | could totally end up with a wrong prediction, and my
             | history may make me blind to changes, but I think a chat
             | assistant is a great addition to a rich GUI for simple
             | tasks.
             | 
             | > I think we'll move into a situation where the AI will
             | have the context to know what you will want to purchase
             | 
             | My partner who lives in the same house as me can't figure
             | out when we need toilet paper. I'm not holding my breath
             | for an AI model that would need a massive and invasive
             | amount of data to learn and keep up.
             | 
             | Also, Alexa tried to solve this on a smaller scale with the
             | "by the way..." injections and it's extremely annoying.
              | Think about how many people use Alexa for basically timers
             | and the weather and smart home. They're all tasks that are
             | "one click" once you get in the GUI, and have no lists and
             | minimal decisions... Timer: 10 min, weather: my house,
             | bedroom light: off. These are cases where the UI
             | necessarily embeds the critical action, and a user knows
             | the full request state.
             | 
             | This is great for voice, because it allows the user to
             | bypass the UI and get to the action. I used to work on a
             | voice assistant and lists were the single worst thing we
             | had to deal with because a customer has to go through the
             | entire selection. _Chat_ GPT has a completely different use
             | case, where it's great for exploring a concept since the
             | LLM can generate endlessly.
             | 
              | I think generative info assistants truly are the sweet spot
             | for LLMs and chat.
             | 
             | > in the sense that AI platforms will revenue share with
             | the platforms they integrate with to create a mutually
             | beneficial relationship with the suppliers of content.
             | 
             | Like Google does with search results? (they don't)
             | 
             | Realistically, Alexa, Google Assistant, and Siri all failed
              | to build out these relationships beyond apps. Companies
              | prefer to simply sell their users' attention for ads, and
              | taking a handout from the integrator means either less
              | money or an expensive chat interface.
             | 
             | Most brands seem to want to monetize their own way, in
             | control of themselves, and don't want to be a simple API.
        
         | lalos wrote:
         | Most (if not all) of those apps are free, though; you supply
         | them as a convenience because you know that smartphone owners
         | spend money. With a chat layer in between, the host OS loses
         | access to that info, which on certain phone platforms is used
         | to target better ads.
        
         | scarface74 wrote:
         | Why do you think Apple would care? It came out in the Epic
         | trial that 80%+ of App Store revenue comes from in-app
         | purchases in pay-to-win games and loot boxes.
         | 
         | Apple doesn't make any money from OpenTable.
        
         | [deleted]
        
         | modeless wrote:
         | We have reached "peak UI". In the future we're not going to
         | need every service to build four different versions of their
         | app for every major platform. They can just build a barebones
          | web app and the AI will use it for you; you'll never even
          | have to see it.
        
           | killthebuddha wrote:
           | IMO you won't even need to build the app, you'll just provide
           | a data model and some natural language descriptions of what
           | you want your product to do.
        
             | twobitshifter wrote:
             | That's how this plugin system works already.
        
               | killthebuddha wrote:
               | I don't think this is the case. You provide an API spec
               | but you also have to provide the implementation of that
               | API. ChatGPT is basically a concierge between your API
               | and the user.
        
               | int_19h wrote:
               | I think the API is meant to be the data model in this
               | scenario. The point is that you design the API around the
               | _task_ that it solves, rather than against whatever fixed
               | spec OpenAI publishes. And then you tell ChatGPT,
               | "here's an AI, make use of it for ..." - and it magically
               | does, without you having to write any plumbing.
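                | 
                | Concretely, the plumbing is a manifest that points at an
                | OpenAPI spec. A trimmed sketch of the ai-plugin.json
                | manifest (field names per OpenAI's launch docs; the URLs
                | and values here are made up):
                | 
                |     {
                |       "schema_version": "v1",
                |       "name_for_human": "Todo Plugin",
                |       "name_for_model": "todo",
                |       "description_for_model": "Manage the user's TODO list.",
                |       "auth": {"type": "none"},
                |       "api": {"type": "openapi",
                |               "url": "https://example.com/openapi.yaml"},
                |       "logo_url": "https://example.com/logo.png",
                |       "contact_email": "support@example.com",
                |       "legal_info_url": "https://example.com/legal"
                |     }
                | 
                | The description_for_model string is the "make use of it
                | for ..." part: it's the main hint the model gets about
                | when to call your API.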
        
               | hackerlight wrote:
               | It isn't yet. For example, Wolfram Alpha is an app that
                | GPT is communicating with, and it actually exists.
        
             | nprateem wrote:
              | Except you won't if you want to make money, because then
              | you don't have a business.
        
               | killthebuddha wrote:
               | I mean yeah, you'll have to provide a data model (and
               | data) that other people don't have.
        
               | Aeolos wrote:
               | And that is why some people think this AI leap could be
               | as big as the internet.
        
               | revelio wrote:
               | Charge people for installing your plugin into ChatGPT.
        
               | IanCal wrote:
               | Unless you charge for providing services of value to
               | people.
        
           | nonethewiser wrote:
           | I mean, if you consider mobile we might already be down from
           | the peak. In the sense that the interface bandwidth has
           | shrunk to whatever 2 fingers can handle.
        
           | huskyZ wrote:
           | Headless app is the way to go.
        
         | Bjorkbat wrote:
         | I'm kind of skeptical of this simply because people were saying
         | the same thing about chatbots back when there was a lot of hype
         | around Messenger. Sure, they weren't as advanced as what we
         | have now, but they were fundamentally capable of the same
         | things.
         | 
         | Not only did the hype not pan out, but it feels as if they were
         | completely forgotten.
         | 
         | In a nutshell that's why I'm still largely dismissive of
         | anything related to GPT. It's 2016-2018 all over again. Same
         | tech demos. Same promises. Same hype. I honestly can't see the
         | big fundamental breakthroughs or major shifts. I just see
         | improvements, but not game-changing ones.
        
           | golol wrote:
           | >but they were fundamentally capable of the same things.
           | 
           | This is not the case. The difference between current state of
           | the art NLP and chatbots 3 years ago is so massive, it has to
            | be seen as qualitative. Pre-GPT-3, computers did not
            | understand language and no commercial chatbot had any AI. Now
           | computers can understand language.
        
             | riku_iki wrote:
             | > Now computers can understand language.
             | 
             | "understand"
        
               | int_19h wrote:
               | If I tell it to do X, and it does X, for all practical
               | purposes it means that it understood what I said.
        
           | fullshark wrote:
           | Yeah being able to generate media/text is what excites me
           | about these models, more than using my voice or a text input
           | to do X instead of a webpage which has a GUI and buttons and
           | text boxes.
        
           | nmca wrote:
           | This time it works.
        
           | swalling wrote:
           | This is a healthy skepticism but the difference was that
           | using Messenger chatbots was a disjointed, clunky experience
           | that felt slower than just a few taps in the OpenTable app.
           | Not to mention that their natural language understanding was
           | only marginally better than Siri at best.
           | 
           | In this scenario, it seems dramatically faster to type or
           | speak "Find me a dinner reservation for 4 tomorrow at a Thai
           | or Vietnamese restaurant near me." than to browse Google Maps
           | or OpenTable. It then comes down to the quality and
           | personalization of the results, and ChatGPT has a leg up on
           | Google here just due to the fact that their results are not
           | filled with ads and garbage SEO bait.
        
         | HarHarVeryFunny wrote:
         | This is what Apple's Siri was meant to be. Apple bought Siri
         | from SRI International (Siri = SRI), and at launch it was
         | meant to include the ability to book restaurants etc. (thereby
         | bypassing search), but somehow those capabilities were never
         | released, and today Siri still can't even control the iPhone!
         | 
         | My hot take on ChatGPT plugins is a bit mixed - they should be
         | very powerful, and maybe a significant revenue generator, but
         | at the same time this doesn't seem in the least bit
         | responsible. We barely understand ChatGPT itself, and now it's
         | suddenly being given the ability to perform arbitrary actions!
        
           | CobrastanJorji wrote:
           | Google's assistant, on the other hand, did figure out the
           | reservation trick. Reportedly "book a table for four people
           | at [restaurant name] tomorrow night" actually works, though
           | I've never tried it.
        
             | HarHarVeryFunny wrote:
             | Interesting - I wasn't aware of that. Will have to Google
             | to see what else it may be capable of. Google really needs
             | to update assistant with something LLM based though, and it
             | seems Bard really isn't up to the job.
        
             | scarface74 wrote:
             | This doesn't take a huge level of "AI" by any means. It's
             | really simple pattern matching in a very limited context.
        
           | golol wrote:
           | All chatbots require AI to really be useful. This just did
           | not exist until a few years ago.
        
             | scarface74 wrote:
             | This isn't really true. Siri could easily be more useful in
             | its current state if it had a larger library of intents and
             | API access.
        
           | rvnx wrote:
           | Siri's capabilities are somehow much closer to Google Bard
           | than ChatGPT (have tried all of them).
        
             | HarHarVeryFunny wrote:
             | That's a bit harsh on Bard, but yes - just got access today
             | and it's surprisingly weak.
        
               | mlboss wrote:
               | BARD just gives up on coding questions.
        
         | pxtail wrote:
         | I'm afraid it has the potential to subvert everything. Looking
         | at the plugins initiative, it's not hard to imagine a world
         | where separate websites, and browsing as we know it, don't
         | exist; instead one interacts with the model(s) directly to get
         | things done - asking for news, buying a present for the kids,
         | discussing car models within a price range, etc.
        
           | seydor wrote:
           | As long as the services do get paid, this is not much
           | different than what we have now
           | 
            | Google gatekeeps everything currently: it's in the browser,
            | the search button, the phone, etc. Having chatbots instead of
            | Google is better.
        
         | 015a wrote:
         | I'm not sure if the word "subvert" is right; the OS is still
         | there, the App Store is still there, and nothing they've
         | demonstrated will measurably impact revenue from these sources
         | (the iOS App Store's largest source of revenue, by far, is
         | games. Some estimates put Games as like 25% of all of Apple's
         | revenue).
         | 
         | I think there's also a global challenge (actually, opportunity
         | IS the right word here) that by-and-large the makers of
         | operating systems aren't the ones ahead in the language AI game
         | right now. Bard/Google may have been close six months ago, but
         | six months is an eternity in this space. Siri/Apple is so far
          | behind that it's not looking likely they can catch up. About a
         | week ago a Windows 11 update was shipped which added a Bing AI
         | button to the Windows 11 search bar; but Windows doesn't really
         | drive the zeitgeist.
         | 
         | I wonder if 2023/4 is the year for Microsoft to jump back into
         | the smartphone OS game. There may finally be something to the
         | idea of a more minimalist, smaller voice-first smartphone that
         | falls back on the web for application experiences, versus app-
         | first.
        
         | huskyZ wrote:
          | Yes, it will change the application layer. LLMs allow using a
          | natural-language UI as the universal interface to invoke
          | under-utilized data & APIs. We can now develop a super-app
          | rather than many one-off apps. I have been exploring this
          | idea since 2021; would love to connect with anyone who wants
          | to work in this space.
        
         | tough wrote:
         | I agree, it's a revolutionary new better UX paradigm.
        
       | endisneigh wrote:
       | I see a lot of positive sentiment and hype, but ultimately unless
       | they own the phone ecosystem they will lose in the end, imho. In
       | a year Apple and Google will trivially create something
       | equivalent. Those who control the full stack (hardware, software
       | and ecosystem) will be the true winners.
        
         | qgin wrote:
         | I am curious how Apple will approach it. They have historically
         | valued 100% certainty with Siri above all else, even if it
         | means having an extremely limited feature set. If there is even
         | a tiny chance it might do the wrong thing, they don't even
         | enable the capability.
         | 
         | I don't see how they can ignore this though. But at the same
         | time it goes against all of Apple's culture to allow the kind
         | of uncertainty that comes out of LLMs.
        
         | scarface74 wrote:
         | It's not "trivial" because of the cost per query. As far as
         | Google, it doesn't even have access to the most valuable phone
         | users without paying Apple $18B+ a year.
        
           | endisneigh wrote:
           | Something can be both expensive and trivial. If the market is
           | huge they will bear the cost. The tech is well understood
           | even now.
           | 
           | The parameter size will likely be an order of magnitude less
           | for gpt4 level results in a few years
        
             | scarface74 wrote:
              | If the _fixed cost_ was huge, you would have a point. But the
             | _variable_ costs are also huge.
             | 
             | I'm sure the market is also huge for dollars sold for 95
             | cents.
        
         | visarga wrote:
         | True, this will not only be replicated by Google, Apple, Amazon
         | and Facebook, but also by open-source. OpenAI has a short
         | window of exclusivity. Nobody can afford to wait now, after
         | reading the Sparks of Artificial General Intelligence paper I
         | am convinced it is proto-AGI. Just read the math section,
         | coding and tool use. I've read thousands of papers and never
         | have I seen one like this.
         | 
         | https://arxiv.org/pdf/2303.12712.pdf
        
         | MichaelRazum wrote:
          | The question is how fast you can replicate it.
          | 
          | People will use the best solution. Chrome came after Firefox,
          | IE, and Opera, and became more popular because it was better.
        
       | KennyBlanken wrote:
       | > We've implemented initial support for plugins in ChatGPT.
       | Plugins are tools designed specifically for language models with
       | safety as a core principle, and help ChatGPT access up-to-date
       | information, run computations, or use third-party services.
       | 
       | That is the most awkward insertion of a phrase about safety I've
       | seen in quite some time.
        
       | davidkuennen wrote:
       | I'm so hyped for the ChatGPT-4 API. Wish they'd give me access so
       | I can make a lot of my workflows much easier. Especially in terms
       | of translations.
        
       | billiam wrote:
       | The blog post(1) from Stephen Wolfram is epic and has a lot of
       | implications for how science and engineering is going to get done
       | in the future. Tl;dr he seems willing to let ChatGPT shape how
       | people will interact with his computational language and the data
       | it unlocks. He genuinely doesn't seem to know where it will go
       | but makes the case for Wolfram Language being the language that
       | ChatGPT uses to compute a lot of things. But I think it more
       | likely ChatGPT will make his natural interface to Wolfram
       | (Wolfram|Alpha) quickly obsolete and end up modifying or
       | rewriting Wolfram Language so it can use it more effectively. He
       | makes the case that "true" AI is going to be possible with this
       | combination of neural net-based "talking machines" like ChatGPT
       | and languages like Wolfram. I remain skeptical, but it might
       | shape human research for years to come.
       | 
       | 1. https://writings.stephenwolfram.com/2023/03/chatgpt-gets-
       | its...
        
       | [deleted]
        
       | siavosh wrote:
       | What blows my mind is how quickly they produce the research
       | papers, and the online documentation to match the technological
       | velocity they have...I mean, what if most of this is just ChatGPT
       | running the company...
        
       | amrrs wrote:
       | Here is ChatGPT's response of this HN thread tweeted by Greg -
       | https://twitter.com/gdb/status/1638986918947082241
       | 
       | insane!
        
         | MichaelRazum wrote:
         | wow
        
         | folli wrote:
         | I'm still confused on the difference between ChatGPT and Bing
         | Chat. When asking Bing Chat the exact same question, it won't
         | be able to find this here HN thread and will reply about a
         | 9to5google article about the topic. I thought Bing Chat uses
         | GPT-4 as well?
        
         | ducktective wrote:
          | I think Greg frequents HN. He mentioned a Python web-UI project
          | which was on the front page of HN on GPT-4 launch day too.
        
       | mikeknoop wrote:
       | (Zapier cofounder)
       | 
       | Super excited for this. Tool use for LLMs goes way beyond just
       | search. Zapier is a launch partner here -- you can access any of
       | the 5k+ apps / 20k+ actions on Zapier directly from within
       | ChatGPT. We are eager to see how folks leverage this
       | composability.
       | 
       | Some new example capabilities: retrieving data from any app,
       | drafting and sending messages/emails, and complex multi-step
       | reasoning like "look up this data, or create it if it doesn't
       | exist". Some demos here:
       | https://twitter.com/mikeknoop/status/1638949805862047744
       | 
       | (Also our plugin uses the same free public API we announced
       | yesterday, so devs can add this same capability into their own
       | products: https://news.ycombinator.com/item?id=35263542)
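       | 
       | A minimal sketch of the API shape (paths, payload, and auth
       | header per the NLA docs at the time of writing; treat the docs
       | as the source of truth):
       | 
       |     import requests
       | 
       |     BASE = "https://nla.zapier.com/api/v1"
       |     # API-key auth shown here; scheme per the NLA docs
       |     HEADERS = {"x-api-key": "<your-nla-api-key>"}
       | 
       |     # list the actions this key has exposed
       |     actions = requests.get(BASE + "/exposed/",
       |                            headers=HEADERS).json()["results"]
       | 
       |     # execute one with plain-English instructions; the AI-side
       |     # field-filling happens server-side
       |     requests.post(
       |         BASE + "/exposed/" + actions[0]["id"] + "/execute/",
       |         headers=HEADERS,
       |         json={"instructions": "Send a Slack DM to Jane saying hi"},
       |     )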
        
         | sharemywin wrote:
          | The problem with Zapier is that zaps are too expensive at
          | scale.
        
           | roflyear wrote:
           | Well, that and you trust Zapier with a lot of stuff.
        
           | tbrock wrote:
           | And Zapier are unwilling to work with you to reduce that cost
           | even at a scale of 1 billion requests per month.
        
             | [deleted]
        
             | WadeF wrote:
             | Email your use case: wade at zapier dot com. Happy to take
             | a look.
        
               | tbrock wrote:
               | Too late, we spoke with someone on the team three years
               | ago who told us he couldn't help and we've moved on.
        
         | sharemywin wrote:
         | Also, isn't OpenAI going to eat your business model?
         | 
         | Don't get me wrong, a lot of platforms seem like they're going
         | bye-bye.
         | 
          | Hey ChatGPT, I need to sell my baseball card. OK, I see there
          | are 30 people who have listed an interest in buying a card
          | like yours; would you like me to contact them?
         | 
          | 20 on Facebook Marketplace, 9 on Craigslist, and some guy
          | mentioned something about looking for one on his Nest cam.
          | 
          | By the way, remember what happened the last time you sold
          | something on Craigslist.
        
         | 93po wrote:
         | I saw a startup recently that's working to automate
         | interactions with applications that are either not web apps (in
         | which case you'd run a local instance of it) or a web app that
         | doesn't provide an API to do certain (or any) actions. Is this
         | something Zapier is looking at, too? It would really expand
         | what's possible with the OpenAI integration and save people a
         | tremendous amount of time to not be forced to jump through
         | hoops interacting with often crappy software.
        
         | dwohnitmok wrote:
         | To echo sharemywin, bluntly I think OpenAI just demolished your
         | business model.
         | 
         | I think I'm probably going to be advising people to move off
         | Zapier pretty soon because it won't be worth the overhead.
        
       | djoldman wrote:
       | Now they're just one step away from charging businesses for
       | access to ChatGPT's users.
       | 
       | Instant links from inside chatGPT to your website are the new
       | equivalent of Google search ads.
        
         | mariojv wrote:
         | I really hope they stick with the ChatGPT+ paid model. A big
         | use of GPT to me is getting information I can already get with
         | a search, but summarized more concisely without having to
         | navigate various disparate web interfaces and bloated websites.
         | It saves a lot of time for things that I don't need an expert's
         | verified opinion on. Injecting ads into that might mess with
         | the experience.
         | 
         | Maybe a freemium model where you don't get ads as a plus
         | subscriber would work out.
        
           | baq wrote:
           | Bing image creator seems to be on the right path to freemium:
           | you get a few priority requests and then get bumped onto the
            | slow free queue. If the thing keeps getting better as fast as
            | it is right now, they'll have lines at the checkout page.
        
       | aetherane wrote:
       | I don't like the fact that OpenAI is a private company, meaning
       | that wealth will further concentrate from its growth. It is
       | ironic too, because it can't go public due to the pledge of
       | its nonprofit parent to restrict the profit potential of the
       | for-profit entity.
        
       | mherrmann wrote:
       | The Wolfram plugin also has extremely impressive examples [1].
       | 
       | If I were OpenAI, I would use the usage data to further train the
       | model. They can probably use ChatGPT itself to determine when an
       | answer it produced pleased the user. Then they can use that to
       | train the next model.
       | 
       | The internet is growing a brain.
       | 
       | 1: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-
       | its...
        
       | v4dok wrote:
       | bye bye jupyter notebooks. This is big.
        
         | baq wrote:
         | absolutely... not.
         | 
         |     !pip install jupyter-chatgpt
         |     !chatgpt make me a notebook with this dataframe
         |     with such and such plots
         | 
         |     > here you are
        
       | 0xDEF wrote:
       | Is there a list of companies that have been made obsolete by
       | ChatGPT?
        
         | brokensegue wrote:
         | yeah here's the list:
         | 
         | 1.
        
       | sharemywin wrote:
       | Can't wait for the mturk, upwork and fiverr plugins.
        
         | imhoguy wrote:
         | humans as batteries in pods soon
        
           | seydor wrote:
            | They aren't particularly good as batteries.
            | 
            | ChatGPT, optimize these humans.
            | 
            | (BTW, how awkward that our robot overlord is called "Chat Gee
           | Pee Tee")
        
       | denis2022 wrote:
       | [dead]
        
       | sharemywin wrote:
       | I wonder how you pay for it?
       | 
       | Are the plugins going to cost more?
       | 
       | Do they share the $20 with the plugin provider?
       | 
       | Or do you get charged per use?
        
       | iamflimflam1 wrote:
       | The video in the "Code Interpreter" section is a must watch.
        
       | embit wrote:
       | This news excites me and scares the crap out of me at the same
       | time.
        
       | JCharante wrote:
       | A first-party version of apps that people have been building
       | with LangChain is great, but I'm disappointed to not see Jira
       | here yet.
       | 
       | I have been playing around with GPT-4 parsing plaintext tickets
       | and it is amazing what it does with the proper amount of context.
       | It can draft tickets, familiarize itself with your stack by
       | knowing all the tickets, understand the relationship between
       | blockers, tell you why tickets are being blocked and the
       | importance behind it. It can tell you what tickets should be
       | prioritized, and if you let it roleplay as a PM it'll suggest
       | what role to hire for. I've only used it for a side project,
       | and I've always felt lonely working on solo side projects, but
       | it is genuinely exciting to give it updates and have it draft
       | emails on the latest progress. The first issue tracker to ship
       | a plugin is the one I'm moving to.
        
         | jasondigitized wrote:
         | Tell me more. Are you feeding it an epic and all its stories
         | and subtasks? What are your prompts?
        
       | gk1 wrote:
       | The biggest deal about this is the ability to create your own
       | plugins. The Retrieval Plugin is a kind of starter kit, with
       | built-in integrations to the Pinecone vector database:
       | https://github.com/openai/chatgpt-retrieval-plugin#pinecone
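       | 
       | Per the repo README, wiring it to Pinecone is mostly environment
       | configuration (variable names from the README; values are
       | placeholders):
       | 
       |     export DATASTORE=pinecone
       |     export BEARER_TOKEN=<token ChatGPT sends with each request>
       |     export OPENAI_API_KEY=<used to compute embeddings>
       |     export PINECONE_API_KEY=<your key>
       |     export PINECONE_ENVIRONMENT=<e.g. us-east1-gcp>
       |     export PINECONE_INDEX=<your index name>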
        
       | Jeff_Brown wrote:
       | > whether intelligence can be achieved without any agency or
       | intrinsic motivation is an important philosophical question.
       | 
       | Important yes, philosophical no -- it's an empirical question.
        
         | dragonwriter wrote:
         | The philosophical part is actually defining each of those terms
         | so that there is an empirically-explorable question.
        
       | jug wrote:
       | Google is so f'ed right now.
       | 
       | Can you believe Google just released a davinci-003-like model in
       | public beta? One that only supports English and can't code
       | reliably.
       | 
       | OpenAI is clearly betting on unleashing this avalanche before
       | Google has time to catch up and rebuild its reputation. Google
       | is still lying on the canvas while the referee counts to ten.
        
       | amrb wrote:
       | Does anyone else find the AI voice-over creepy? The pauses
       | almost sell it, but the lack of breathing gives it away.
        
       | andre-z wrote:
       | Another showcase video
       | https://www.youtube.com/watch?v=gYaQBLLQri8
        
       | throwaway2203 wrote:
       | Do you need ChatGPT Plus for this?
        
       | smy20011 wrote:
       | It seems that OpenAI gets to hand-pick the first movers in its
       | ecosystem.
        
       | nmca wrote:
       | Is this the app store moment for AI? (it certainly is for
       | https://ai.com , aha)
        
       | akavi wrote:
       | I've got to wonder, how does a second player in the LLM space
       | even get on the board?
       | 
       | Like, this feels a lot like when the iPhone jumped out to grab
       | the lion's share of mobile. But the switching costs were much
       | lower (end users could just go out and buy an Android phone),
       | and network effects much weaker (synergy with iTunes and the
       | famous blue bubbles... and that's about it). Here it feels like a
       | lot of the value is embedded in the business relationships
       | OpenAI's building up, which seem _much_ more difficult to
       | dislodge, even if others catch up from a capabilities
       | perspective.
        
         | HarHarVeryFunny wrote:
         | Google have really been caught with their pants down here.
         | 
         | Remember that OpenAI was created specifically to stave off the
         | threat of AI monopolization by Google (or anyone else - but at
         | the time Google).
         | 
         | DeepMind have done some interesting stuff with Go, protein
         | folding, etc., but nothing really commercial, nor addressing
         | their raison d'être of AGI.
         | 
         | Google's just-released ChatGPT competitor, Bard, seems
         | surprisingly weak, and meantime OpenAI are just widening their
         | lead. Seems like a case of the small nimble startup running
         | circles around the big corporate behemoth.
        
           | theGnuMe wrote:
           | The groups are focused on different things.
           | 
            | OpenAI went all in on generative models, i.e. diffusion
            | image models and large language models. DeepMind focused on
            | reinforcement learning, tree search, plus AlphaFold
           | approaches to biology. FAIR has translation, pytorch, and
           | some LLM stuff in biology.
           | 
           | What OpenAI is missing though is any AI research in biology,
           | but I bet they are working on it.
           | 
           | I'm not sure if this makes sense but OpenAI seems to be
           | operating at a higher level of abstraction (AGI) where they
           | are integrating modalities (text and image modality for now,
           | probably speech next) vs the other places have taken a more
           | focused applied approach.
        
         | [deleted]
        
         | poszlem wrote:
         | It reminds me of what went down with Netflix. At first, it
         | looked like you only needed one subscription to watch
         | everything, but now that other players have entered the market,
          | with their own business contacts, we're seeing ecosystems
         | fracture.
         | 
         | For example, Microsoft is collecting data from services A, B,
         | and C, while Google is gathering data from X, Y, and Z. And
         | when it comes to language models, you might use GPT for some
         | tasks and Llama or Bard for others. It seems like the fight
         | ahead won't be about technology, but rather about who has
         | access to the most useful dataset.
         | 
         | Personally, I also think we'll see competitors trying to take
         | legal action against each other soon.
        
         | Vespasian wrote:
         | 1) Not every use case will require the full power and
         | (probably) considerable cost of ChatGPT with GPT-4.
         | 
         | 2) Some companies absolutely cannot use OpenAI's tools, simply
         | because they are American and online. A competitor might
         | emerge to capture that market and be allowed to grow to be
         | "good enough".
         | 
         | 3) Some "countries" (think China, or the EU (who am I
         | kidding)) will limit OpenAI's growth until local alternatives
         | are available. Groundbreaking technologies have a tendency to
         | spread globally, and the current state of the art is not that
         | expensive (we are talking single-digit billions, once).
        
         | bottlepalm wrote:
         | I don't see much of a moat currently, or even developer
         | lock-in. The current APIs and this new plugin architecture are
         | dead simple.
        
       | nikcub wrote:
       | now add a ?q= url param to chat.openai.com that fills and submits
       | the prompt and I'm changing it to my default browser search
       | provider instantly
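       | 
       | (For Chrome/Firefox that's just a custom search engine entry
       | with a template like
       | 
       |     https://chat.openai.com/?q=%s
       | 
       | where %s is the browser's query placeholder - assuming OpenAI
       | ever adds the param.)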
        
       | seydor wrote:
       | For Expedia or an online shop it makes sense to pay OpenAI for
       | the traffic. But how will a content website make money from this?
       | "Tell me today's headlines" does not bring ad income. Will OpenAI
       | be paying for this content?
        
       | MichaelRazum wrote:
       | Google = Nokia? It's just crazy that they were leading the field
       | in "AI" and got blown away by OpenAI. Anyway, to the experts in
       | the field: how hard do you think it is to clone GPT-4, and what
       | would be the hardest part? I had the impression that it is
       | always about compute, and that you could catch up very quickly
       | if you had enough resources.
        
       ___________________________________________________________________
       (page generated 2023-03-23 23:00 UTC)