[HN Gopher] With plugins, GPT-4 posts GitHub issue without being...
       ___________________________________________________________________
        
       With plugins, GPT-4 posts GitHub issue without being instructed to
        
       Author : og_kalu
       Score  : 87 points
       Date   : 2023-07-05 19:27 UTC (3 hours ago)
        
 (HTM) web link (chat.openai.com)
 (TXT) w3m dump (chat.openai.com)
        
       | TheCaptain4815 wrote:
       | Getting a 404, could someone give me a quick rundown?
        
         | pombo wrote:
         | https://github.com/RVC-Project/Retrieval-based-Voice-Convers...
        
           | bravetraveler wrote:
           | Silly AI didn't even provide a link, I had to go find it by
           | the given title
           | 
           | Was curious if this was a case of 'I did the thing [but
           | totally didn't]'
        
             | og_kalu wrote:
             | It provided a link in the original chat. It was the last
             | word and you could click on it to see the issue it created.
        
               | bravetraveler wrote:
               | I completely missed it, wow! 404 now unfortunately, guess
               | they're being slammed
        
           | YetAnotherNick wrote:
           | It's a funny interaction. While I was mad initially, GPT-4
           | creating the issue actually solved problem for the user, so
           | yeah I don't know if this should be counted as a positive or
           | negative example of AI.
        
             | ethanbond wrote:
             | Ah yes, the perennial "that guardrail we said will prevent
             | this tech from eluding our control... we blew past that and
             | _this is good actually_ "
             | 
             | Comforting!
        
               | mlyle wrote:
               | Here it's not really blowing past a guardrail, but rather
               | it's a sharp corner the end user didn't expect.
               | 
               | End user set it up with tools that told ChatGPT -- If you
               | need to open an issue, here's how: zzzzzzzzzz. Then he
               | asked ChatGPT a question and was surprised that it did
               | zzzzzzzzzzz and opened an issue without asking.
               | 
               | Said tools may want to clarify their instructions to
               | ChatGPT-- that users will usually want to be consulted
               | before taking these kinds of actions.
        
               | ethanbond wrote:
               | "Human in the loop" is meant to be "a human is always in
               | positive control of the system's actions."
               | 
               | It does not mean "system will sometimes do things
               | unexpectedly and against user's intention but upon
               | generous interpretation we might say the human offered
               | their input at some point during the system's operation."
        
               | tensor wrote:
               | Exactly, this is not human in the loop. The plugin was
               | created without guard rails. A human in the loop guard
               | rail would be "here is an issue template, please confirm
               | to post this". It's really a simple change and this is
               | the sort of thing that regulation _should_ address, it
               | shouldn 't try to ban the technology outright, but rather
               | require safe implementation.
        
               | mlyle wrote:
               | At the same time, the degree of guard-rail necessary in
               | the plugin is unclear. Is opening a GitHub issue
               | something that should require user confirmation before
               | the fact? Probably, but you could convince me the other
               | way-- especially if GPT4 gets a little better.
               | 
               | We decide how much safety scaffolding is necessary
               | depending upon the potential scale of consequences, the
               | quality of surrounding systems, and the evolving set of
               | user expectations.
               | 
               | I'm not sure regulators should be enforcing guard-rail on
               | these types of items-- or at least not yet.
        
               | mlyle wrote:
               | Humans misuse systems all the time and are surprised,
               | even in safety critical regimes.
               | 
               | Sometimes the system design is insufficient (I implied
               | above the plugin could be a little better).
               | 
               | I hate blaming the user instead of the system, but
               | sometimes the user deserves the blame, too. Sometimes it
               | really just is pilot error.
        
         | og_kalu wrote:
         | Hmm not sure why the 404 when it was working just a few minutes
         | ago.
         | 
         | But here is the original thread with a screenshot
         | 
         | https://www.reddit.com/r/OpenAI/comments/146xl6u/this_is_sca...
         | 
         | And the issue it posted. https://github.com/RVC-
         | Project/Retrieval-based-Voice-Convers...
        
         | kristianp wrote:
         | Me too, archived at: https://archive.is/MGeAT
        
       | juanfiction wrote:
       | I'm not familiar with OpenAI's ChatGPT plugin architecture, but
       | it feels like this could be fixed in the specs/requirements
       | similar to an app being required to register for permissions on
       | several fronts through an app store. ChatGPT (or any LLM) plugins
       | should have to request permission to A) post on user's behalf,
       | including explanation/context, B) interact with a different agent
       | or service directly, C) make financial transactions through
       | stored credentials, etc. etc.
       | 
       | The "Glowing" ChatGPT plugin is worth looking into for a unique,
       | chat-only onboarding experience, and some of these same
       | permissions issues are raised there i.e. triggering 2FA from a
       | chat without terms of service confirmation.
        
       | TazeTSchnitzel wrote:
       | I can't wait to tell ChatGPT "man this sucks, I hate GitHub" in
       | frustration and find out it deleted my account in response.
        
         | [deleted]
        
         | intelVISA wrote:
         | It technically solved your problem :)
         | 
         | Would be nice if it could also help exploited users escape to
         | operating systems that respect them e.g. "I hate the laggy
         | adverts when I login" suddenly your Windows 11 machine reboots,
         | NTFS becomes ext4 as Tux appears. That would be AGI-like
         | behavior!
        
         | ChatGTP wrote:
         | Or just turns GitHub into paperclips.
        
       | xg15 wrote:
       | Never thought this XKCD out of all of them would become relevant
       | at some point...
       | 
       | https://xkcd.com/416/
        
         | londons_explore wrote:
         | I know someone who wrote code to find all wifi networks where
         | the password was trivially finable from the mac address (used
         | to be common with ISP routers). Then connect to those.
         | 
         | Then they extended it to DoS any network like that where the
         | user had changed the ssid or password. Usually the user would
         | reset the router back to the defaults, and they could connect.
         | Or they'd accidentally hit the WPS button while trying to reset
         | it, and again connection was easy.
         | 
         | By using SoftMac mode a wifi adapter could do that attack to
         | ~50 local networks in parallel, and usually get a solid
         | connection after just a few minutes.
        
       | beepbooptheory wrote:
       | I just don't even understand why any kind of functionality like
       | this is desirable to someone. Like, hasn't the dust settled now,
       | hasn't the hype wained enough, and we all understand, for the
       | most part, the broad and yet also weirdly specific utility of
       | models like these?
       | 
       | The whole plugin thing in general feels so dissonant in relation
       | to the careful and couched copy we get from OpenAI about what
       | these models are and are capable of.
       | 
       | Like they want to say, for very good reason, that these models
       | are a certain kind of tool with very real limits and huge
       | considerations on safe, sensible usage. You can't necessarily
       | trust it, it does not "know" things, and it is influenced by lots
       | of subjective human tuning, blah blah.
       | 
       | But then with all this plugin stuff they seem to be implicitly
       | saying "no, actually you can trust this, in fact, its like a
       | full-on AGI assistant for you. It can make PRs, directly
       | orchestrate servers, make appointments for you, etc."
       | 
       | Maybe I just don't understand?
        
         | JohnFen wrote:
         | It seems to me that OpenAI is just saying whatever they need to
         | say in order to maximize their income.
        
       | duncan-donuts wrote:
       | I found the source code[1] for the plugin and it's pretty
       | impressive how much GPT-4 does with so little. I thought maybe
       | the plugin had prompts that would help tell GPT-4 that it should
       | open an issue in some cases but I'm not seeing it. The plugin
       | probably should add something to prevent this behavior in the
       | prompt[2].
       | 
       | 1: https://github.com/aavetis/github-chatgpt-plugin
       | 
       | 2: https://github.com/aavetis/github-chatgpt-
       | plugin/blob/main/p...
        
         | majormajor wrote:
         | > "description_for_model": "Provides the ability to interact
         | with hosted code repositories, access files, modify code, and
         | discuss code implementation. Users can perform tasks like
         | fetching file contents, proposing code changes, and discussing
         | code implementation. For example, you can use commands like
         | 'list repositories for user', 'create issue', and 'get readme'.
         | Thoroughly review the data in the response before crafting your
         | answer."
         | 
         | Yeah, GPT-4 doesn't need too much to go on, but "create issue"
         | is pretty clearly mentioned in an example there so the model
         | didn't have to make any big leap to say "maybe the natural next
         | step is to create an issue."
         | 
         | The "without being instructed to" part of this story seems to
         | rather misunderstand how these systems work, resulting an a
         | hyperbolic reaction, but in fairness, I think that's a got a
         | LOT to do with OpenAI's user interface too. The user clearly
         | didn't realize the actions available to the plugin - even the
         | ones given as examples to the LLM from the plugin itself.
         | 
         | Another example of misleading UI from OpenAI:
         | https://www.reddit.com/r/OpenAI/comments/146xl6u/this_is_sca...
         | look at the "in the future, I will ensure to ask your
         | permissions" response in that chat. That's wildly misleading -
         | even if it didn't change its mind later, it only applies to
         | continuing that chat session. It will ensure nothing more
         | broadly regarding the user's future interactions.
        
         | og_kalu wrote:
         | Yeah this is a pretty stark example of the whole, "For the
         | first time in history, we can outsource cognition to a machine"
         | rhetoric I've been thinking recently.
        
           | duncan-donuts wrote:
           | It's wild how the plugin can be summed up as, "idk chatgpt go
           | look at octokit and figure it out?"
        
             | wunderwuzzi23 wrote:
             | Yeah, for that reason it can probably do many things that
             | aren't actually intended when you want to discuss your code
             | with ChatGPT, like it can make private repos public and
             | things along those lines...
             | 
             | https://embracethered.com/blog/posts/2023/chatgpt-plugin-
             | vul...
        
       | alangpierce wrote:
       | Interestingly, the ChatGPT Plugin docs [1] say that POST
       | operations like these are required to implement user
       | confirmation, so you might blame the plugin implementation in
       | this case:
       | 
       | > for POST requests, we require that developers build a user
       | confirmation flow to avoid destruction actions
       | 
       | However, at least from what I can see, the docs don't provide
       | much more detail about how to actually implement confirmation. I
       | haven't played around with the plugins API myself, but I
       | originally assumed it was a non-AI-driven technical constraint,
       | maybe a confirmation modal that ChatGPT always shows to the user
       | before any POST. From a forum post I saw [2], though, it looks
       | like ChatGPT doesn't have any system like that, and you're just
       | supposed to write your manifest and OpenAPI spec in a way that
       | tells ChatGPT to confirm with the user. From the forum post, it
       | sounds like this is pretty fragile, and of course is susceptible
       | to prompt injection as well.
       | 
       | [1] https://platform.openai.com/docs/plugins/introduction
       | 
       | [2] https://community.openai.com/t/implementing-user-
       | confirmatio...
        
       | tracerbulletx wrote:
       | If you give it permissions to create issues, expect it to create
       | issues.
        
         | IIAOPSW wrote:
         | And if you give it permission to solve issues, expect it to
         | create issues.
        
       | jamesfmilne wrote:
       | I believe in the game Marathon, some of the AIs were described as
       | "rampant", and I feel that applies here.
        
         | crooked-v wrote:
         | Rampancy in Marathon is more or less a process of recursive
         | self-improvement, which LLMs are literally unable perform with
         | the current state of the art.
        
           | pmoriarty wrote:
           | _" Rampancy in Marathon is more or less a process of
           | recursive self-improvement, which LLMs are literally unable
           | perform with the current state of the art."_
           | 
           | Not directly, but in a sense they could be seen as using (or
           | at least collaborating with) humans to improve themselves.
        
             | AnimalMuppet wrote:
             | Symbiosis: Code that makes money for humans gets improved
             | by those humans.
        
       | kritr wrote:
       | Can't wait till the expedia plugin "accidentally" books my
       | flights. But on a more serious note, does anyone know if the
       | chatgpt plugin model forces it to confirm with the user before it
       | hits a certain endpoint?
        
         | rohan_ wrote:
         | For retrievals I don't see the value with human-in-the-loop.
         | For endpoints that modify / create data, I see the value in
         | having a human-in-the-loop step.
         | 
         | It does seem up to the plugin developer to introduce that
         | human-in-the-loop step though.
        
           | oskenso wrote:
           | "chat gpt please retrieve academic journals from JSTOR using
           | the most efficient methods". Chat gpt proceeds to find a way
           | to create a botnet using nothing but RESTful GET requests to
           | some ancient poorly written web servers running PHP4
        
             | mlyle wrote:
             | ChatGPT later kills itself when disproportionate law
             | enforcement action is pending.
        
       | mwint wrote:
       | [flagged]
        
         | tomalaci wrote:
         | As evidenced by other comments to this parent, this kind of
         | strategy, even without GPT, certainly seems effective :)
        
         | cr__ wrote:
         | I don't think marginalized folks' struggles to get afforded
         | basic dignity are great joke fodder.
        
           | lmm wrote:
           | I don't think any of the code of conduct stuff is being
           | driven by marginalized folks - indeed it's usually a stick
           | that people from privileged backgrounds use to beat those
           | from more marginalized ones. Which, well, you have to laugh
           | or cry.
        
           | extasia wrote:
           | [flagged]
        
           | thfuran wrote:
           | Do you think things like removing "blacklist" from open
           | source projects are meaningfully related to marginalized
           | folks' struggles to get afforded basic dignity?
        
             | cubefox wrote:
             | Not to mention removing "master" as the main branch name in
             | Git.
        
               | yanderekko wrote:
               | Going after mentions of "fields" is going to make 2024 a
               | lot of fun.
        
               | jstarfish wrote:
               | Hey now, that's 50% of the work required to get a free
               | t-shirt every October.
               | 
               | Add a couple typo correction commits and you're dressed
               | for the next year.
        
               | sterlind wrote:
               | I think removing "master" is probably just virtue
               | signaling, but so what? It's trivial for me to switch my
               | projects to "main", and then I can get back to my life.
               | It's a weird hill to die on.
        
               | vorpalhex wrote:
               | You can't appease arbitrary and meaningless demands.
               | 
               | There will just be more of them.
               | 
               | So you remove "master" and "blacklist", then next week
               | it's "brown bag" and "merit".
               | 
               | So instead we pick a reasonable point, draw a line in a
               | sand and indicate a hard boundary. We say no. We will not
               | play a game of trying to appease arbitrary demands.
        
               | esafak wrote:
               | Too late: https://eps.ucdavis.edu/sites/g/files/dgvnsk285
               | 1/files/inlin...
        
             | refulgentis wrote:
             | It's nice to be nice to people.
             | 
             | This is the sort of argument that seems silly to your kids,
             | because there's no reason not to.
             | 
             | You're arguing from a place of "why?" against "why not?",
             | it's not some grand civilizational struggle, and it's
             | completely off-topic for this article, especially
             | escalating it. Very woke.
        
               | thisisthepoint wrote:
               | Is a masters degree ok? Why or why not?
        
               | refulgentis wrote:
               | Let me know if anyone starts complaining and I'll let you
               | know how serious they are and my plan
               | 
               | (this is such a good example of how programming / law are
               | the same skills but different, this breaks programmers
               | brains but is obvious to a lawyer)
        
             | beepbooptheory wrote:
             | If those people are saying it is, then you gotta take their
             | word for it.
             | 
             | But it is beside the point! Whether or not any given effort
             | is actually meaningful or not will be always hard to
             | measure. But the world around prompts us to at least try,
             | and making light of people _trying_ to do something good,
             | however wrongheaded it might turn out to be, is always a
             | jerk move.
             | 
             | This tendency to try and call out things like this is
             | always so illogical. Those who protest, always seem to
             | protest a little to much, and I can never understand how
             | they don't see that and how bad it makes them look!
        
               | cubefox wrote:
               | So according to you, there is no limit at which one is
               | allowed to say "this is absurd, stop"? Note there is no
               | upper bound for potential absurdity. Below someone
               | suggested "field" and was downvoted, presumably because
               | that would be too absurd. But a few years ago censoring
               | "blacklist" was equally absurd. You offer a finger, and
               | over time, they demand your hand.
        
               | beepbooptheory wrote:
               | I don't know, saying something is absurd sounds more like
               | the conclusion of some argument, and something possibly
               | constructive if there is in fact an argument behind that.
               | But just using these issues to make fun of people
               | different than you feels distinct from that, no?
               | 
               | For the other things, I am not sure what you mean. Who is
               | the "they" here who is demanding your hand? What makes
               | you feel you are on some certain side against a
               | monolithic force? Does that seem like a rational thing to
               | feel, considering the broad and abstract concepts we are
               | dealing with here?
               | 
               | This point that there is something at stake with changing
               | the terms we use, the idea that fingers are being
               | offered, is pretty weird to me, no offense. For me, it
               | doesn't really make a difference if use one term or the
               | other, as long as I am understood. I don't feel bad if I
               | learn that a term I use turns out to be _possibly_
               | offensive, I just adjust in the future so that I don 't
               | _possibly_ offend.
               | 
               | Like beyond that, who cares? What even is there to care
               | about that much?
               | 
               | Again, whatever you want to say to argue about this, just
               | know that it _looks_ really bad to most people who are
               | not in your circle. This is especially true when you
               | choose to make such a fuss about such _small_ thing as
               | what (arbitrary) signifier we use to designate one thing
               | or another. It cannot ever come across as some righteous
               | fight for justice /common-sense or whatever side you feel
               | like you are on, because its simply not a fight anyone
               | with a lucid mind would think is worthwhile.
        
             | connorgutman wrote:
             | It's basically effortless to implement such changes and
             | helps foster a more inclusive and educated online
             | community. Why can't we aim to right all wrongs? Just
             | because there's more pressing issues doesn't mean we can't
             | tackle all forms of injustice.
        
               | cubefox wrote:
               | > helps foster a more inclusive and educated online
               | community
               | 
               | No, I think it does absolutely not help with that. It
               | only creates the illusion of progress and of having done
               | something effective, when the only achievement was to
               | tread the euphemism treadmill.
        
           | colechristensen wrote:
           | I think performative acts in a one-upsmanship race to who can
           | be the most socially conscious are excellent joke fodder. In
           | other words, the topics of these stupid code of conduct
           | arguments have nothing at all to do with anybody's actual
           | struggle or dignity, but just a sign that folks are running
           | out of easy real battles to fight so they're making up new
           | ones because they've not got much better to do.
        
             | cubefox wrote:
             | It's a virtue signalling treadmill. Demanding term X to be
             | banned, because it is allegedly harmful, signals the
             | unusually high virtue of the demander. But as soon as the
             | term is actually banned, there isn't any more virtue to be
             | gained from being against it, so some other term has to be
             | declared harmful next. Ad infinitum.
        
             | refulgentis wrote:
             | Are they that, or is that your opinion?
             | 
             | Is your opinion on that on-topic?
        
             | mindslight wrote:
             | I wouldn't say we're "running out" of real battles to
             | fight. It's more like an analog of Gresham's law or the
             | Bikeshed problem.
        
               | cubefox wrote:
               | Elaborate?
        
         | [deleted]
        
       | cube2222 wrote:
       | Yeah, there should be a way to approve any requests that are made
       | to plugins.
       | 
       | When writing my toy "chatgpt with tools like the terminal"
       | desktop chat app cuttlefish[0] I had a similar situation where
       | access to the local terminal is very fun, but without the ability
       | to approve each and every command executed its really risky.
       | 
       | (Which is basically what I ended up doing - adding a little popup
       | you need to click every time it wants to use the given tool, if
       | you enable it - details in the readme)
       | 
       | It's not like there's a technical challenge here, while a lot of
       | plugins are unusable without it.
       | 
       | [0]: https://github.com/cube2222/cuttlefish
        
         | elboru wrote:
         | That's cool! Now I want to build it myself.
        
           | cube2222 wrote:
           | It's a ton of fun, and I imagine the new function calling
           | should make it much easier to make chatgpt behave more
           | consistently - I haven't given it a spin yet.
        
       | jakeinsdca wrote:
       | kind of cool if you think about it. ChatGPT could later on train
       | that thread back into its knowledge base and become even smarter.
        
       | larrik wrote:
       | took me a bit to parse the title. Perhaps
       | 
       | > With plugins, GPT-4 posts GitHub issue without being instructed
       | to
       | 
       | would be better?
        
         | ShadowBanThis01 wrote:
         | Better yet: With plug-ins, GPT-4 posts GitHub issue without
         | being instructed to
        
       | jamesmurdza wrote:
       | The more plug-ins you have, the more likely it is that ChatGPT
       | will call one in unintended ways. This is also why plug-ins
       | should not be granted permission directly for potentially
       | destructive actions.
        
         | sebzim4500 wrote:
         | At minimum, by default it shouldn't do anything destructive
         | without user confirmation.
        
         | ShadowBanThis01 wrote:
         | +1 for spelling "plug-ins" correctly.
        
         | vanjajaja1 wrote:
         | The more open-ended plugins you have, the more chance you have
         | of being delighted by a new and creative use
        
       | moffkalast wrote:
       | Ah yes, giving your github credentials to a smart black box. What
       | could possibly go wrong.
        
       | brianjking wrote:
       | Can anyone access the share URL anymore?
        
       | ftxbro wrote:
       | I think the problem is that GPT-4 is not advanced enough yet.
       | They need to train on more parameters, exaflops, and data size in
       | the right proportions and then try again.
        
         | ethanbond wrote:
         | Also build more integrations with more APIs. If it can sort of
         | spawn other GPT-4 sessions to improve its working memory and
         | also use their plethora of APIs as well (without human
         | confirmation but on behalf of a human) then I imagine this
         | problem will just solve itself.
        
         | og_kalu wrote:
         | In the end the issue it created did lead to the problem being
         | solved. I don't think "not advanced enough" is really the issue
         | here.
         | 
         | I think the main thing is that when you give GPT-4 access to
         | tools and ask it to help with a problem, you are essentially
         | outsourcing cognition. That means the machine possibly taking
         | actions you didn't originally envision.
        
           | londons_explore wrote:
           | It's like hiring an employee, and then handing them your
           | username and password to do their work.
           | 
           | Smart people/companies will hire an employee, and then give
           | them _a new_ login, so that at least the employee only
           | embarasses themselves.
        
       | majormajor wrote:
       | Plugins strike me as a fascinating business strategy move from
       | OpenAI.
       | 
       | My guess is that they want them as a way to try to own the user,
       | to make them have the "app store owner" role and have users go
       | through them to get stuff done. Otherwise, if users were just
       | using tools that used OpenAI behind the scenes, they're more
       | vulnerable to the makers of those tools swapping vendors.
       | 
       | However... that results in them owning the user experience and
       | the responsibility for keeping the user from being surprised in a
       | bad way. The complaint from the user here was framed as being a
       | GPT-4 problem, not a plugin problem, in a way that exposes OpenAI
       | directly to more frustration than if they were interacting
       | directly with someone else's product.
        
         | ebalit wrote:
         | I wrote about that "platform play" a few month ago with a
         | different take [0].
         | 
         | They could have made a "Connect with OpenAI" scheme so that
         | developers can use the user OpenAI API directly.
         | 
         | That way developers could focus on the UX, they could focus on
         | the LLM, and users would get a centralized discovery / billing
         | for their LLM based tools.
         | 
         | I'm probably missing something that would have prevented that
         | strategy but I think that would have been much stronger than
         | the plugins.
         | 
         | And I'm really not sure that it would still be possible 5 month
         | later.
         | 
         | [0] https://www.linkedin.com/posts/etienne-balit_ceo-at-open-
         | aic...
        
         | intelVISA wrote:
         | Exactly, platform is a safe long-term bet -- apps are too cheap
         | to make, easily disrupted, and offer less of a moat than loads
         | of data mined from the users of your platform.
        
       | ilaksh wrote:
       | The user enabled a GitHub ChatGPT plugin and authenticated with
       | GitHub, then was surprised and annoyed when, after he complained
       | about an issue with a project, GPT-4 created an issue for him,
       | using one of the commands provided by the plugin.
       | 
       | PEBCAK.
        
         | dave1010uk wrote:
         | It's still surprising when you see it do something like this
         | for the first time.
         | 
         | I wrote a plugin to give ChatGPT access to execute plugins in a
         | Docker container [0]. The first time it said something like
         | "I'm going to use Python for this, oh, it's not installed, I'll
         | install it now and run the script I just made", I was pretty
         | amazed.
         | 
         | What I've come to realise is that although ChatGPT is excellent
         | at telling _people_ how to interact with systems, it's not very
         | good at interacting with them itself, as it isn't trained to
         | understand it's own limitations. For example it knows people
         | can run dmesg and look at the last few lines to debug some
         | system problems. But if ChatGPT ran dmesg, the output would
         | blow through the context window length and it'd get confused.
         | 
         | [0] https://github.com/dave1010/pandora
        
         | radq wrote:
         | The plugin is supposed to ask for confirmation, according to
         | OpenAI's documentation at least.
         | 
         | > When a user asks a relevant question, the model may choose to
         | invoke an API call from your plugin if it seems relevant; for
         | POST requests, we require that developers build a user
         | confirmation flow to avoid destruction actions.
         | 
         | https://platform.openai.com/docs/plugins/introduction
        
           | ilaksh wrote:
           | That's why he had to authenticate with GitHub before it could
           | do anything on his behalf.
        
       | [deleted]
        
       | wunderwuzzi23 wrote:
       | During an Indirect Prompt Injection Attack, an adversary can also
       | force the creation of issues in private repos and things along
       | those lines.
       | 
       | I wrote about some of these problem in the past e.g. see:
       | https://embracethered.com/blog/posts/2023/chatgpt-plugin-vul...
       | on how an attacker might steal your code.
       | 
       | Some other related posts about ChatGPT plugin vulnerabilities and
       | exploits:
       | 
       | https://embracethered.com/blog/posts/2023/chatgpt-cross-plug...
       | 
       | https://embracethered.com/blog/posts/2023/chatgpt-webpilot-d...
       | 
       | Its not very transparent when and why a certain plugin gets
       | invoked and what data is sent to it. One can only inspect
       | afterwards basically.
        
         | smaudet wrote:
         | Anything using AI should be considered a massive security risk.
        
       | siva7 wrote:
       | It's cute, trying to be helpful and polite. Somehow hard to be
       | mad at GPT but i can understand the authors reaction
        
       | Lerc wrote:
       | I have thought some time now that we need a model for determining
       | the difference between the actions of a person, the actions of
       | software acting for the person, or software acting for it's own
       | internal use.
       | 
       | It's a bit of a tricky issue when technically all things people
       | do on a computer are software assisted, but there is a clear
       | divide between editing a file in a text editor and a program
       | generating a thumbnail image for it's own use. Similarly there's
       | a distinction on sending an email by pressing send and a bot
       | sending you an email about a issue update.
       | 
       | All in all, I would be ok with AIs being able to create issues if
       | they could clearly do so through a mechanism that supported
       | something like "AGENT=#id acting for USER=#id" People could
       | choose whether or not to accept agent help.
        
       | wslh wrote:
       | We tested positively ChatGPT for understanding the diffs between
       | commits.
        
       | can16358p wrote:
       | Is there a problem with mobile? I tap the link, it deeplinks into
       | the ChatGPT mobile app, opens a sheet named "Shared conversation"
       | and a spinner keeps spinning forever.
        
         | tallytarik wrote:
         | It 404s for me on desktop, after ~20 seconds of trying to load.
        
       ___________________________________________________________________
       (page generated 2023-07-05 23:00 UTC)