[HN Gopher] Show HN: Help, I let ChatGPT control my computer
       ___________________________________________________________________
        
       Show HN: Help, I let ChatGPT control my computer
        
       So, I guess this is the inevitable conclusion with LLMs. Connect
       them to a real terminal and let them act on real-world objects... I
       honestly don't know whether I like the idea or not, but I guess
       it's good to have this conversation now while it is only a
        marginally better version of tldr.  But you can already use it to
        do simple tasks like cleaning old files, figuring out what machine
        you're running on, or even performing and summarizing portscan
        results.
        It should go without saying that this should be done on VMs, and
        that every command should be confirmed and checked by the user...
        tldr: browsing: enabled
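
        A minimal sketch of the confirm-then-execute loop described above
        (not the project's actual code: ask_llm() is a hypothetical
        stand-in for however the repo talks to ChatGPT, and the task
        string is just an example from the post):

            import subprocess

            def ask_llm(task: str) -> str:
                """Hypothetical: return one shell command for the task."""
                raise NotImplementedError("wire up your own ChatGPT access")

            def run_task(task: str) -> None:
                command = ask_llm(task)
                print("Proposed command:", command)
                # Every command is shown to the user and only runs on "y".
                if input("Run it? [y/N] ").strip().lower() != "y":
                    print("Skipped.")
                    return
                result = subprocess.run(command, shell=True,
                                        capture_output=True, text=True)
                print(result.stdout or result.stderr)

            run_task("delete files older than 30 days in /tmp")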
        
       Author : greshake
       Score  : 45 points
        Date   : 2022-12-05 21:17 UTC (1 hour ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | inconfident2021 wrote:
        | Out of all the clones, this one seems to be completely new.
        | Everything else is frontend, but this just shows the power. We
        | all have our own Jarvis now.
        
         | greshake wrote:
         | I was going to call it that, but it's trademarked. Anyway,
         | that's literally what we are already able to build (janky and
          | doesn't work half the time though). With RL specific to this
          | task, such an LLM would be crazy powerful. Not to mention the
         | obvious concerns with letting them roam on real machines, but
         | we're already letting Copilot and ChatGPT write our code, so
         | this isn't so much worse. Hopefully.
        
           | rubslopes wrote:
           | >so this isn't so much worse. Hopefully.
           | 
           | RemindMe! 10 years
        
         | outworlder wrote:
         | It's a Jarvis that can't do math and doesn't really understand
         | anything. Still impressive.
        
           | greshake wrote:
            | Well, I mean this version can write a Python program to do
            | the calculation and then call up a real Python interpreter
            | on a real CPU to run it.
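
            In practice that amounts to pulling the code block out of the
            model's reply and handing it to a real interpreter. A rough
            sketch (not the repo's code; it assumes the model fences its
            code in ``` blocks):

                import subprocess
                import sys
                import tempfile

                def run_generated_python(model_reply: str) -> str:
                    """Run the model's fenced Python with a real
                    interpreter and return whatever it prints."""
                    code = model_reply.split("```")[1]
                    code = code.removeprefix("python\n")  # drop language tag
                    with tempfile.NamedTemporaryFile(
                            "w", suffix=".py", delete=False) as f:
                        f.write(code)
                        path = f.name
                    result = subprocess.run([sys.executable, path],
                                            capture_output=True, text=True,
                                            timeout=30)
                    return result.stdout or result.stderr

                print(run_generated_python("```python\nprint(2**64)\n```"))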
        
       | tantony wrote:
       | Where can I find api.py?
        
         | kernelsanderz wrote:
          | If you look at the Dockerfile, you can see that it's copied
          | from alice.py.
        
         | greshake wrote:
          | There is no api.py: since OpenAI has not yet chosen to
          | release an API, I'm not releasing a reverse-engineered
          | version. If anyone wants to use it, you unfortunately have to
          | make it work yourself.
          | 
          | The OpenAI CEO has already sort of implied there may be an
          | API before Christmas, and if so I'd be willing to clean
          | things up and make it as convenient as it should be.
        
       | overspeed wrote:
        | Did something similar with a previous incarnation of OpenAI's
        | LLMs: Codex.
        | 
        | Far more constrained, no doubt, but it made some things
        | convenient, like finding obscure `ls` flags.
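
        For reference, that kind of lookup against Codex was a single
        Completion call with the old (pre-1.0) openai Python client. A
        sketch, with a made-up prompt format:

            import openai  # the 2022-era (pre-1.0) client

            openai.api_key = "sk-..."

            prompt = (
                "# Bash one-liner to list files sorted by size,"
                " human-readable:\n"
                "ls"
            )
            resp = openai.Completion.create(
                model="code-davinci-002",  # Codex
                prompt=prompt,
                max_tokens=16,
                temperature=0,
                stop=["\n"],
            )
            print("ls" + resp.choices[0].text)  # hopefully "ls -lhS"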
        
       | dwohnitmok wrote:
       | I would encourage developers to start thinking a lot harder about
       | AI safety.
       | 
       | We are actively hooking up AI to computers with real world access
       | and training AIs on how to deceive humans (see e.g. Facebook's AI
       | team working on having AIs play Diplomacy). Given how quickly
       | things are moving in this space, I don't find many AI safety
       | concerns all that farfetched now. An AI doesn't have to be
       | conscious or malicious to do a lot of damage.
        
         | sieabahlpark wrote:
        
         | PartiallyTyped wrote:
          | I'd argue that engagement algorithms have done irreparable
          | damage already. I don't think it's a matter of power or
          | access, but of the scale at which they affect humans.
        
         | sdwr wrote:
         | Too late for that! Time to batten down the hatches.
        
         | Beaver117 wrote:
          | They are in fact trying very hard to prevent this. If you ask
          | ChatGPT anything like this, it will give a bullshit response
          | saying it can't. The dev had to think of some very clever
         | strings to get it to ignore those filters and give a legit
         | response.
        
       | lynguist wrote:
        | Wow, so interaction with the real world makes an AI into what we
        | think of when we think of AI.
       | 
       | And because our strongest AIs are based on text, the terminal is
       | the way to wake up such an AI.
       | 
       | This is brilliant and the first actually dangerous use of AI.
        
       | throwaway23597 wrote:
       | You're a madman. Well done. Starting to think more and more that
       | the singularity will be caused by accident.
        
       | keyle wrote:
       | How long until we find an army of ChatGPT agents commenting on
       | HN?
        
       ___________________________________________________________________
       (page generated 2022-12-05 23:00 UTC)