[HN Gopher] Using ChatGPT for Home Automation
       ___________________________________________________________________
        
       Using ChatGPT for Home Automation
        
       Author : iamflimflam1
       Score  : 152 points
       Date   : 2023-05-20 16:57 UTC (6 hours ago)
        
 (HTM) web link (www.atomic14.com)
 (TXT) w3m dump (www.atomic14.com)
        
       | charbull wrote:
        | very cool!
        
       | evmaki wrote:
       | Super exciting to see the work happening in this area! I can
       | especially appreciate the use of ChatGPT to orchestrate the
       | necessary API calls, rather than relying on some kind of
       | middleware to do it.
       | 
       | I have been working in this area (LLMs for ubiquitous computing,
       | more generally) for my PhD dissertation and have discovered some
       | interesting quality issues when you dig deeper [0]. If you only
       | have lights in your house, for instance, the GPT models will
       | always use them in response to just about any command you give,
       | then post-rationalize the answer. If I say "it's too chilly in
       | here" in a house with only lights, it will turn them on as a way
       | of "warming things up". Kind of like a smart home form of
       | hallucination. I think these sorts of quality issues will be the
       | big hurdle to product integration.
       | 
       | [0] https://arxiv.org/abs/2305.09802
        
         | ftxbro wrote:
         | > If I say "it's too chilly in here" in a house with only
         | lights, it will turn them on as a way of "warming things up".
         | 
         | is it bad that I'm on GPT's side on this, i mean if they are
         | incandescent lights they are small electric heaters and what
         | else do you expect it to do
        
           | evmaki wrote:
           | It certainly makes logical sense. I think if you have the
           | ability to control the light in the first place via an API,
           | it's probably an LED smart bulb and thus doesn't produce much
           | heat. At least, I'm not aware of any incandescent smart
           | bulbs.
        
             | ftxbro wrote:
              | I mean the laziest way to control a house is to put a
              | smart plug on every outlet, making each one on/off
              | controllable. This would make every incandescent bulb a
              | smart bulb.
        
           | bee_rider wrote:
            | It would probably be more useful to report to the user that
            | it doesn't have any control over that aspect of the
            | environment.
           | 
           | I'd be curious to see what it does if it is told the sun is
           | too bright...
        
             | hutzlibu wrote:
              | Given access to more resources, it might try to
              | permanently solve skin cancer.
        
           | awwaiid wrote:
           | Yeah but I think the idea is that it is a knob that calls to
           | be turned. "It's warm in here" -> "I'll make the light blue
           | so you feel nice and cool". "How fast do sparrows fly?" ->
           | "Making the light brown". Like it might want to do
           | _something_ and tweaking the hue or brightness are all it can
           | do.
           | 
           | Good reason to always try to include in a prompt a way-out, a
           | do-nothing or I-don't-understand answer.
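A minimal sketch of that "way out": give the model an explicit do-nothing action in its response schema, then handle it in the dispatcher. All the names and the schema here are hypothetical, not from the article:

```python
import json

# Hypothetical action schema offered to the model. The explicit "none"
# action gives it a do-nothing way out instead of forcing it to pull
# the only lever it has.
SYSTEM_PROMPT = """You control a house that has only smart lights.
Respond with JSON: {"action": "set_light" | "none", "reason": "..."}.
If no available device can help with the request, use "none"."""

def dispatch(model_reply: str) -> str:
    """Parse the model's JSON reply and refuse gracefully on 'none'."""
    cmd = json.loads(model_reply)
    if cmd["action"] == "none":
        return "Sorry, I can't help with that: " + cmd["reason"]
    return "calling light API"  # stand-in for the real API call

print(dispatch('{"action": "none", "reason": "no heater available"}'))
```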
        
             | [deleted]
        
           | dclowd9901 wrote:
           | Gotta wonder why someone is saying "turn up the heat" to an
           | AI that's only connected to lights.
        
             | bee_rider wrote:
             | Visiting a friend's place or in an Airbnb or something like
             | that?
        
         | iamflimflam1 wrote:
         | Very interesting paper!
         | 
         | It's something that I've been wondering about with ChatGPT
         | plugins - they've kind of left it up to the user to
         | enable/disable plugins. But there's definitely going to come a
         | point where plugins conflict and the LLM is going to have to
         | choose the most appropriate plugin to use.
         | 
         | I have been very impressed at how good it is at turning random
         | commands into concrete API calls. You are right though, pretty
         | much any command can be interpreted as an instruction to use a
         | plugin.
        
           | evmaki wrote:
           | Thanks! That is part of the challenge as this idea scales imo
           | - once you've increased the number of plugins or "levers"
           | available to the model, you start to increase the likelihood
           | that it will pull some of them indiscriminately.
           | 
           | To your point about turning random commands into API calls:
           | if you give it the raw JSON from a Philips Hue bridge and ask
           | it to manipulate it in response to commands, it can even do
           | oddly specific things like triggering Hue-specific lighting
           | effects [0] without any description in the plugin yaml. I'm
           | assuming some part of the corpus contains info about the Hue
           | API.
           | 
           | [0] https://evanking.io/posts/homegpt/
        
         | phh wrote:
         | > If I say "it's too chilly in here" in a house with only
         | lights, it will turn them on as a way of "warming things up".
         | 
         | Thanks for the example that's interesting.
         | 
          | FWIW, this is pretty much what has been described as the
          | "waluigi" effect, a bit extended: in a text you'll find on
          | the internet, if some information is mentioned at the
          | beginning, it WILL be relevant somewhere later in that text.
          | So an auto-completion algorithm will use all the information
          | that has been given in the prompt. In your example it's put
          | in an even weirder situation: the only information the model
          | has is the lights and that you're cold, and it must generate
          | a response. It would be a fun psychological study to look at,
          | but I'm pretty sure even humans would do the same in that
          | situation (assuming they realize that lights may indeed
          | produce a little bit of heat).
        
           | og_kalu wrote:
           | For performant enough models, you can just instruct it not to
           | necessarily use that information in immediate completions.
           | 
           | adding something like
           | 
           | "Write the first page of the first chapter of this novel. Do
           | not introduce the elements of the synopsis too quickly. Weave
           | in the world, characters, and plot naturally. Pace it out
           | properly. That means that several elements of the story may
           | not come into light for several chapters."
           | 
           | after you've written up key elements you want in the story
           | actually makes the models write something that paces
           | ok/normally.
        
           | ftxbro wrote:
           | > FWIW, this is pretty much what has been described as
           | "waluigi" effect a bit extended
           | 
            | Sorry, I disagree for a couple of reasons. First, turning
            | the lights on is literally the only thing the bot can do to
            | heat up the house at all. Turning on the lights does heat
            | it up a little
           | bit. So it's the right answer. Second, that's not the Waluigi
           | effect, not even 'pretty much' and not even 'a bit extended'.
           | Both of them are talking about things LLMs say, but other
           | than that no.
           | 
           | The Waluigi effect applied to this scenario might be like,
            | you tell the bot to make the house comfortable, and describe
            | all the ways a comfortable house should be. Then by doing
           | this you have also implicitly told the bot how to make the
           | most uncomfortable house possible. Its behavior is only one
           | comfortable/uncomfortable flip away from creating a living
            | hell. Say that in the course of its duties the bot is for
            | some reason unable to make the house as comfortable as it
            | would like. It might decide that this is because it's
            | actually trying to make the house uncomfortable instead of
            | comfortable. So now you've got a bot
           | turning your house into some haunted house beetlejuice
           | nightmare.
        
         | kingo55 wrote:
         | Making IOT API calls is a solved problem with Home Assistant -
         | plus it works locally.
         | 
         | Where I see this working best is giving Chat GPT some context
         | about the situation in your home and having it work out complex
         | automation logic that can't be implemented through simple
         | rules.
        
       | gumballindie wrote:
       | Here is what a large budget buys: massive marketing campaigns.
       | ChatGPT spam is not showing signs of slowing down anytime soon.
        | Are people aware there are other AIs out there they can use?
        | Some free?
        
         | sukilot wrote:
         | [dead]
        
       | aussieguy1234 wrote:
       | If it can turn on a light, it can open the pod bay doors.
        
       | iamflimflam1 wrote:
       | There's a good video showing it working here:
       | https://youtu.be/BeJVv0pL5kY
        
       | wkat4242 wrote:
       | This is so cool.
       | 
       | Only drawback: it won't feel complete unless you build pod bay
       | doors on your house.
        
       | ilyt wrote:
       | I'm not against idea of having LLM be smart assistant for home,
       | but I do have problem with sending any of that to cloud.
       | 
       | It's one thing to use it as convenient way to change anything
       | remotely but linking home automation not only into constant
       | internet connectivity but also "that one particular cloud thing
       | that might disappear at any moment" (hello google) just seems
       | like setting yourself for problems later. At least in this case
       | he's still left with HASS if cloudy part goes away.
       | 
       | But I wish there was more push into hybrid model - like have your
       | router or small ARM box be server that runs queue and some, for
       | lack of better word, "lambda-like" code that handles most of the
       | programmed events (lights, heating etc.), say via WASM and some
       | APIs around it, then just have HASS-like software (both cloud and
       | local) deploy control rules/code onto it.
       | 
        | Big nice UI for control goes down? Doesn't matter, the
        | controller is just running simple code, not the rest of the
        | visualisation and controls. Want to replace it? Configuration
        | as code makes that a breeze, hell, have a cloud backup of the
        | whole setup. Don't want a controller? It's essentially just a
        | queue and some code runners, which cloud providers can host for
        | you. Want pretty graphs in the cloud or some aggregation? Just
        | make your local node filter and send the relevant MQ events
        | there.
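One way the local half of that hybrid could look, sketched in Python rather than WASM for brevity. The queue, rule, and device names are all made up for illustration:

```python
import queue

# Hypothetical local controller: rules are plain callables deployed as
# "configuration as code". The cloud UI can disappear and this keeps
# running, because the controller only executes simple rules.
events = queue.Queue()

def motion_rule(event):
    """Example programmed event: motion in the hall turns on the light."""
    if event["type"] == "motion" and event["room"] == "hall":
        return {"device": "hall_light", "command": "on"}
    return None

RULES = [motion_rule]

def handle_next():
    """Pop one event off the queue and collect the commands it triggers."""
    event = events.get()
    return [cmd for rule in RULES if (cmd := rule(event))]

events.put({"type": "motion", "room": "hall"})
print(handle_next())  # [{'device': 'hall_light', 'command': 'on'}]
```

A cloud or HASS-like frontend would only need to ship new rule code and read filtered events back; the event loop itself stays local.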
        
         | nine_k wrote:
         | "Internet of Things will not happen until LAN of Things
         | arrives", to paraphrase someone.
         | 
         | This already happens in the industry, like heavy industry.
         | 
          | At home, it's held back by the absence of an adequate server.
          | Often a router box or a NAS would be fine to host something
          | like OwnCloud. But to run ML models, different and much more
          | expensive hardware is required, which would sit idle 99% of
          | the time.
        
           | noobface wrote:
           | It's getting cheaper: https://coral.ai/products/
           | 
           | $20 for the SMD TPU isn't bad, but it's definitely at the top
           | end of the BOM for custom PCB projects.
           | 
           | Ideally we'll see some competition with on-package options.
        
             | jsjohnst wrote:
             | > $20 for the SMD TPU isn't bad, but it's definitely at the
             | top end of the BOM for custom PCB projects.
             | 
             | Sounds great in theory, but show me a place where you can
             | buy Coral TPUs at anywhere near msrp. Unless something has
             | changed recently, finding a real live unicorn would be
             | easier.
        
               | zeagle wrote:
               | If you see a usb one for sale somewhere in Canada near
               | msrp please let me know!
        
             | moffkalast wrote:
             | Any idea if anyone will spin a board with a bunch of these
             | and 12G of VRAM for loading LLMs?
        
           | kingo55 wrote:
           | LAN of things - this is a fascinating idea. Can you share
           | where you are paraphrasing this from or link to more info on
            | this concept? Sounds similar to the self-hosted
            | applications people now run on their NAS/routers these days.
        
             | c_o_n_v_e_x wrote:
             | The phrase is used quite a bit in hackernews comments,
             | particularly on any IoT or home automation threads.
        
         | Joeri wrote:
          | A Mac mini running Home Assistant with some kind of llama
          | plugin running one of the 7B models should do well enough and
          | would run entirely locally. It is just a matter of time before
          | someone builds it.
         | 
         | I think one of the issues for a lot of people is the beefy
         | hardware required for running even a 7B model locally. A RPi4
         | just won't cut it for interactive use, and that's what most
         | people would run their home assistant setup on.
        
         | [deleted]
        
       | kingo55 wrote:
       | I have been using chat GPT to control my light colours for about
       | a month now. It's too tedious to properly set the colours and
       | temperatures of our lights manually and too complex to consider
       | all factors like activity, weather, music, time of day and
       | season.
       | 
       | Chat GPT is now our personal lighting DJ, giving us dynamic and
       | interesting light combinations that respect our circadian rhythm.
       | 
       | Here's my prompt - the output of which feeds Home Assistant:
       | 
       | Set the hue for my home's lights using the HSL/HSB scale from
       | 0-360 by providing a primary and complementary colour which
       | considers the current situation. The HSL color spectrum ranges
       | from 0 (red), 120 (green), to 240 (blue) and back to 360 (red).
       | Lower values (0-60) represent warmer colors, while higher values
       | (180-240) represent cooler colors. Middle values (60-180) are
       | neutral.
       | 
       | Consider these factors in setting the primary hue (in order of
       | importance):
       | 
        | 1. Preferences throughout the day:
        |    - When about to wake: Reds, oranges or hot pinks
        |    - Approaching bedtime: Hot pinks or reds
        |    - During worktime: Blues, greens or yellows
        |    - Other times: Greens, yellows or oranges
       | 
       | 2. Current activity: Bedtime
       | 
       | 3. Sleep schedule: Bedtime 23:00, Wake-up time 07:00
       | 
       | 4. Date & time: Sunday May 21, 05:40
       | 
       | 5. Current primary hue: 10
       | 
       | 6. Current complementary hue: 190
       | 
       | 7. Weather: 13degC, wind speed 9 km/h, autumn
       | 
       | Respond in this format and provide a reason in <250 characters:
       | 
        | {"primary_hue": PRIMARY_HUE, "complementary_hue":
        | COMPLEMENTARY_HUE, "reason": REASON}
       | 
       | The output looks like this:
       | 
        | {"primary_hue": 10, "complementary_hue": 190, "reason":
        | "Approaching bedtime and early hours of morning, so a warm and
        | calming hue is needed. Complementary hue adjusted slightly to
        | 195 to maintain balance."}
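On the consuming side, glue like this could parse and sanity-check that reply before handing it to Home Assistant. This is a hypothetical helper, not kingo55's actual Node-RED flow, and nothing guarantees the model stays in range, hence the clamp:

```python
import json

def parse_hues(reply: str) -> tuple[int, int]:
    """Parse the model's JSON reply and wrap hues onto the 0-360 wheel."""
    data = json.loads(reply)
    clamp = lambda h: int(h) % 360  # hue is circular, so wrap out-of-range values
    return clamp(data["primary_hue"]), clamp(data["complementary_hue"])

reply = '{"primary_hue": 10, "complementary_hue": 190, "reason": "bedtime"}'
print(parse_hues(reply))  # (10, 190)
```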
        
         | iamflimflam1 wrote:
         | That's really cool. Are you using the API to run this locally?
        
           | kingo55 wrote:
           | Yes, I access the API through Node Red which fills out the
           | prompt template and returns Chat GPT's output to Home
           | Assistant.
           | 
           | Costs about $3/year in API quota.
        
         | furyofantares wrote:
         | For stuff like this I like to make it write out a lot of
         | "reasoning" before the final output that I'll parse.
         | 
         | Like so:
         | 
         | Write three thoughts on how the primary and complementary hue
         | should change and what value they should change to, along with
         | your reasoning.
         | 
         | Pick one, summarize the reason for it in less than 50 words.
         | 
          | Then write FINAL CHOICE: followed by output that looks like
          | this: {"primary_hue": PRIMARY_HUE, "complementary_hue":
          | COMPLEMENTARY_HUE, "reason": REASON}
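Parsing that shape could look like this sketch: keep only the JSON after the FINAL CHOICE: marker and discard the free-form reasoning before it (the helper name is made up):

```python
import json
import re

def extract_final(completion: str) -> dict:
    """Return the JSON object written after the FINAL CHOICE: marker,
    ignoring the chain-of-thought text that precedes it."""
    _, _, tail = completion.partition("FINAL CHOICE:")
    match = re.search(r"\{.*\}", tail, re.DOTALL)
    if not match:
        raise ValueError("no FINAL CHOICE found in completion")
    return json.loads(match.group(0))

completion = """Thought 1: go warmer. Thought 2: go cooler. Thought 3: keep.
Pick: warmer, since it's nearly bedtime.
FINAL CHOICE: {"primary_hue": 15, "complementary_hue": 195, "reason": "bedtime"}"""
print(extract_final(completion)["primary_hue"])  # 15
```

Making the model "think out loud" first and then parsing only the tagged tail keeps the reasoning benefit without complicating the consumer.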
        
         | selimnairb wrote:
         | [flagged]
        
           | winphone1974 wrote:
            | You've been downvoted but I totally agree. Why not use the
            | same amount of effort to try and make a meaningful
            | difference in the world?
        
             | [deleted]
        
             | Toutouxc wrote:
              | So I suppose you don't do anything to provide any kind of
              | comfort or enjoyment to you or your family, ever? If you
              | do, how exactly does that differ from someone working on
              | their home automation?
        
             | jrockway wrote:
             | Where do you rank the general idea of recreation on this
             | scale? Should we be spending 16 hours a day on improving
             | the world? No time off ever? Should we take amphetamines
             | and maybe get by with 4 hours of sleep instead of 8 so that
             | each day brings 4 more hours of bettering the world to the
             | table?
        
         | binkHN wrote:
         | > ...provide a reason in <250 characters
         | 
          | For what it's worth, I've been using something similar in my
          | prompts and felt the completions did a poor job of honoring
          | this, but they do a better job when asked to use words
          | instead of characters.
        
           | travisjungroth wrote:
           | Imagine asking a person to give a verbal response in 250
           | characters or less. They could do it, but it would be a lot
           | of work. Even saying less than 50 words is hard.
           | 
           | If you actually have a hard cap, you'll have to give
            | feedback. If it's just that you don't want an essay, it
            | works great to say something like "a few sentences". And as
            | always, examples help a ton.
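That feedback loop might be sketched like this, with the model stubbed out (all names are hypothetical):

```python
def ask_with_cap(ask, prompt: str, cap: int = 250, retries: int = 2) -> str:
    """Enforce a hard character cap by re-prompting with feedback, since
    models rarely honor character counts on the first try. `ask` is any
    callable mapping prompt -> completion."""
    reply = ask(prompt)
    for _ in range(retries):
        if len(reply) <= cap:
            return reply
        reply = ask(f"{prompt}\nYour last answer was {len(reply)} "
                    f"characters; rewrite it in under {cap}.")
    return reply[:cap]  # last resort: truncate

# Stub model: too long at first, shortens once it gets feedback.
def fake_model(prompt: str) -> str:
    return "short answer" if "rewrite" in prompt else "x" * 300

print(ask_with_cap(fake_model, "explain the hue choice"))  # short answer
```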
        
           | dragonwriter wrote:
           | > For what it's worth, I've been using something similar with
           | my prompts and felt the completions did a poor job of
           | honoring this, but do a better job when asked to use words
           | instead of characters.
           | 
           | Yes, restricting by characters is hard for GPT-style LLMs
           | because they work in tokens, not characters.
        
             | kingo55 wrote:
                | Thanks, that's a good pickup - I'll refine my prompt.
        
               | psychphysic wrote:
               | But LLM have little concept of tokens don't they? Or at
               | least well not know what their tokenizes is like.
        
               | teaearlgraycold wrote:
                | It can understand word boundaries, though. A space is
                | its own token, and there are special tokens for common
                | words and common word prefixes with a leading space,
                | e.g. " a".
        
           | celestialcheese wrote:
            | GPT-4 does a much better job at paying attention to details
            | in prompts.
        
         | netsharc wrote:
         | A video of these conditions and the resulting colors would be
         | nice.
         | 
         | I wonder if the limited amount of variables mean one could just
         | ask ChatGPT to generate a one-time lookup table of colors and
         | store them locally. But it's interesting to see that an LLM can
         | be a "color designer".
        
       | bbarnett wrote:
       | Oh I can just see it. A fire. A desperate resident. A plea for
       | help.
       | 
       | "Fucking help me!!!" screamed out.
       | 
       | Rather than activate the sprinklers, ChatGPT admonishing the on
       | fire, burning person, for language, and a lack of respect.
       | 
        | This is the AI I see currently. A hazard. A menace. A joke.
        
         | sukilot wrote:
         | [dead]
        
       | odiroot wrote:
       | You can also use ChatGPT to help you write Home Assistant YAML
       | code for sensors and automations.
       | 
       | I even got it to write ESPHome rules for me.
        
       | quickthrower2 wrote:
       | But don't forget AI safety you fine pioneers
        
       | paulddraper wrote:
       | 1990s: What if we had to fight Skynet?
       | 
       | 2020s: Let's make Skynet.
        
       | XCSme wrote:
       | Is this reading specific outputs (in text form) from ChatGPT and
       | then forwarding them to an API? Or how does "ChatGPT" actually
       | make the call based on the OpenAPI description?
        
       | jonplackett wrote:
       | I had been thinking about doing this.
       | 
        | I'm getting really tired of talking to Alexa now that I'm used
        | to a machine actually understanding me.
       | 
        | Lights are just the start. Things like getting the weather,
       | playing a playlist (or coming up with a new one for me) would all
       | be SO much better with an LLM instead of a dumb bot.
       | 
       | Edit: if you're going to downvote. Leave a comment at least. Are
       | there just a lot of Alexa fans in the house?
        
         | awwaiid wrote:
         | Maybe you need to let your LLM manage Alexa
        
         | majormajor wrote:
         | Coming up with a playlist is a fun one, but I've had much more
         | luck with stuff like "generate a description of a playlist" - I
         | have a little side project that talks to the spotify API and
         | dumps song metadata into sqlite, and plugging GPT in as a SQL
         | query generator was super useful vs writing all those queries
         | by hand.
         | 
          | Of course that requires a lot of up-front work beyond GPT,
          | though not much more than is needed to talk to the API in the
          | first place. GPT itself has not impressed me in the ability
          | to _rank_ things, like "select the fifty other songs I've
          | liked that are most like this song", even when given a lot of
          | quantitative metadata.
        
         | gerdesj wrote:
          | It's in its infancy but this does work:
          | https://www.home-assistant.io/blog/2023/05/03/release-20235
          | I have been able to turn a device on and off by speaking to
          | the app on my phone. Nothing leaves my house. There is a bit
          | of a lag but it is early days.
         | 
         | You simply add the Piper and Whisper addons, then add the
         | integration in the GUI. Then you press the assistant button in
         | the app or your browser and then press the mic button and talk.
        
       | graiz wrote:
       | Nice - exploring the use of home assistant for something similar.
       | It already has all the integrations and API's for control, just
       | lacks the Chat GPT plugin.
        
         | iamflimflam1 wrote:
          | They are very easy to make - it's just an API. The main
          | difficulty at the moment is getting access: there's a long
          | waiting list for developer access.
        
       | syntaxing wrote:
       | Is there something that works with home assistant? Siri is just
       | terrible and have been trying to replace it with anything. The
       | ESP box posted last weeks looks pretty promising but wonder if
       | there's something that could work directly off of the rpi I host
       | HA with.
        
         | tikkun wrote:
         | For anyone curious, the ESP box mentioned is likely:
         | https://news.ycombinator.com/item?id=35948462
        
         | kingo55 wrote:
         | Yes, see my example above. I use Node Red to get chat GPT and
         | Home Assistant talking to each other.
         | 
         | No code required (unless you count the prompt template).
        
       | chpatrick wrote:
       | As an AI language model, I can't let you do that, Dave.
        
         | [deleted]
        
       ___________________________________________________________________
       (page generated 2023-05-20 23:00 UTC)