[HN Gopher] The future of programming: Research at CHI 2023
       ___________________________________________________________________
        
       The future of programming: Research at CHI 2023
        
       Author : azhenley
       Score  : 93 points
       Date   : 2023-04-27 17:26 UTC (5 hours ago)
        
 (HTM) web link (austinhenley.com)
 (TXT) w3m dump (austinhenley.com)
        
       | kleiba wrote:
       | Note that CHI is not a programming language or software
       | engineering conference, but a conference in _human-computer
       | interaction_ : it's the ACM Conference on Human Factors in
       | Computing Systems.
        
         | gjvc wrote:
         | in Europe we call it HCI. In America, they put humans second,
         | so they call it CHI.
        
           | asoneth wrote:
           | > in Europe we call it HCI
           | 
           | Well CHI is being held in Europe this year, so apparently you
           | don't!
           | 
           | But more seriously, the field is "HCI" everywhere, including
           | in America, and has been for at least thirty years. I have
           | vague memories of hearing the story about why the initial ACM
           | SIGCHI folks didn't go with HCI at the time but I can't
           | recall. Anyway, it wasn't long after CHI was founded that
           | basically everyone was using "HCI" on both sides of the
           | Atlantic.
        
             | gjvc wrote:
             | so why hasn't it changed, then?
        
           | Ar-Curunir wrote:
            | Er what? It's the Conference for HCI = CHI
        
           | dr_dshiv wrote:
           | Ahem, at CHI, humans are at the center.
        
       | throwaway4good wrote:
       | I am surprised how quickly the HCI researchers jumped on chatgpt
       | / prompt engineering.
        
         | gjvc wrote:
         | they are chasing that research funding money
        
           | cflewis wrote:
           | Sadly I think this is a significant part of it. It is so very
           | hard to convince anyone in CS to fund unsexy projects. I
           | think the majority of innovation on the unsexy things happens
           | internally at the large tech companies.
        
           | jasonhong wrote:
            | Or maybe they're chasing it because it's a highly relevant
           | topic that might impact lots of people around the world, you
           | know, a kind of human-computer interaction.
        
         | radarsat1 wrote:
         | Why? Natural language interaction with computers is like the
         | holy grail of human-computer interaction, of _course_ they
         | jumped on it.
        
         | gwern wrote:
         | Perhaps they could learn something about HCI from ChatGPT...?
        
       | teragramma wrote:
       | Oh man, wild to see an article about the biggest conference in my
       | field pop up on HN.
       | 
       | It's surprising how quickly HCI people managed to pivot to AI
       | stuff - the paper deadline for this conference was Sept. 15,
        | 2022, which was over two months before ChatGPT was even released.
       | So... expect to see even more AI at next year's conference in
       | Honolulu!
        
       | dtagames wrote:
       | This paper is quite good: Why Johnny Can't Prompt: How Non-AI
       | Experts Try (and Fail) to Design LLM Prompts
        
         | textninja wrote:
         | I'm not fond of the provocative title because prompting is easy
         | and only getting easier; the advice seems to be predicated on
         | the use of relatively deficient LLMs. I don't doubt there will
          | still be operator skill involved, but I anticipate that the
          | state of the art in LLMs' ability to adapt to "bad" prompts
          | will outpace our ability to learn to prompt them effectively.
         | 
         | Disclaimer: I watched the video but didn't read the paper.
        
           | version_five wrote:
            | I think you're right about prompts getting "easier", but I
            | don't think it's a good thing. I expect it will evolve like
            | Google search: where initially there are ways to increase
            | specificity, or at least introduce enough randomness to get
            | some different results, it will converge to something that
            | ignores most of what you prompt and gives you what OpenAI
            | wants you to see. That's really the only way adapting to
            | "bad" prompts could even work.
        
           | domoritz wrote:
           | I think there are a lot of instances where writing prompts
            | can be hard simply because it's hard to express your needs
            | in words. Bad prompts are often ambiguous, and there is
           | only so much even a perfect LLM can correct for. That is,
           | until we have direct connections to our brains.
        
       ___________________________________________________________________
       (page generated 2023-04-27 23:00 UTC)