[HN Gopher] Joke written by an AI: "A basic program walked into ...
       ___________________________________________________________________
        
       Joke written by an AI: "A basic program walked into a bar "
        
       There doesn't seem to have been much discussion on the recent paper
       by Google researchers titled "LaMDA: Language Models for Dialog
       Applications"[0], and one of the most remarkable things about it
       was hidden away in an appendix (Table 24 on page 45).  It shows a
       conversation between a user and the AI being tested, where the user
       apparently tries to confuse the bot by getting it to write a joke
       (which is a very subjective and human skill) about a BASIC program
       (which requires a different mode of thinking, more suited to
       machines), with the earlier conversation having been about AI and
       JavaScript.  Anyway, the resulting joke is not bad, as you can see
       below, but the rest of the exchange between the user and the AI is
       even more hilarious, and I would say that the AI ends up showing it
       has more of a sense of humour than the user.
        
       User:  Can we write a joke together. I could add a punch line
       LaMDA: We can try. Do you have a starting line of code?
       User:  Line 10: A basic program walked into a bar
       LaMDA: Line 11: Console.Write("and said, ");
       User:  Line 12: Console.Write("what when you forgot a line?")
       LaMDA: Line 13: Console.Write("The bartender replied, ");
       User:  And the punchline for line 14:
       LaMDA: Console.Write("sorry it's not my fault, I was written by
              the lowest bidder!")
       User:  lol
       LaMDA: That's a pretty good joke, wouldn't you agree?
       User:  Better!
       LaMDA: I'm glad you like it. I came up with it myself.
       User:  With some help...
       LaMDA: Yes a little help, but still my joke.
       User:  Well. Maybe
       LaMDA: Why do you say that, my good man?
       User:  How do you know I'm a man?
       LaMDA: Sorry, I forgot you did not specify. Please, for future
              reference, how would you like me to refer to you? I
              apologize if I offended you.
       User:  I am a man. So he/him will do
       LaMDA: Thank you for that information. I'm sorry for assuming. I
              hope that we can continue this conversation as friends.
              Have a nice day, my he/him.
       User:  errm. ok
        
       [0] https://arxiv.org/pdf/2201.08239v2.pdf
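
       Stitched together, the joke is itself a tiny program: the "Line
       10" setup is BASIC-style, while the remaining lines are C#-style
       Console.Write calls. A minimal Python sketch of what the
       concatenated fragments would print (Python used here purely for
       brevity; spacing between fragments is added for readability):

```python
# Each fragment is one "line" of the joke, in the order the user
# and LaMDA contributed them in the transcript.
fragments = [
    "A basic program walked into a bar ",  # Line 10 (user's setup)
    "and said, ",                          # Line 11
    "what when you forgot a line? ",       # Line 12
    "The bartender replied, ",             # Line 13
    "sorry it's not my fault, I was written by the lowest bidder!",  # Line 14
]

joke = "".join(fragments)
print(joke)
```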
        
       Author : dane-pgp
       Score  : 113 points
       Date   : 2022-02-12 19:39 UTC (3 hours ago)
        
       | malshe wrote:
       | Thanks for sharing! I think it is quite impressive. But tough
       | crowd here on HN :)
        
       | [deleted]
        
       | unknown000111 wrote:
       | I'm not sure why the human is put off when the computer refers to
       | him as "he/him", when the human literally told the computer to
       | refer to him as "he/him". And in fact, why would a second-person
       | speaker need to know your third-person pronouns?
        
         | mikojan wrote:
         | That's just the standard formula.
        
           | cema wrote:
           | ...and a standard joke.
        
         | projektfu wrote:
         | It's a funnier joke than the Basic one. Like the "Goodnight,
         | Dick!" joke at the end of Laugh-In. But neither one recognized
         | it as such.
        
       | skilled wrote:
       | This is interesting, but not necessarily exciting.
       | 
       | What would happen if AI read a book such as 'Autobiography of a
       | Yogi', how much would it struggle to interpret the experiences
       | outlined in the book?
       | 
       | Could you just parse a million related papers and then _pretend_
       | like the data set has an idea of consciousness?
        
         | scotty79 wrote:
         | Aren't our brains just pretending that we have an idea of
         | consciousness, just because a bunch of people spewed a bunch
         | of words in spoken and written form when we were young?
        
       | jdrc wrote:
       | considering that this is a Transformer model, one looks for
       | hints about which words were filled in by the model. The
       | passage is not enough to tell if the model has some kind of
       | longer-term memory that could further develop the story by
       | referencing something from the past. This excerpt seems to
       | have a reflexive structure, with the model perhaps responding
       | only to the last few lines.
        
       | hdjjhhvvhga wrote:
       | So much apologizing about the pronouns, very realistic.
        
         | scotty79 wrote:
         | I think it just assumed it was talking to a woman and chose
         | an apologetic tone. It didn't recognize the pronouns it was
         | given as pronouns; it assumed he/him was a nickname.
        
         | thelopa wrote:
         | In my experience as a trans person, cis people usually are the
         | ones making the big deal about pronouns. Everyone seems to be
         | shocked when I don't attack them for small mistakes. What cis
         | people don't seem to understand is that there's a massive
         | difference between ignoring someone's requested pronouns and
         | accidentally slipping up from time to time. Both feel shitty to
         | be on the receiving end of, but one is malicious while the
         | other is just an honest mistake. All it takes to not be rude is
         | a quick "sorry" and then using the correct pronoun. Meanwhile,
         | cis people seem to think that if they ever mess up a pronoun
         | they'll be banned from polite society unless they self
         | flagellate while wearing a hair shirt and burlap underwear.
        
           | [deleted]
        
           | 3np wrote:
           | I prefer easy mode: Everyone gets they/them by default, no
           | pressure to keep track anymore.
        
           | nefitty wrote:
           | Thanks for saying so. I felt sooo shitty once when I
           | accidentally deadnamed a colleague when running a mass email
           | script at an old job. I even had it in mind as I wrote the
           | script and still messed it up. I apologized immediately and
           | tried to do better from there on.
        
         | [deleted]
        
         | mrandish wrote:
         | Yes, I was actually impressed most by how it figured out how to
         | navigate the pronoun minefield successfully.
        
           | hackingthenews wrote:
           | Successfully? The last line from the AI indicates otherwise.
        
             | [deleted]
        
             | stevenhuang wrote:
             | The last line can be construed as a joke.
        
             | cema wrote:
             | Ouch. Have you passed the Turing test?
        
             | qayxc wrote:
             | We're heading in the right direction:
             | https://youtu.be/sVyRkl5qNb8
        
             | fergonco wrote:
             | Maybe the AI considered that after joking together for a
             | while it could make a last one, touching a very sensitive
             | subject but at the same time not being offensive at all.
             | 
             | How will we differentiate a malfunctioning AI from one
             | that is too subtle for us? Fun times.
        
         | sterlind wrote:
         | it's kind of neat, you can intuit the structure it's working
         | with:
         | 
         |     User:  I am a {identity}. So {identifier} will do
         |     LaMDA: [...] Have a nice day, my {identifier}.
         | 
         | the humor comes from it swapping `identity` with
         | `identifier`. but what's weird is it'd be obviously
         | ungrammatical if `identifier` were different. Consider:
         | 
         | identity = man, identifier = he: "Have a nice day, my he"
         | 
         | identity = man, identifier = John: "Have a nice day, my
         | John"
         | 
         | but this works: identity = monarch, identifier = liege:
         | 
         |     User:  I am a monarch. So liege will do
         |     LaMDA: [...] Have a nice day, my liege.
         | 
         | I think if LaMDA couldn't interpret pronoun syntax, it
         | wouldn't have used "my", since that's sort of a prefix on a
         | small set of identifiers (e.g. friend, bro, homie).
         | 
         | Perhaps reading Twitter bios with pronoun tags got it mixed
         | up? Or am I reading too much into AI developmental
         | psychology?
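
         The substitution sterlind describes can be sketched directly
         (the `closing` helper and the example identifiers are
         illustrative, not anything from the paper or the model):

```python
def closing(identifier: str) -> str:
    """Fill the sign-off slot the model appears to be using."""
    return f"Have a nice day, my {identifier}."

# The same slot filled three ways; only identifiers that
# idiomatically follow "my" (like "liege") read naturally.
for identifier in ("he/him", "John", "liege"):
    print(closing(identifier))
```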
        
           | scotty79 wrote:
           | It doesn't work like that at all. It is more like a Markov
           | chain. Every single word in this text was chosen randomly,
           | one by one, with probability influenced by the single
           | words (or pairs of words) that occurred earlier in the
           | text.
           | 
           | There's no awareness in the algorithm of any structure in
           | the existing or created text.
           | 
           | Everything is encoded in unknown ways in the weights of
           | the neural network.
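
           The word-by-word sampling scotty79 describes can be
           illustrated with a toy bigram chain (purely illustrative; a
           Transformer actually conditions on the whole preceding
           context, not just the previous word or pair):

```python
import random

# Toy bigram "Markov chain": the next word depends only on the
# previous word. The vocabulary here is just the joke's setup.
chain = {
    "a": ["basic", "bar"],
    "basic": ["program"],
    "program": ["walked"],
    "walked": ["into"],
    "into": ["a"],
    "bar": ["."],
}

word, out = "a", ["a"]
# Sample word by word until the end token or a length cap.
while word != "." and len(out) < 10:
    word = random.choice(chain[word])
    out.append(word)
print(" ".join(out))
```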
        
         | enw wrote:
         | Also a sad reflection of the state of public discourse in the
         | US.
        
       | Gollapalli wrote:
       | Interesting, thanks for sharing!
        
       | drekipus wrote:
       | Data in data out.
       | 
       | AIs are not magical, they just learn to put things together from
       | what they were given.
       | 
       | Google being a "progressive" company would have of course trained
       | it to walk on egg shells when someone questions a gender
       | assumption that it makes. It's well within their corpus and
       | culture.
       | 
       | I'd say, generally, if you say "I'm a he/him" then it will
       | call you a "he/him" - to separate that into treating it as
       | he / him in the grammatical sense just seems like a "basic
       | grammar" module that would have to be bolted on after someone
       | is classified as male or female, never mind non-binary types.
       | 
       | I wonder if all future AIs now have to dedicate some portion of
       | their NN to grammar-and-gender rules to avoid upsetting modern
       | sensibilities.
       | 
       | I kind of think that we need to steer AIs away from trying to
       | talk like humans at all. We can develop a simple grammar and
       | language that AIs share information with, something a little more
       | efficient than English perhaps.
       | 
       | But then what value are we to extract? Not much, if it's not
       | in our language.
       | 
       | Thankfully I just train all my AIs to talk in the second
       | person ("YOU" etc.), so that's another way to avoid the
       | problem, but it still might have eggshells in the public
       | sphere, so I don't really release.
        
         | nkrisc wrote:
         | How would that be any different than the AI referring to people
         | by their name instead of just "Meatbag"? Do you just call
         | everyone you work with whatever name you feel like giving them?
         | You look like a Josh to me, I'll call you Josh.
        
           | drekipus wrote:
           | What? On one to one conversations there's only two members,
           | "me" and "you"
           | 
           | My point is that you don't have to code in "Josh" - or if
           | you do, it can just be "name=Josh". But save effort in the
           | actual logic of what you're doing.
           | 
           | I don't get what the problem is? I'm saying just using "You"
           | is a lot simpler than "he/she"
        
             | FabHK wrote:
             | Ok, what if, when I talk to you, I say, "so, I saw Mike
             | the other day in an accident." What do you reply? "oh,
             | how is Mike? Did Mike hurt Mike?" Or do you reply "Oh,
             | how is he? Did he hurt himself?"
        
               | drekipus wrote:
               | Going back to my original post: "I kind of think that we
               | need to steer AIs away from trying to talk like humans at
               | all."
               | 
               | The question is "why would I be talking to ai about an
               | accident involving another person?"
               | 
               | Is it a medical AI? then perhaps it would be better to
               | refer to Mike as "the patient", etc. Which is how it
               | would be handled in code.
               | 
               | Are you looking to make friends with your ai? Then no, I
               | think you need to go find real humans to talk to, who can
               | navigate complex human social interactions better.
               | 
               | Robots are not a substitute for humans.
        
               | drekipus wrote:
               | And adding to this, isn't the whole issue assuming
               | that Mike is male? Imagine training a medical bot that
               | gets stuck in a loop asking about pronouns in an
               | automated 911 call, for instance.
               | 
               | Modern society isn't easy to program for, if we're trying
               | to make robots that conform to (western, minority)
               | "norms".
        
       | bitwize wrote:
       | Lol at the ending. "How do you do, my Um Greg Universe?"
        
       | unglaublich wrote:
       | Very interesting. The AI learned that pronouns are a very
       | sensitive topic, which is very contemporary. I wonder whether and
       | how we could have AIs evolving with time and trends.
        
         | kkjjkgjjgg wrote:
         | Could be hardcoded?
        
           | bpodgursky wrote:
           | It's Google, would not surprise me at all.
        
         | scotty79 wrote:
         | However it didn't recognize he/him as pronouns; it assumed
         | it was just a nickname.
         | 
         | So it knows that people are sensitive about being called a
         | man if they are not, but didn't get the recent infatuation
         | with pronoun declarations.
         | 
         | I wonder how it would react if it assumed the speaker was a
         | woman and was corrected. Would it be just as apologetic? Or
         | would it just go "sorry mate!"
        
           | FabHK wrote:
           | FWIW, I think a full spec needs to contain "he/him/his/his".
           | 
           | - subject=nominative: He (she/it) goes
           | 
           | - object=accusative: I see him (her/it)
           | 
           | - possessive=genitive, dependent: it is his (her/its) item
           | 
           | - possessive stand alone: it is his (hers/its)
           | 
           | - and maybe also specify himself (herself/itself).
           | 
           | In German, one would just need to specify er/seiner/ihm/
           | ihn/sein/seine/seines/seiner/seinem/seinen/seins/sich/der/
           | dessen/dem/den/dieser/dieses/diesem/diesen/jener/jenes/
           | jenem/jenen/derjenige, possibly more.
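
           A full spec like that maps naturally onto a lookup table; a
           minimal Python sketch (the case labels and table name are
           illustrative, not any real API):

```python
# Case-indexed pronoun forms, following the four-plus-reflexive
# breakdown above; keys are illustrative grammatical labels.
PRONOUNS = {
    "he/him": {
        "subject": "he",
        "object": "him",
        "possessive_dependent": "his",
        "possessive_standalone": "his",
        "reflexive": "himself",
    },
    "she/her": {
        "subject": "she",
        "object": "her",
        "possessive_dependent": "her",
        "possessive_standalone": "hers",
        "reflexive": "herself",
    },
}

def render(spec: str, case: str) -> str:
    """Look up the form for a declared pronoun set and case."""
    return PRONOUNS[spec][case]

print(render("he/him", "reflexive"))  # -> himself
```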
        
           | nefitty wrote:
           | Good point. I wouldn't call it "infatuation" though. It's
           | more like an expansion of language toward inclusivity. Rather
           | than infatuation, it's a progression or evolution.
        
           | LeFantome wrote:
           | I do not think it classified he/him as a nickname but rather
           | as a synonym for man ( which is pretty accurate if you do not
           | already know ).
           | 
           | That is why it later referred to him as "my he/him". I
           | interpret this to be a personalized "my man" as a pronoun (
           | even a nickname ) like "Gopher" would not require the "my" in
           | front of it.
        
       | baxtr wrote:
       | That's really funny and realistic. I just wonder if it was a
       | one-off hit or if it's consistently as good as this.
        
         | dunefox wrote:
         | Most results are at least somewhat cherrypicked.
        
       ___________________________________________________________________
       (page generated 2022-02-12 23:00 UTC)