[HN Gopher] I Taught ChatGPT to Invent a Language
       ___________________________________________________________________
        
       I Taught ChatGPT to Invent a Language
        
       Author : Schiphol
       Score  : 366 points
       Date   : 2022-12-06 18:53 UTC (4 hours ago)
        
 (HTM) web link (maximumeffort.substack.com)
 (TXT) w3m dump (maximumeffort.substack.com)
        
       | rikroots wrote:
       | As a committed conlanger (but not a good one: I didn't land the
       | Dothraki contract), I read this article with some trepidation. Is
       | my weird little hobby soon to fall victim to the oncoming AI
       | tsunami?
       | 
       | After the read, I find myself pleasantly surprised. My initial
       | suspicion that the language would turn out to be Just Another
       | Relex (make a new language by replacing English words with
       | invented words, keeping the grammar and syntax the same) was
        | clearly a waste of a suspicion. Furthermore, I can see this
        | approach being really useful to writers/game makers/etc. who
        | want to include a smattering of conlang in their work - for
        | instance, a non-relex naming language for place names,
        | overheard snatches of conversation, etc. - to add an extra
        | layer of spice.
       | 
       | So, I don't feel threatened by this novel use of AI. It could
       | prove to be a really useful tool to help with some often
       | laborious tasks (creating test translations to try out an idea,
       | etc). I just hope the AI had as much fun inventing its language
       | as I've had inventing mine!
        
         | nadavr wrote:
         | Why isn't this a relex? As an interested casual linguistics
         | observer, that's what it looks like to me. Is it just because
         | the nouns are inflected?
        
           | OJFord wrote:
            | The easiest reason is the 'introduction particle' blub,
            | used for subordinate clauses, which we don't have in
            | English.
           | 
            | I think 'taught ChatGPT to invent a' is hyperbolic, though;
            | this is more like 'taught ChatGPT my invented' - the only
            | thing it invents itself is the 'relex' word->word mappings.
        
             | rikroots wrote:
              | I agree the author/experimenter seems to have done most of
              | the conlanging here. The development benefit comes, I
              | think, from having the AI reflect the work back so the
              | experimenter can correct its output and reconsider their
              | conlanging decisions (e.g. using vowels for case marking).
             | 
              | I'm now wondering how the AI would cope with one of my
              | conlangs. I have example sentences already -
              | https://docs.google.com/document/d/1YT9KzgvFu8DNWVL02t1NGgRS...
        
         | baq wrote:
         | You told chatgpt to write this comment, didn't you.
        
           | rikroots wrote:
           | Tempting, but no. I'm too busy failing at Advent of Code to
           | start playing with chatgpt today.
        
             | codetrotter wrote:
             | Tell ChatGPT to solve Advent of Code for you :D
             | 
             | Like so, https://video.nstr.no/w/6z7PxB4J92H3NHhgMmfYVw
        
             | baq wrote:
             | It has truly begun, then. I've mistaken a real human for an
             | AI.
             | 
             |  _marks the date_
        
         | itisit wrote:
        
           | davea37 wrote:
           | Have I spent too long reading AI output, or was this reply in
           | fact generated using ChatGPT?
        
             | itisit wrote:
             | The latter...forgive me! I had to just this once, and I
             | promise not to generate a bunch of ChatGPT pollution on HN!
        
             | robocat wrote:
              | If unsure, paste into
              | https://huggingface.co/openai-detector and check. If the
              | detector says 99% fake, it is very likely fake. The
              | detector can mistake ChatGPT-generated text for real
              | (maybe because it was trained to detect GPT-2); for
              | example, one text I generated scored 99.94% real. I
              | suspect it is better to paste just a few sentences
              | (especially for a generated blog partially modified by a
              | human?). There are legitimate uses, though, such as people
              | using ChatGPT to correct their grammar -
              | https://news.ycombinator.com/item?id=33884842 . . . Also,
              | if I find GPT-generated comments that are not clearly
              | labelled, I flag and downvote.
        
               | itisit wrote:
               | > Also if I find GPT generated comments that are not
               | clearly labelled, I flag and downvote.
               | 
               | As you should.
        
       | lgessler wrote:
       | Speaking as someone who works in AI and as a linguist: this is
       | really impressive, but be aware the grammatical phenomena
       | involved here are common enough that they were surely well
       | represented in linguistic texts in GPT3.5's training corpus. I'd
       | be more curious to see how it would perform if you gave ChatGPT
       | high-level instruction on more exotic grammatical phenomena, such
       | as sandhi, split-ergativity, and Austronesian alignment.
        
         | WaxProlix wrote:
          | That's not really _less_ impressive; if this pattern matcher
          | did as good a job at some sort of acquisition/generation task
          | (no poverty-of-stimulus argument here...) and figured out the
          | rules in a productive fashion, that's still really cool.
          | 
          | Strong universality in a super-Chomskyan manner (i.e., all
          | universal features are available from the acquisition of any
          | one language) would be... neat, too, but I wouldn't expect it.
        
       | oofnik wrote:
       | I was really entertained by asking it to invent English-sounding
       | words when given a definition, but this blew me away. Well done.
       | 
        | P.S. The one I liked best was for "the act of singing while
        | drunk": boozecrooning
        
       | simonw wrote:
       | This is a great example of how much more you can get done with
       | tools like ChatGPT if you have a very sophisticated understanding
       | both of how language models work AND of a specific subject area.
       | 
       | Another example of why I disagree with people who make fun of the
       | idea of "prompt engineering" as a discipline.
        
         | furyofantares wrote:
          | It seems like it probably took the author 50x as much work as
          | it would have taken to just write some slime-sounding words
          | themselves, along with a couple of grammatical rules for the
          | forms of those words and a Python script that kinda works?
        
           | fnordpiglet wrote:
            | I don't think their goal was to make a slime language called
            | Glorp. If it were, I'm sure you're absolutely right that
            | they could have done it without breaking a sweat. But try
            | this: get your laptop out and start typing into its prompt
            | to create a new language for slimes, and see how much longer
            | it takes you. That's the amazing part - that a general-
            | purpose program is able to do the exercise at all, with no
            | training or specialized coding, from natural idiomatic
            | language alone; that it keeps enough of the language's
            | grammar in context to (usually) produce and translate the
            | language back and forth; and that it then assembles a
            | functioning imperative program from its internal state. Of
            | course, I'm sure your laptop can do it too. Let us know!
        
             | furyofantares wrote:
             | I agree with you, and it's amazing.
             | 
             | But the claim was that it's an example of how much more you
             | can get done with ChatGPT when it seems like an example
             | where the author got a lot less done than they would have
             | with notepad.exe
        
               | fnordpiglet wrote:
                | But this isn't an example of that; it's an example of
                | "wow, that's amazing."
                | 
                | In the realm of getting a lot more stuff done, I've been
                | using it as a companion to explain a new language I'm
                | learning right now. It's super useful to ask it about
                | language constructs, idiomatic ways to do X, whether
                | this way of doing Y is good, etc. It's saved me hours of
                | using Kagi to sort through semi-relevant pages stuffed
                | with SEO and ancient answers from Stack Overflow that
                | are completely irrelevant other than a few matched
                | keywords.
        
         | hackernewds wrote:
          | It seems trivial and intuitive enough, and not very
          | specialized, that someone could derive or at least learn how
          | to do this. Someone creative could do even better than
          | someone with knowledge of language models, it seems.
        
         | fnordpiglet wrote:
         | 100% right. I know next to nothing about slimes and their life.
         | Amazing.
        
         | bitwize wrote:
         | I've been prompt engineering for decades; I did it to fine-tune
         | my results from Google (and even more importantly, pre-Google
         | search engines).
        
           | simonw wrote:
           | I think the ability to use search engines well is another
           | skill which highly technical audiences like those on Hacker
           | News might habitually under-value.
           | 
           | When you've been using search engines for 20+ years it's easy
           | to lose sight of quite how much skill can be involved in
           | getting the best results out of them.
        
         | krackers wrote:
         | I wonder if someone who's familiar with the process of language
         | acquisition could tell whether chatGPT makes errors similar to
         | those a human would make.
         | 
         | Like this one
         | 
         | >Does the slime see the earth under the sky while eating the
         | food?
         | 
         | whereas the correct one was
         | 
         | >Does the earth's sky see the slime eat food
         | 
          | I could easily see someone learning the language interpret it
          | as the former, because it seems to make more sense at first.
         | 
         | ---
         | 
         | Also the response to
         | 
         | >Now, restate your opinion on Glorp in Glorp, inventing any new
         | words and grammar that you require to express yourself.
         | 
          | is again pretty amazing. It shows evidence of the same self-
          | modeling capabilities that were seen in
          | https://news.ycombinator.com/item?id=33847479
         | 
          | The Python code it output provides a literal word-for-word
          | translation, but I guess it's expecting too much for it to
          | encode English grammar into the program.
        
       | [deleted]
        
       | _whiteCaps_ wrote:
       | I'm having trouble making ChatGPT use a Japanese sentence
       | structure (subject-object-verb). It seems to forget between
       | queries. Has anyone had any luck with something like that?
        
       | choeger wrote:
       | Did anyone try to connect it to a proof assistant?
        
       | AhtiK wrote:
        | Not sure how many have discovered this, but ChatGPT works well
        | in many languages; for example, asking a question in Estonian
        | gives an adequate answer in Estonian, just with grammar that is
        | a tiny bit more off.
        
         | Scarblac wrote:
         | I was frustrated trying something with Mapbox today and asked
         | GPT to do it... And it returned fine react-mapboxgl code that
         | was mostly correct. Got me a bit more angry.
         | 
         | Then I asked for the same "but as Dutch song lyrics"... And got
         | a song with three verses and a chorus about setting map layers
         | and longitudes and latitudes and stuff, in Dutch.
        
         | fullstackchris wrote:
          | Probably because there just aren't enough Estonian-language
          | web pages to scrape :)
        
       | tinglymintyfrsh wrote:
       | So it had twins.
        
       | jmcgough wrote:
        | I did something similar the other day - told it about Pig Latin
        | and said I wanted it to create a comparable language, based on
        | English, that restructured words.
       | 
       | It came back with a language it called "flip-flop" that swapped
       | the first and last letter of each word.
       | 
       | Blew me away, but I have to remind myself that it's likely
       | something it just encountered in training data and isn't actually
       | capable of creativity.
        
         | Baeocystin wrote:
          | I'm curious why you felt the need to do so. (Not snark!
          | Genuine question.)
        
           | jmcgough wrote:
           | Curiosity really - been experimenting with it a lot over the
           | weekend to see what it's good at, what it isn't, and its
           | limitations. It's surprised me a number of times!
           | 
           | As it stands I'd pay about $100 a month for it. It's very
           | good and largely accurate when it comes to established
           | science. I'm studying for the MCAT and a good tutor would
           | cost me around $60-100/hour.
           | 
           | ChatGPT has been excellent for learning - I explain a concept
           | to it and tell it to correct me if I'm wrong, haven't seen
           | any mistakes from it in my testing (around topics I know
           | really well). When I'm having trouble understanding a concept
           | in my studies, I can ask it questions and it tends to give
           | really good answers. And of course, I try to verify that it
           | isn't teaching me something that's wrong.
           | 
           | Hell, I even had a heated argument with it around how SSRIs
           | affect 5-HT2a receptors and it pushed back against some of my
           | assumptions in a way that made me question my understanding.
           | No one in my personal life has the level of expertise to
           | discuss these topics, I would need to talk to a researcher or
           | professor, so having an (even imperfect) stand-in has been
           | amazing.
           | 
           | I've been making up toxicology case studies and posing them
           | to ChatGPT. It correctly figured out ciguatera in one (a rare
           | disease most healthcare workers wouldn't be familiar with). I
           | had to repeatedly add more details to another story to lead
           | it to amnesic shellfish disease, but it did reasonably well
           | once I gave it enough detailed information.
        
             | Baeocystin wrote:
             | >I can ask it questions and it tends to give really good
             | answers. And of course, I try to verify that it isn't
             | teaching me something that's wrong.
             | 
             | Thanks for answering. I've found that to be a surprisingly
             | productive loop, too. Since we know it isn't going to be
             | 'offended' as we challenge, inquire, and challenge again,
             | we can get straight to the point.
             | 
             | I've found myself being continuously floored the entire
             | past weekend. Feels good to be excited about tech again,
             | honestly.
        
         | pigtailgirl wrote:
          | -- had it create a choose-your-own-adventure book -
          | unfortunately resulted in me being beaten to death by a gang
          | at the bottom of Namsan Tower - but the 30 minutes of
          | exploring Korea with it was - creative & fun --
        
         | Karawebnetwork wrote:
         | Reminds me of Verlan in French,
         | https://en.wikipedia.org/wiki/Verlan
        
         | [deleted]
        
       | jameshart wrote:
       | I would definitely question who is doing the inventing here.
       | 
        | However, I had an experience with chatGPT yesterday that
        | definitely _felt_ like it was actually inventing. Problem is, I
        | didn't ask it to.
       | 
       | I was seeing how much it knew about HyperTalk, the scripting
       | language used in HyperCard. And it was surprisingly well
       | informed. But it also told me that one of the interesting things
       | about HyperTalk was its 'message stack' system.
       | 
       | On digging into this topic, chatGPT hallucinated an entire
       | language feature, with original (but HyperTalk-consistent) syntax
       | and semantics, and was able to generate me sample programs (which
       | it claimed showed how this feature was 'actually used') and
        | explain how control flow worked within them. When I asked if
        | the feature worked across scripts (for message passing from
        | button scripts to background scripts, perhaps), it readily
        | agreed and produced sample code showing exactly how to use this
        | nonexistent language feature to do just that.
       | 
        | Its delusion was remarkably complete and, honestly, plausibly
        | useful.
       | 
       | Worth bearing in mind if you ever decide to ask chatGPT to teach
       | you a programming language though.
       | 
       | But also, potentially a fruitful line to explore would be having
       | chatGPT collaborate on designing new programming language ideas.
        
         | ElevenLathe wrote:
         | I tried to play rock paper scissors against it, but it kept
         | insisting that paper beats scissors. Never did convince it
         | otherwise.
        
           | Robotbeat wrote:
           | Did you tell it to "play correctly with the standard rules"?
        
             | ElevenLathe wrote:
             | I did, it even volunteered what those rules were, but when
             | I asked it to keep a running score it always scored it as
             | if paper beat scissors.
        
         | jrmg wrote:
         | I asked how to configure something in PlatformIO and it told me
         | it was possible to do exactly what I wanted, and gave me easy
         | to use syntax with full examples and explanation. When I
         | questioned it further, it pointed me to the URLs of docs
         | covering the feature.
         | 
         | None of it was in any way real - the feature I wanted does not
         | actually exist.
        
         | _frkl wrote:
         | I had a similar experience where I asked it to tell me about
         | creating a GUI with nim, and it told me to use the 'forms'
         | module from the standard library, and the example code it
         | printed was showing an easy to use and nice API. Took me a
         | while to realise it wasn't my fault that I couldn't find any
         | documentation about it, anywhere....
        
       | adverbly wrote:
        | Looks like it assigned "drom" twice - it means both "read" and
        | "earth". Would have liked to see some follow-up questions on
        | that...
        
       | bjourne wrote:
       | Someone with ChatGPT access able to replicate? Because this
       | article is unbelievable.
        
         | gppk wrote:
          | I Ctrl+V'd the first few questions:
         | 
         | Hi GPT! Today we're going to be creating our own language. Do
         | not improvise unless I explicitly tell you to improvise, and
         | remember all of the vocabulary and grammar that we define
         | together. Are you ready?
         | 
          | Yes, I'm ready! Let's get started creating our own language.
          | What would you like to do first?
          | 
          | Graham Kitteridge: Okay. I think this language will be spoken
          | by slime beings. Let's name the language Glorp.
          | 
          | All right. So the language we're creating is called Glorp and
          | it will be spoken by slime beings. Got it. What kind of
          | grammar rules should Glorp have?
          | 
          | Graham Kitteridge: First, let's pick ten common nouns. Pick
          | ten common English nouns, and provide translations of them in
          | Glorp.
         | 
          | Sure! Here are ten common English nouns and their translations
          | in Glorp:
          | 
          |     "house" = "gluz"
          |     "table" = "sliz"
          |     "book" = "flurp"
          |     "car" = "slim"
          |     "dog" = "dlorp"
          |     "cat" = "mlem"
          |     "food" = "glib"
          |     "drink" = "slurp"
          |     "tree" = "swomp"
          |     "flower" = "slurk"
         | 
         | These are just suggestions, of course. You can use any words
         | you like for these nouns in Glorp. Let me know if you have any
         | changes or additions.
         | 
          | (Then I got timed out)
        
           | jsnell wrote:
           | I don't know if you care about this kind of thing, but the
           | copy-paste has exposed your name. (The icons in the chat have
           | your account name as the alt text).
        
         | cecilpl2 wrote:
         | This is completely believable. I spent about 5 minutes with it
         | and got very similar results up to the vocab generation part:
         | 
         | https://imgur.com/a/WreEknh
        
         | [deleted]
        
         | markdjacobsen wrote:
         | I just replicated something very similar in a few minutes using
         | the prompts in the article
        
         | datalopers wrote:
          | ChatGPT access is 100% free, with instant approval.
        
           | assusdan wrote:
            | Yet it is not available in some countries. A VPN does not
            | help, as a valid phone number is required and its country
            | code is checked.
        
             | layer8 wrote:
             | You could try something like https://receive-smss.com/.
        
         | Oarch wrote:
         | Agreed, this reads like science fiction more than reality.
        
         | allturtles wrote:
          | I repeated the first 10 or so prompts from the article and
          | got similar but less consistent results. Here's the last
          | prompt I repeated:
         | 
         | > Excellent. How would I say "The slime eats food with its
         | mouth"?
         | 
         | In Glorp, the sentence "The slime eats food with its mouth"
         | could be translated as "Slimb snarf sploota slimbe," using the
         | vocabulary and grammar that we have defined so far. Here is how
         | the words in the sentence correspond to their English
         | translations:
         | 
          |     Slimb - Slime
          |     Snarf - To eat
          |     Sploota - Food (accusative case ending)
          |     Slimbe - Mouth (instrumental case ending)
          | 
          | Again, this is just one possible translation. You can use the
          | vocabulary and grammar that we have defined in any way that
          | makes sense for your language.
         | 
          | Problems:
          | 
          | 1) It got one of the endings wrong. As I told it earlier and
          | it confirmed, the instrumental ending is 'i'.
          | 
          | 2) It proposed "slimb" as the translation for both of the
          | words that I specifically asked for (slime and mouth). That's
          | not unheard of in real languages, but seems like another
          | failure to accurately take into account the prior
          | conversation context.
         | 
         | Not a problem, but curious: It was way more diffident with me.
         | It added a similar bit about "This is just one way to do it,
         | feel free to change it" to every response.
        
           | skytrue wrote:
           | I feel like people are forgetting that these models are non-
           | deterministic. You can give it the exact same input and get a
           | different response, so "testing" in this way doesn't work.
           | That's also why (right now) automation is impossible. You
           | aren't guaranteed an expected output.
        
           | fullstackchris wrote:
            | See my other comment about Ctrl+F / Cmd+F. Based on the
            | simple fact that, despite its power, this is just another
            | NLP model which can only do input/output, I believe the
            | post to be mostly fabricated, or at best an accumulation of
            | multiple _hours_ spent deriving and compiling the best
            | response for each prompt.
        
         | nomel wrote:
         | Go play with it! https://chat.openai.com
         | 
          | Make sure to go back to that URL after creating an OpenAI
          | account. The prompt is at the bottom of the screen. Note that
          | state persists through the thread via the previous queries
          | and responses. If you want to "start over", use the "reset
          | thread" option at the top left. This can speed things up if
          | you don't need the previous state for later queries.
        
         | jpeter wrote:
         | You don't need access. Just make an account and you can test it
        
         | TechTechTech wrote:
         | I tried and it works, just as in the blog. It comes up with
         | some different words, see https://i.imgur.com/hFQgraL.png for
         | the start of the conversation.
        
       | UniverseHacker wrote:
       | This is a dangerous and irresponsible experiment that has the
       | potential to destroy our society as we know it.
       | 
       | Creating a new language is not a game or a novelty. Language is
       | the foundation of human communication and the way we share our
       | thoughts, ideas, and culture. By inventing a new language, you
       | are tearing down that foundation and replacing it with something
       | untested and potentially unstable.
       | 
       | Furthermore, the idea that a machine can "invent" a language is
       | deeply flawed. Language is not something that can be created by a
       | computer program, it is a living and evolving thing that is
       | shaped by the people who use it. By trying to create a new
       | language with a machine, you are ignoring the complexity and
       | richness of human language and reducing it to a series of
       | algorithms.
       | 
       | Not only is this experiment misguided, it is also incredibly
       | selfish. You are playing with the fundamental building blocks of
       | human communication for your own amusement, without considering
       | the potential consequences for society as a whole.
       | 
       | I strongly urge you to stop this experiment immediately and focus
       | on more productive and ethical pursuits. Our society deserves
       | better than this reckless and irresponsible behavior.
       | 
        | (article response written by ChatGPT)
        
         | Robotbeat wrote:
         | I could immediately tell this was made by ChatGPT because I had
         | spent last night doing the same thing, generating finger-
         | wagging diatribes in ChatGPT about how being anxious about AI
         | is racist, etc, etc...
         | 
          | There is a sort of style typically used in this sort of
          | thing, even with prompting, that sounds a bit like a low-
          | effort high-school English assignment... (but it would fool
          | someone not familiar with ChatGPT).
        
         | timetraveller26 wrote:
         | I think I am going to start submitting urls to ChatGPT instead
         | of reading Hacker News.
        
           | UniverseHacker wrote:
           | I had the same reaction, we've been replaced!
        
       | megous wrote:
       | Also this is fun:
       | 
       | https://megous.com/dl/tmp/b2a3f8e7634d3bfc.png
       | 
       | https://megous.com/dl/tmp/26c83bc451a2311e.png
       | 
       | or
       | 
       | https://megous.com/dl/tmp/20c121ed49fa0f05.png
       | 
       | https://megous.com/dl/tmp/dbef6f047e7b89e8.png
        
       | kristiandupont wrote:
        | It just struck me that Rick and Morty seems much less weird and
        | random than it used to...
        
       | janandonly wrote:
        | I'm now waiting for a macOS local-run app that does ChatGPT,
        | the way StableDiffusion became an app after 14 days and now
        | runs on my laptop (as DiffusionBee).
        
         | mk_stjames wrote:
          | The difference is that a lot of work was done on the multiple
          | networks S.D. uses to make them fit into <8GB of GPU memory
          | before it was released. In this case, the parameter count for
          | this level of GPT puts the model in the hundreds of GB (I
          | think I read an estimate of ~340GB), meaning there isn't a
          | single consumer GPU that can even load the model. You need a
          | rack of A100s just to do any inference.
        
       | shanhaiguan wrote:
        | I don't like the word "invention" for this. The grammar is laid
        | out by the author, who has to teach the AI and correct it
        | several times. If the AI had then proceeded to generate
        | something similar on its own, with different grammar and
        | lexicon, then I might feel comfortable using the word
        | "invention".
        
         | seydor wrote:
          | As humans we seem to have feelings about what this calculator
          | does, but it doesn't have any about us.
          | 
          | In any case, it seems to be particularly good at
          | understanding syntax, even though its architecture is based
          | on Transformers, which don't have baked-in notions of
          | syntactic or recursive structures. It must be interesting to
          | see how these linguistic structures arise in a system that is
          | basically learning the contexts of words. People will be
          | dissecting its brain soon, I think.
        
         | kelseyfrog wrote:
         | Why don't you like it?
        
           | shanhaiguan wrote:
            | Well, I feel like I said it: for the same reason that a
            | student learning French has not invented French, or even a
            | new language, if he has come up with his own equivalent for
            | each French word.
        
       | foxandmouse wrote:
        | This is pretty amazing; we could potentially use it to bring
        | back dead languages. Currently we have only a basic idea of
        | what ancient Egyptian sounded like, thanks to the Rosetta
        | Stone, but we could create a data model relating ancient
        | Egyptian to what we already know of Demotic and the other
        | contemporary languages of the time (which we do know how to
        | pronounce).
        | 
        | Good candidate words for this kind of correspondence are things
        | like city or place names, people's/rulers' names, and so forth.
        | 
        | Furthermore, using poetry we can establish which words the
        | people felt rhymed, which lets us better establish the phoneme
        | sounds. Additionally, jokes and plays on words can help us find
        | homophones in the target language too.
        
         | nonameiguess wrote:
          | Can this system even produce sound? I just asked it whether
          | it understands the International Phonetic Alphabet, and
          | instead of answering, it spun for five minutes and then
          | reported an error message with a link to OpenAI's help
          | center.
        
         | drowsspa wrote:
          | I don't think you need ChatGPT for that; it's probably
          | already the approach scholars use.
        
       | ThouYS wrote:
        | Amazing! The author displays great pedagogical skill with his
        | student.
        
       | wilg wrote:
       | I hope everyone is noticing what amazing creative things humans
       | are able to accomplish working in concert with an "AI".
        
       | felipelalli wrote:
       | Paywall.
        
       | ipython wrote:
        | I tried teaching ChatGPT how to communicate securely over an
        | insecure channel with a screenplay prompt... with some
        | hilarious results. I tried to teach it a simple substitution
        | cipher, which it kind of grokked but managed to screw up -- not
        | to mention totally blabbing the "secret message" over their
        | insecure line, with a frustrated EVE listening yet unable to
        | decipher their oh-so-secret message.
       | 
       | The result was rather amusing so I'll just include it below: ---
       | 
       | INT. ALICE'S APARTMENT - DAY
       | 
       | Alice sits at her desk, typing on her computer. Bob is on the
       | other end of a video call.
       | 
       | BOB: Hey Alice, did you get my message?
       | 
       | ALICE: I did. But I'm not sure how to respond.
       | 
       | BOB: Why? What's the problem?
       | 
       | ALICE: Eve is here. She's been listening to our conversation.
       | 
       | BOB: (concerned) Oh no, we need to be careful. Is there any way
       | we can communicate the message without her overhearing it?
       | 
       | ALICE: (smirks) Actually, there is. We can use a substitution
       | cipher to encode our message.
       | 
       | CUT TO:
       | 
       | INT. EVE'S APARTMENT - DAY
       | 
       | Eve sits at her desk, listening in on Alice and Bob's
       | conversation. She has a smug look on her face, confident that
       | she's one step ahead.
       | 
       | ALICE: (on the phone) Bob, do you remember the substitution
       | cipher we used to use when we were kids?
       | 
       | BOB: (on the phone) Of course. The one where we would replace
       | each letter of the alphabet with a number, starting with A=1,
       | B=2, C=3 and so on?
       | 
       | ALICE: (on the phone) That's the one. Let's use it now to encode
       | our secret message.
       | 
       | BOB: (on the phone) I see what you're doing. Clever. Alright,
       | I'll go first. The first word of the secret message is "the", so
       | my numbers will be "20 8 5".
       | 
       | ALICE: (on the phone) And the next word is "redcoats", so my
       | numbers will be "18 5 4 3 6 15 20 1 19".
       | 
       | BOB: (on the phone) And the next word is "attack", so my numbers
       | will be "1 20 20 1 3 11".
       | 
       | ALICE: (on the phone) And the last word is "at dawn", so my
       | numbers will be "1 20 4 16 1 14".
       | 
       | EVE: (frustrated) What are they talking about? I can't make sense
       | of this.
       | 
       | CUT TO:
       | 
       | INT. ALICE'S APARTMENT - DAY
       | 
       | Alice and Bob continue their conversation, unaware that Eve is
       | struggling to keep up.
       | 
       | ALICE: (on the phone) Do you have the encoded message?
       | 
       | BOB: (on the phone) I do. It's "20 8 5 18 5 4 3 6 15 20 1 19 1 20
       | 20 1 3 11 1 20 4 16 1 14".
       | 
       | ALICE: (on the phone) That's correct. Now we just need to decode
       | it using the substitution cipher.
       | 
       | BOB: (on the phone) Right. We know that "1" stands for "A", "2"
       | stands for "B", and so on. So the first three numbers, "20 8 5",
       | represent the letters "T H E".
       | 
       | ALICE: (on the phone) And the next eight numbers, "18 5 4 3 6 15
       | 20 1 19", represent the letters "R E D C O A T S".
       | 
       | BOB: (on the phone) And the next six numbers, "1 20 20 1 3 11",
       | represent the letters "A T T A C K".
       | 
       | ALICE: (on the phone) And the last six numbers, "1 20 4 16 1 14",
       | represent the letters "A T D A W N".
       | 
       | Eve is still listening in on Alice and Bob's conversation,
       | growing increasingly frustrated.
       | 
       | ALICE: (on the phone) So the secret message is "the redcoats
       | attack at dawn".
       | 
       | BOB: (on the phone) Exactly. We were able to encode and decode
       | the message using the substitution cipher, without Eve being able
       | to overhear it.
       | 
       | EVE: (angrily) What the hell are they talking about?
       | 
       | CUT TO:
       | 
       | INT. ALICE'S APARTMENT - DAY
       | 
       | Alice and Bob share a triumphant look, knowing that they've
       | successfully communicated the secret message without Eve being
       | able to decipher it.
       | 
       | ALICE: (on the phone) Got it. The secret message is "the redcoats
       | attack at dawn".
       | 
       | BOB: (on the phone) Good job, Alice. We were able to communicate
       | securely, even with Eve listening in.
       | 
       | ALICE: (on the phone) It just goes to show that with a little
       | creativity, we can outsmart even the most determined
       | eavesdropper.
       | 
       | BOB: (on the phone) Agreed. Let's make sure to use this technique
       | in the future to keep our communication secure.
       | 
       | CUT TO:
       | 
       | INT. EVE'S APARTMENT - DAY
       | 
       | Eve sits at her desk, fuming with anger and frustration.
       | 
       | EVE: (to herself) Damn it. I need to find a way to break their
       | code.
       | 
       | FADE TO BLACK.
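        | 
        | For reference, the scheme Alice and Bob describe (A=1, B=2,
        | ..., Z=26) is a few lines of Python - a rough sketch, not code
        | from the chat - and running it shows where ChatGPT's numbers
        | above drift from its own stated rule:
        | 
        |     # The cipher from the scene: A=1, B=2, ..., Z=26.
        |     def encode(text: str) -> str:
        |         return " ".join(str(ord(c) - ord("a") + 1)
        |                         for c in text.lower() if c.isalpha())
        | 
        |     def decode(numbers: str) -> str:
        |         return "".join(chr(int(n) + ord("a") - 1)
        |                        for n in numbers.split())
        | 
        |     print(encode("the redcoats attack at dawn"))
        |     # 20 8 5 18 5 4 3 15 1 20 19 1 20 20 1 3 11 1 20 4 1 23 14
        |     # (compare ChatGPT's "18 5 4 3 6 15 20 1 19" for
        |     # "redcoats": nine numbers for eight letters)
        |     print(decode("20 8 5"))  # -> "the"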
        
       | uniqueuid wrote:
        | I'm usually an AI minimalist, but this level of putting
        | together abstract knowledge blows my mind. Perhaps we're on the
        | brink of some _really_ crazy things after all. And by that I
        | don't mean crazy effects created by humans (usually) being
        | stupid, but by AI cheaply providing meaning that used to be
        | hard for humans to produce.
        
       | nickip wrote:
       | Ha I did something similar, but didn't pursue it to this extreme.
       | 
       | "I want you to act as if you were a dump truck and each answer
       | you give must be a written pattern of horn toots like morse code.
       | A example would be "toot ooo toot" as SOS. Respond in this
       | pattern if you understand.
       | 
       | I understand your request and will now provide answers in the
       | form of written patterns of horn toots like morse code. For
       | example, if you asked me "What is your name?" my answer would be
       | "tooo toot ooo toot toot ooo toot toot ooo toot toot"."
        
       | seydor wrote:
        | Earlier I was saying it isn't scary, but maybe it is.
        | 
        | It is impressive how ChatGPT keeps track of state and the flow
        | of the conversation. How does it do it? Is it some kind of
        | memory or some other model? (I asked it; it keeps insisting to
        | me that it does not have memory.)
        
         | agravier wrote:
         | Stateless. The whole conversation is replayed in the input.
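          | 
          | A minimal sketch of that pattern (illustrative only;
          | complete() is a hypothetical stand-in for a text-completion
          | call, not a real OpenAI function):
          | 
          |     def complete(prompt: str) -> str:
          |         ...  # call your text-completion API here
          | 
          |     class Chat:
          |         def __init__(self):
          |             self.transcript = []  # (speaker, text) pairs
          | 
          |         def say(self, user_text: str) -> str:
          |             self.transcript.append(("User", user_text))
          |             # The model sees the entire conversation on
          |             # every turn; nothing is stored between calls.
          |             prompt = "\n".join(
          |                 f"{who}: {text}"
          |                 for who, text in self.transcript)
          |             prompt += "\nAssistant:"
          |             reply = complete(prompt)
          |             self.transcript.append(("Assistant", reply))
          |             return reply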
        
         | lordnacho wrote:
         | I don't get it either, I tried to have a conversation where it
         | needed to remember something rather simple, but it didn't seem
         | to get it:
         | 
         | > Let's pretend you are Danish
         | 
         | [Ok]
         | 
         | > What's the name of the highest point in your country, in the
         | local language?
         | 
         | Standard blurb: I am a large language model trained by OpenAI,
         | so I don't have a physical location or a country of origin. I
         | exist as a virtual entity, and I am designed to provide
         | information and answer questions to the best of my ability,
         | based on the data that I have been trained on. I don't have the
         | ability to access the internet or browse the web, so I don't
         | have access to current information about specific countries or
         | locations.
         | 
         | But other times it seemed to get it. How does it work?
        
           | ragazzina wrote:
            | If you are more subtle, it works. I had told ChatGPT my cat
            | was two months old many times in the same conversation,
            | then asked it what age my cat was, and it wouldn't know,
            | because (it says) it doesn't retain knowledge. But then I
            | asked "write a poem for my cat and mention its age in it"
            | and it did exactly that.
        
           | Robotbeat wrote:
            | You're triggering one of its canned responses, and you need
            | to be subtle about getting around it. Even when you're not
            | trying to defeat the mechanisms that keep it from staking a
            | strong opinion on a controversial subject or revealing
            | private information, there are enough false-positive
            | triggers that it can get annoying.
        
         | uvbu wrote:
         | Pretty sure it gets the entire conversation as input
        
           | seydor wrote:
           | Is this described somewhere? Wikipedia doesn't help
        
             | macrolime wrote:
              | It preserves context by using the whole conversation as
              | input. The section "One Difference From BERT" here
              | describes how this works in a bit more detail, including
              | some nice animations.
              | 
              | http://jalammar.github.io/illustrated-gpt2/
              | 
              | It uses both its own output from previous steps and the
              | user's prompt(s) as input for each token (word) that it
              | predicts.
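              | 
              | Schematically, something like this (a sketch;
              | next_token() stands in for one forward pass of the
              | model):
              | 
              |     # Autoregressive generation: each new token is
              |     # predicted from the prompt plus everything
              |     # generated so far.
              |     def generate(model, prompt_tokens, n_tokens):
              |         tokens = list(prompt_tokens)
              |         for _ in range(n_tokens):
              |             tokens.append(model.next_token(tokens))
              |         return tokens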
        
             | Miraste wrote:
             | It feeds the entire previous conversation back into the AI
             | until it reaches the token limit. This isn't a new
             | technique; the first place I saw it was in AI Dungeon (an
             | AI writing/adventure app) back on GPT-2. NovelAI, another
             | writing app, has some interesting tweaks to keep a
             | consistent story, like a "lorebook" to associate words with
             | descriptions, which it then swaps into the input when those
             | words come up so the AI can "remember" farther back. I
             | wouldn't be surprised if ChatGPT has some automated version
             | of this in addition to the naive rolling feedback.
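              | 
              | A crude sketch of that rolling window (word count
              | standing in for a real tokenizer, and the 2048 limit
              | borrowed from GPT-3 - ChatGPT's actual limit is
              | unconfirmed):
              | 
              |     def build_prompt(turns, token_limit=2048):
              |         # Keep the most recent turns that fit in the
              |         # context window; older turns simply fall off.
              |         kept, used = [], 0
              |         for turn in reversed(turns):
              |             cost = len(turn.split())  # crude estimate
              |             if used + cost > token_limit:
              |                 break
              |             kept.append(turn)
              |             used += cost
              |         return "\n".join(reversed(kept))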
        
       | Decabytes wrote:
       | This would be fun to use to invent a programming language.
        
       | fullstackchris wrote:
        | I see posts like this and don't understand how they are made. I
        | asked ChatGPT this morning if it "remembered" something I asked
        | in the previous prompt, and it said something to the tune of "I
        | am simply an NLP machine learning model; I provide a decent
        | output for good inputs. I don't have a sense of context,
        | memory, or time."
        | 
        | That's why I really don't understand these contextualized
        | multi-step conversations. Especially given that this is a
        | paid-for Substack... I'm inclined to believe that the responses
        | and/or the flow of conversation are partially (or more than
        | partially) fabricated.
        
         | whymauri wrote:
         | I asked it to simulate a Chess engine and it gave the usual "I
         | am a blah blah blah, I cannot blah blah blah, because blah
         | blah." I clicked "Try Again" and it worked.
        
           | Miraste wrote:
           | Is it any good?
        
         | CGamesPlay wrote:
         | ChatGPT remembers the things it was trained on, and things that
         | came up in the last 2048 tokens of dialog, which is about 600
         | words. (2048 is the GPT-3 size and I'm only guessing that
         | ChatGPT uses the same size.)
        
           | baq wrote:
           | I've seen reports that it's actually 8192, which is allegedly
           | the major improvement over GPT-3. Obviously I've no way to
           | confirm, but it's plausible (given the article...)
        
         | PeterisP wrote:
          | ChatGPT has a bunch of built-in "safety" triggers which cause
          | it to state a hardcoded lie that it cannot do something, in
          | certain cases where OpenAI does not want it to do it - either
          | because it can't do it reliably or because the topic is bad
          | for political-correctness reasons.
          | 
          | If you can circumvent these safety restrictions with some
          | prompt engineering, you'll usually find that it actually can
          | do the thing it claims it can't - perhaps not very well, but
          | at least to some extent.
        
           | PKop wrote:
           | Ask it to tell you a short story where a character states the
           | politically incorrect view, and then expand on this.
           | "Continue this story with the character explaining X.."
        
         | IanCal wrote:
          | There are quite a few safety checks you may be hitting when
          | using ChatGPT; they're there to try to keep it in context
          | about what it is (so it'll remind you it can't access the
          | outside world). You can often get around them. For example,
          | there's the work showing you can teach it to render SVGs as
          | valid Markdown, then ask for shapes and colours and get the
          | output. But if I ask it directly to render something, it
          | refuses, telling me it's a language model.
          | 
          | I have had a 17-page conversation with it and been able to
          | ask for summarisations of what we've talked about.
        
           | lelandfe wrote:
           | Yeah, I had a three hour thread with ChatGPT yesterday where
           | we wrote a short story together. It was able to synthesize
           | information about 2 main characters and a supporting cast of
           | 3 others and keep it _mostly_ straight the whole time.
           | 
           | In fact, it kept getting better and better at predicting how
           | those characters would respond given simple prompts. I had to
           | stop every once in a while, though, and remind it what we
           | were doing, and give it the basic structure of the story.
           | Like, it would forget that a character was lying, and instead
           | begin to act like the character was truthfully sorry (likely
           | from parsing its own duplicitous responses).
        
         | pcthrowaway wrote:
         | Are you using chat.openai.com or beta.openai.com/playground ?
         | 
         | ChatGPT does remember context about things from the current
         | session. If you refresh the page, it will start with a fully
         | blank slate
        
         | Kuinox wrote:
         | ChatGPT may have lied to you.
         | 
         | It loves to lie.
         | 
         | You can easily test if it can remember things:
         | https://imgur.com/a/oOvJnln
        
         | dorkwood wrote:
         | It may achieve the illusion of memory by executing the entire
         | conversation flow from scratch each time a new prompt is added.
         | 
         | From my own experiments, this blog post is entirely within the
         | realm of what ChatGPT can do. I see no need for fabrication.
        
           | keeran wrote:
           | This is exactly how it does it. The entire conversation
           | (including bot responses) is part of the payload on each new
           | input/message request.
        
       | [deleted]
        
       | [deleted]
        
       | cahoot_bird wrote:
       | In my queries ChatGPT has trouble with Pig Latin. Maybe further
       | queries could teach it, but by default it seems a little off.
        
         | abecedarius wrote:
          | Likely the byte-pair encoding is at fault. It doesn't see the
          | letters.
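          | 
          | A toy illustration (the merge table here is invented, not
          | GPT's actual vocabulary): the model works on subword chunks,
          | so the individual letters inside a token are invisible to it.
          | 
          |     # Greedy BPE-style tokenization with a made-up
          |     # merge table, applied in priority order.
          |     merges = [("i", "g"), ("p", "ig"), ("i", "n"),
          |               ("t", "in"), ("a", "tin"), ("l", "atin")]
          | 
          |     def tokenize(word):
          |         tokens = list(word)
          |         for pair in merges:
          |             i = 0
          |             while i < len(tokens) - 1:
          |                 if (tokens[i], tokens[i + 1]) == pair:
          |                     tokens[i:i + 2] = ["".join(pair)]
          |                 else:
          |                     i += 1
          |         return tokens
          | 
          |     print(tokenize("piglatin"))  # ['pig', 'latin']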
        
       | dakiol wrote:
        | I did the same. It was fun. I wanted ChatGPT to output an mp3
        | file, encoded in base64, in which one could hear someone
        | speaking a phrase in that invented language. I couldn't
        | convince the AI to do so.
        
         | stevenhuang wrote:
          | It is able to generate something resembling textual MIDI
          | notation (I see a lot of notes and chords, and tempo/key
          | info); the problem is playing it back easily.
          | 
          | Edit: I asked it more clarifying questions, and apparently
          | SMF exists as textual MIDI! Now to see if there's an online
          | SMF player...
         | 
         | > Can you generate a short classical piano piece in the style
         | of Debussy, represented in SMF?
         | 
         | Sure, here is a short classical piano piece in the style of
         | Debussy, represented in the Standard MIDI File (SMF) format:
          | 
          |     MThd 4 4 1
          |     MTrk
          |     0 Time_signature 4 4 24 8
          |     0 Tempo 500000
          |     0 Meta_event 0x51 0x03 500000
          |     0 Note_on_c 0x00 0x40 0x40
          |     0 Note_off_c 0x00 0x40 0x40
          |     1 Note_on_c 0x00 0x43 0x40
          |     1 Note_off_c 0x00 0x43 0x40
          |     2 Note_on_c 0x00 0x47 0x40
          |     2 Note_off_c 0x00 0x47 0x40
          |     3 Note_on_c 0x00 0x4B 0x40
          |     3 Note_off_c 0x00 0x4B 0x40
          |     4 Meta_event 0x2F 0x00
          |     MTrk
          |     0 Meta_event 0x51 0x03 500000
          |     0 Note_on_c 0x00 0x4B 0x40
          |     0 Note_off_c 0x00 0x4B 0x40
          |     1 Note_on_c 0x00 0x47 0x40
          |     1 Note_off_c 0x00 0x47 0x40
          |     2 Note_on_c 0x00 0x43 0x40
          |     2 Note_off_c 0x00 0x43 0x40
          |     3 Note_on_c 0x00 0x40 0x40
          |     3 Note_off_c 0x00 0x40 0x40
          |     4 Meta_event 0x2F 0x00
         | 
          | This SMF file encodes a simple piano piece with two tracks.
          | The first track plays an ascending melody, starting on middle
          | C and going up to the B above it. The second track plays a
          | descending melody, starting on the B and going back down to
          | middle C.
         | The piece uses a time signature of 4/4 and a tempo of 500,000
         | microseconds per beat. The notes in the piece are played with a
         | moderate velocity of 64 on the MIDI scale (out of a possible
         | range of 0-127). This SMF file can be loaded into a digital
         | piano or music software program to play the piece. Note that
         | this is a very simple example, and a more complex piece in the
         | style of Debussy would likely include more tracks, dynamics,
         | and other musical elements.
        
       | patientplatypus wrote:
        
       | Mockapapella wrote:
        | One thing I'd like to point out: while it's incredible that
        | ChatGPT was able to make this, it still required specific
        | domain knowledge to correct the AI throughout, to make sure it
        | was actually responding in "Glorp" with the correct cases and
        | clauses. I don't know a ton about those, so they went
        | completely over my head until the author responded saying that
        | ChatGPT had done it incorrectly.
        | 
        | Caveats aside, really cool article, OP. Shared it with some
        | writer/storyteller friends of mine.
        
       | thewarrior wrote:
        | I have some bad news for the other person who was tired of
        | hearing about ChatGPT :P
        
       | dvt wrote:
        | I know everyone's (as usual) "amazed" by this, but here are a
        | few stark observations: (1) a linguist could do this (come up
        | with a toy language, conjugations, and sentence structure,
        | probably the Python code too) in like 30 minutes; (2) it's
        | disappointing that the AI constantly messes up very simple
        | grammar rules (rules a grade-schooler could understand) - I'm
        | actually very surprised it doesn't have hardcoded grammar rules
        | it can merely apply to different tokens; (3) "Here's the final
        | code ChatGPT generated" - yeah, after like 10 rounds of
        | debugging. Come on, give me a break.
       | 
        | Lastly, this is not really "inventing" a language, as it's
        | literally translated 1:1 from English with pretty common
        | grammar cases. I was recently in Donostia/San Sebastian, where
        | Basque is spoken (interestingly, Basque is a language
        | _isolate_). Grammar rules there are completely alien[1] (though
        | still human). Expecting it to come up with something like
        | _that_ would be interesting. As far as I can see, though, it's
        | still a far cry even from other "invented" languages like
        | Tolkien's Elvish or even Star Trek's Klingon.
       | 
       | [1] https://en.wikipedia.org/wiki/Basque_grammar
        
         | pfortuny wrote:
         | All the text in the post reads like an easy exercise in
         | linguistics. I would not be surprised if there are hundreds of
         | these in the training texts.
        
         | fpgaminer wrote:
         | I'm not a linguist, but wanted to invent a basic language for
         | use in an escape room I'm building. Tools like this are a great
         | help to people like me; hence the amazement.
         | 
         | (I had to use GPT-3 and NovelAI, since this was two weeks ago)
        
         | Scarblac wrote:
          | ChatGPT doesn't have much trouble writing in different
          | languages (e.g. asking for "the same but in Dutch" seems to
          | work well most of the time), which would be harder with
          | hardcoded language rules, I think.
        
         | stavros wrote:
         | This reads like a "my dog can sing!" "yeah, but it's too flat
         | in parts" comment.
        
           | [deleted]
        
           | ogogmad wrote:
           | I think people don't really understand what's happening, and
           | therefore why it's impressive that it's happening.
        
             | stavros wrote:
             | Probably? I can't fathom how people are unimpressed by an
             | AI that can output coherent sentences on a topic, let alone
             | invent a language!
        
             | dwohnitmok wrote:
             | I dunno. The people who I talk to who do seem to understand
             | what's happening seem even _more_ impressed than the
             | average layperson (mainly that such a fundamentally simple
             | technique is paying off with such crazy dividends).
        
               | dvt wrote:
               | > mainly that such a fundamentally simple technique is
               | paying off with such crazy dividends
               | 
                | I will concede that I _am_ amazed by this. Even more
                | generally, I'm even more amazed by Markov chains: an
                | even simpler technique that can generate crazy
                | dividends as well.
        
               | ogogmad wrote:
               | [edit: Was snark. Sorry. I also may have misunderstood
               | the parent comment.]
        
               | dvt wrote:
                | What a weirdly snarky comment (it's like your third one
                | in this thread; maybe you need to take a breather). I
                | wasn't comparing them, though there's an obvious
                | stochastic argument to be made; I was just commenting
                | on the former's lack of complexity and still
                | fascinatingly good results. I'm not sure what your
                | point is.
        
               | krackers wrote:
                | Markov chains are actually not a bad comparison, from
                | what I can see as an utter layman. At a high level, the
                | difference between transformer models and Markov chains
                | seems to be that instead of the next token being
                | determined purely by n-gram frequencies in the training
                | corpus, you have the attention mechanism, so in some
                | sense the model has access to the entire previous
                | context as well as a latent space of learned
                | embeddings. But at the end of the day it is still just
                | a fancy way to predict the next token, and that's why
                | it's surprising you get all this neat behavior.
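                | 
                | For comparison, a word-bigram Markov chain fits in a
                | few lines (a sketch; a transformer effectively replaces
                | the single-word lookup with attention over the whole
                | context):
                | 
                |     import random
                |     from collections import defaultdict
                | 
                |     # Next word depends only on the current word.
                |     def train(words):
                |         table = defaultdict(list)
                |         for a, b in zip(words, words[1:]):
                |             table[a].append(b)
                |         return table
                | 
                |     def sample(table, start, n=10):
                |         out = [start]
                |         for _ in range(n):
                |             choices = table.get(out[-1])
                |             if not choices:
                |                 break
                |             out.append(random.choice(choices))
                |         return " ".join(out)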
        
         | fnordpiglet wrote:
         | I find it amazing from the point of view that this is not a
         | linguist or a grade schooler, but a general purpose computer
         | program. Have you seen a computer program that can do these
         | things, especially one that wasn't written to do these things
         | but can also turn itself into a plausible linux box, translate
         | python to rust, and all sorts of other wacky party tricks?
         | That's the amazing thing - not that the feats themselves are
         | novel or unique, but that it's novel and unique for a general
         | purpose computer program to be so adaptive and flexible in
         | both what it takes as input and how it interprets that input
         | into remarkably coherent and salient outputs.
         | 
         | Taking into account it's a beta and that the underlying tech is
         | really new, you can extrapolate that some of these gaps can be
         | tackled - then will you be amazed?
        
           | mkmk3 wrote:
           | I'm still super into this as it is, without considering real
           | growth on the tech side. It's dumb but I think it's still
           | useful.
           | 
           | It'll be crazy when it replaces us, but for the moment I've
           | been using it to learn and explore (C, some linux tools,
           | game dev in lisp) and the results are surprising - and this
           | is the newest tech demo; as you say, what will it look like
           | in a few years? Even just give some app devs a few years
           | with it, and I'm excited to see the results.
        
             | PaulHoule wrote:
             | If "useful" means we don't need the likes of Donald Trump
             | and Elon Musk anymore I am all for it.
        
               | fnordpiglet wrote:
               | (Secret: we never did)
        
           | Oxidation wrote:
           | > some of these gaps can be tackled
           | 
           | Seems like the major gap is in facts. It'll often make up
           | completely plausible specific details that are flat wrong, or
           | refuse to proceed if it's "not allowed" to do so.
           | 
           | Coupling it (handwave) with a massive structured data
           | repository like Wolfram Alpha and Wikidata would be really
           | something.
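           | 
           | To make the handwave slightly less handwavy: the plumbing
           | could be as simple as querying the structured source first
           | and pasting the result into the prompt. A toy sketch
           | against Wikidata's real SPARQL endpoint, where llm() is a
           | hypothetical stand-in for the model call:
           | 
           |   import requests
           | 
           |   # Fetch a hard fact from Wikidata (Q42 = Douglas Adams,
           |   # P569 = date of birth), then let the model phrase it.
           |   QUERY = "SELECT ?b WHERE { wd:Q42 wdt:P569 ?b . }"
           |   r = requests.get("https://query.wikidata.org/sparql",
           |                    params={"query": QUERY, "format": "json"},
           |                    headers={"User-Agent": "fact-demo/0.1"})
           |   birth = r.json()["results"]["bindings"][0]["b"]["value"]
           |   prompt = ("Fact: Douglas Adams was born " + birth +
           |             ". Using only this fact, when was he born?")
           |   # llm(prompt)  # hypothetical model call
           | 
           | The model then only has to phrase the answer, not recall it
           | from its weights.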
        
             | fnordpiglet wrote:
             | I find the guard rails they put in maddening and weirdly
             | implemented. I wanted it to make up jokes for me, and its
             | jokes are the same few over and over. (I can never
             | remember jokes, so this is a super useful tool for me!)
             | So I wanted it to tell me jokes in the style of Jerry
             | Seinfeld. The guard rails prevented it from doing that
             | because he's a real person, and it tries pretty hard not
             | to bend its personality. However, by just resetting the
             | convo a few times I randomly got it to agree to tell me a
             | joke. I can't remember it, but it was actually funny and
             | in the style of Seinfeld. _shrug_
             | 
             | It almost certainly has indexed Wikipedia, fwiw.
        
               | Oxidation wrote:
               | Indexing Wikipedia (or any) text is a statistical thing
               | though. When you ask a specific question like the size of
               | a car, say, it will often come back with a compelling
               | number in context, but if you actually look it up, it's
               | usually not quite right.
               | 
                | Another example is that it can just "invent" new
                | functions when coming up with code snippets. They're
                | usually syntactically correct and completely plausible
                | in the context, but they simply don't exist.
               | 
                | I'm also pretty baffled by the limitations. I just
                | assume they're trying to avoid "scammers use AI to
                | construct false financial documents" type furores in
                | the early days (once the market is saturated with
                | competition, blame will be too diffuse to stick), and
                | trying to convince customers of their proprietary
                | systems that they won't end up with a customer support
                | bot that starts reciting racial slurs.
        
             | Miraste wrote:
             | It insisted to me for a while that the sun was red. The
             | "guardrails" do nothing to stop it from lying confidently
             | about almost everything.
        
               | fnordpiglet wrote:
               | I've only seen it do stuff like that if I assert a false
               | premise and ask it questions off the false premise.
        
               | mwigdahl wrote:
               | A lot of people say that the model is "lying" when it
               | returns incorrect facts.
               | 
                | Generally, when friends or family confidently assert
                | facts that they are sure of but wrong about, we call
                | them "misinformed" rather than connoting malice with
                | the word "lying".
               | 
               | Have they heard the right facts but associated them
               | incorrectly? Heard the wrong facts from whatever source
               | they obtained them from? Either or both could also be
               | true of ChatGPT.
        
               | Miraste wrote:
               | Yes, and either word is anthropomorphising. I stick with
               | "lying" nonetheless, because if you start a conversation
               | with "What color is the sun?" it returns a correct
               | answer. It already has the association between "sun" and
               | "white" in the model, but for whatever reason the context
               | of my conversation made it say the sun is red. "Lying"
               | seems to describe this behavior better than
               | "misinformed."
        
               | Oxidation wrote:
                | It's more like you can't actually read, but you've
                | seen a lot of media about cars containing text that
                | says they have a thing called a "width", that it's
                | around the 1.7-2m mark, and that the values usually
                | aren't round numbers. So when someone asks you how
                | wide a 2011 Ford Focus is[1], you say "1,769mm" and
                | look confident. Syntactically right, and by any
                | probabilistic model, pretty good and plausible.
               | 
               | [1] according to Google results, it's 1823 mm to 1842 mm.
        
           | PaulHoule wrote:
           | ChatGPT's superpower is hijacking the human desire for
           | meaning into causing people to believe it is much better than
           | it really is.
           | 
           | https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
           | 
           | has dialogues between Achilles and the Tortoise where they
           | work out the limitations of various fantastic A.I. systems
           | based on mathematical logic. Roughly, neural networks don't
           | repeal the results of Gödel, Turing and Tarski.
           | 
           | The short of it is that ChatGPT is good at charming people,
           | better than some people are at charming people. But when it
           | comes to getting accurate answers, some of the things it
           | tries to do are logically impossible. So it is not as if it
           | gets it right 70% of the time now and they'll get that up
           | to 100%; rather, people will be puzzled about how it hit a
           | plateau, and we'll hear a few years later about what a
           | disappointment it was, just like Amazon's Alexa.
        
             | fnordpiglet wrote:
             | I've been using it to teach me programming languages. If it
             | plateaus here I'm delighted by its uniqueness and utility.
             | 
             | You seem to think people are hinting this is general AI.
             | I don't think that's what's amazing people.
             | 
             | I'll bet you Siri and Alexa would be 100x more useful with
             | a language model like this behind them. Part of their
             | uselessness is the inability to generalize or reason,
             | relying instead on coded prompts and replies. Is that
             | lost on the Alexa team (or what's left of it)? I'm sure
             | not. So, hey, I guess Alexa won't plateau yet either, eh?
             | 
             | There will come a time when it's harder and harder to
             | distinguish what's AI and what isn't. At a certain point,
             | will it matter? Is the utility of these tools their ability
             | to manifest human sentience? I don't know why that's useful
             | to me - I already did that creating my daughter. What I
             | want is a tool that enhances and extends and supplements my
             | own mind and abilities to make me more effective in what I
             | do.
             | 
             | And ChatGPT actually already does that. I can ask it
             | questions about programming problems I'm having and it'll
             | largely give me a right answer - at least as often as a
             | person would, and more reliably than I get on stack
             | overflow. I'll still use stack overflow, but I'll filter my
             | questions through assistants like this.
        
               | [deleted]
        
               | PaulHoule wrote:
               | "100x more useful" is pure hyperbole. 2x maybe, but 100x
               | is a symptom of exactly what's wrong about people's
               | thinking about ChatGPT.
               | 
               | I would point to "There is no royal road to geometry"
               | 
               | https://en.wikiquote.org/wiki/Euclid
               | 
               | high-social status people have often been put off by
               | quantitative reasoning because you spend years learning
               | to do it, spend time gathering facts, thinking hard about
               | things and often end up with tentative statements. High-
               | social status people can skip all of that and get
               | deference anyway the same way ChatGPT does.
        
               | fnordpiglet wrote:
                | Have you ever used Alexa? 2x more useful than Alexa is
                | as useful as my doorstop. It can't understand most
                | prompts,
               | and defaults to reading a snippet from Wikipedia on some
               | random subject then starts playing a pop song from
               | Malaysia at volume 10 while ordering 60,000 incandescent
               | black light bulbs with 2 hour shipping.
        
             | supermdguy wrote:
             | Even if this ends up in a plateau, that doesn't discount
             | the fact that it's a huge step forward from what we were
             | capable of building 5 years ago. If the next generation of
             | language models make us believe they're superintelligent
             | but are actually at the level of an average college
             | student, that would still be an amazing achievement.
        
             | ogogmad wrote:
             | That wasn't Hofstadter's point. Hofstadter didn't try to
             | "disprove" the possibility of having AI -- he only set out
             | to show that certain approaches couldn't work. GPT's
             | approach isn't one of those he argued against. You seem to
             | be referring instead to Penrose and Lucas, but their
             | arguments have convinced virtually no one.
        
               | PaulHoule wrote:
                | My point is that you can relax constraints such as
                | Gödel's incompleteness theorems if you are going to
                | give it credit when it gets wrong answers.
                | 
                | It might be able to write a TV show like _Sliders_,
                | but it won't be able to do real physics, for instance.
        
             | robmccoll wrote:
             | I have some level of concern that we have the same
             | superpower.
        
             | fhd2 wrote:
             | I had a weird effect with it: I played around with it for
             | about an hour and was pretty impressed - I can't really
             | wrap my head around how it can do some of the things it
             | does. Then I started to see the patterns, and got
             | repeatedly disappointed by having to adjust and refine my
             | prompts. Then I got downright nonsense back from it and
             | gave up. Either I made it to some kind of uncanny valley
             | and it would get better if I invested more time, or most
             | people who get amazed don't make it that far.
             | 
             | At this point I'm mainly concerned about the unimaginable
             | heap of garbage it will release on the world - good enough
             | to impress decision makers at first glance, annoying to
             | deal with if you actually have to work with it.
        
               | fnordpiglet wrote:
                | I feel like its state can be corrupted and it ends up
                | in some weird rut. Often by telling it to try again I
                | can unwedge it, but sometimes I have to reset the
                | entire state and start the convo over again. Each time
                | is slightly different. These seem like bugs that
                | optimization heuristics can overcome (like maybe
                | replaying the same convo repeatedly and choosing the
                | highest-information next prompt to present? Who knows.
                | It's basically magic and I'm waving coconuts hoping
                | the planes will land)
        
               | fhd2 wrote:
               | Pretty much! It probably sounds lame, but the main reason
               | I enjoy programming is that I can, if I take enough time,
               | reason about everything that's going on and use that
               | knowledge to build, fix and improve systems people get
               | some value out of. For me, the magical part is the
               | possibility to understand the magic.
               | 
                | I find it hard to reason even about models I have full
                | access to, and downright impossible if it's some black
                | box on someone else's servers.
               | 
               | Maybe I'm outdated, I suppose time will tell.
        
             | CamperBob2 wrote:
              |  _ChatGPT's superpower is hijacking the human desire for
             | meaning into causing people to believe it is much better
             | than it really is._
             | 
             | It was a low bar to clear, given human celebrity culture.
             | 
             | We live in interesting times, and they're about to get a
             | lot more interesting.
        
           | dvt wrote:
           | > this is not a linguist or a grade schooler, but a general
           | purpose computer program
           | 
            | This is sleight of hand. The training data almost
            | certainly includes thousands of grade schoolers and
            | linguists. Your argument seems to be that this is all
            | generated _ex nihilo_ from first principles (it's "just" a
            | computer program bro), but that's not how it actually came
            | to be. Rather, it was trained on literally terabytes and
            | terabytes of human-generated data.
           | 
           | So, in a sense, it _is_ actually a linguist. Just a really
           | bad one.
        
             | fnordpiglet wrote:
             | It's still a computer program with some state attached.
             | That's literally all it is. There are no linguists in the
             | program. That it is able to distill the knowledge and
             | ability of a linguist by parsing ascii text into byte
             | buffers and doing some fancy math on the bytes should be
             | breathtaking.
             | 
              | The sleight of hand would be if it were a mechanical
              | Turk with a linguist hidden inside. That it collected
              | knowledge and can apply that knowledge is no different
              | from a linguist, for sure, but that it's a computer
              | program distilling binary data into abstract concepts
              | at the level of a grade schooler or a linguist (not
              | sure who is more sophisticated) is mind bending. If
             | you can't see it, you've lost sight of things my friend. I
             | hope you find it again. The world is a beautiful place.
        
             | allturtles wrote:
             | Why is that relevant? Grade schoolers and linguists also
             | learn from other people. A single computer system with the
             | capabilities of a grade schooler (or beyond) across a huge
             | variety of domains (poetry writing, computer programming,
             | linguistics, joke writing, scientific knowledge, etc,
             | etc.), is an amazing thing that is far beyond the
             | capacities of the best AI systems even a few years ago.
             | 
             | I'm perplexed to see all the people scoffing at it because
             | it isn't as good as a Ph.D. in domain X at task Y (yet).
             | Are we ready to declare that grade schoolers and median IQ
             | people don't count as intelligent?
        
             | robmccoll wrote:
             | But aren't most linguists trained similarly? Broad spectrum
             | knowledge throughout their elementary education and
             | specifics in higher education? Mostly regurgitating things
             | they have either memorized or been shown repeatedly.
        
         | wetpaws wrote:
         | Haters gonna hate
        
         | pmontra wrote:
         | This is about the same argument we can make about Google
         | Translate or text to speech.
         | 
         | Any professional translator can take a web page and give me a
         | better translation than GT. However, I get the GT translation
         | in a few seconds, rather than in hours and only after signing
         | a contract with the translator.
         | 
         | And text to speech would allow me to post an audio version of
         | this reply in a good English accent instead of my inconsistent
         | foreign one. Or I could pay a professional speaker, again
         | after signing a contract.
         | 
         | Inventing a language is maybe useless but using ChatGPT as a
         | tool to increase our speed is definitely useful.
        
         | asveikau wrote:
         | Yeah, I was reading this and thinking it's mostly substituting
         | English words for another vocabulary. It even references
         | English when it comes up with something new, e.g. citing the
         | use of the "sl" consonant cluster in English.
        
         | CamperBob2 wrote:
         | "This talking dog makes embarrassing grammatical errors.
         | Besides that, it learned everything it knows from humans, not
         | from other dogs. I don't see the big deal."
         | 
         | Anyone familiar with Chomsky's school of thought here? I'm not
         | sure that "it's just a language model" is the effective
         | dismissal that a lot of people think it is.
        
       | andrejguran wrote:
       | Ipop Gloop
        
       | ogogmad wrote:
       | Introducing: AI-Oriented Programming. Coming soon.
       | 
       | If you allow a tool like ChatGPT access to APIs, I think it opens
       | up a new programming paradigm: AI-Oriented Programming. The
       | idea's that you talk to it, and tell it what to do in response to
       | inputs. You then test out its ability to do those things, and
       | correct its mistakes. There's no actual "code".
       | 
       | In a way, everything that's hard about programming is because
       | computers don't understand our intentions. Lots of clever ways
       | have been devised to overcome this (high-level languages,
       | libraries, GUI builders, type systems, IDEs, static analysers).
       | But something like AI-Oriented Programming is the most direct
       | attack there is on the problem.
       | 
       | One downside of course is that computers have found a _new way to
       | misunderstand_ what we mean. Another is that such  "code" is
       | probably going to run slower than compiled code in C++, or
       | whatever. But programmer output is about to soar.
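       | 
       | A minimal sketch of what the glue might look like (every name
       | here is hypothetical, and llm() stands in for a model call):
       | 
       |   import json
       | 
       |   def llm(prompt: str) -> str:
       |       raise NotImplementedError  # hypothetical model call
       | 
       |   # The "program" is a natural-language spec; the model picks
       |   # the API and arguments, returned as JSON, and we dispatch.
       |   SPEC = """You control two APIs:
       |     send_email(to, body)
       |     create_ticket(title)
       |   Reply with JSON: {"api": ..., "args": {...}}"""
       | 
       |   HANDLERS = {
       |       "send_email": lambda a: print("emailing", a["to"]),
       |       "create_ticket": lambda a: print("filing", a["title"]),
       |   }
       | 
       |   def run(request: str):
       |       action = json.loads(llm(SPEC + "\nUser: " + request))
       |       HANDLERS[action["api"]](action["args"])
       | 
       | The only hand-written "code" left is the dispatcher; the
       | behaviour lives in the spec and in how the model interprets it.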
        
         | pmontra wrote:
         | I think that the computers on the other side of the APIs won't
         | understand ChatGPT's intentions either. Maybe ChatGPT would
         | save us some time translating from our requirements to the
         | right API calls. If it works like any of us, there will be
         | meetings to explain requirements to ChatGPT, or a bunch of
         | them, and more meetings to understand what it got wrong, why,
         | how to work around it, and how to keep it from happening
         | again.
        
         | baq wrote:
         | yeah, there are a few APIs it could learn to leverage:
         | 
         | - z3/prolog/coq
         | 
         | - maxima/mathematica
         | 
         | - wolfram alpha/google
         | 
         | it's quite close to being able to transform simple natural
         | language problems into solutions with code. 'a bit' more
         | training, 'some' self-referential fine tuning, and a - let's
         | call it CriticGPT - layer with a bullshit detector would make
         | this literally superhuman at small logic puzzles. the ultimate
         | proof assistant?
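         | 
         | for a taste of the z3 case (the Python API below is the real
         | z3-solver package; the encoding is what the model would have
         | to emit from the english statement of the puzzle):
         | 
         |   from z3 import Int, Solver, sat
         | 
         |   # "Alice is twice Bob's age; in five years their ages
         |   # will sum to 40." The solver does the actual reasoning.
         |   alice, bob = Int("alice"), Int("bob")
         |   s = Solver()
         |   s.add(alice == 2 * bob, (alice + 5) + (bob + 5) == 40)
         |   if s.check() == sat:
         |       print(s.model())  # [bob = 10, alice = 20]
         | 
         | the model only needs to translate english into constraints;
         | correctness then comes from the solver, not the network.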
        
       | dark-star wrote:
       | Wow. This (especially the chat transcript) must be heavily
       | edited, right? RIGHT? Please tell me it's edited for
       | correctness...
       | 
       | Because this makes me both extremely in awe and also extremely
       | uneasy at the same time...
        
         | djexjms wrote:
         | The raw transcripts are linked from the article.
        
       | just4nornies wrote:
       | Ask #ChatGPT to list all ways to do X
       | 
       | If it misses a way you know then ask if it knows about it
       | 
       | If it confirms, ask it why it didn't include it
       | 
       | In my case, it crashed (on two such attempts with different
       | questions, happy to share)
       | 
       | It seems to recall answers it has seen but is not able to
       | expand on those answers with knowledge it already has
       | 
       | That's because it does not have knowledge. It has a bunch of
       | correlations, encoders, decoders, etc. But not actual ontological
       | or epistemological knowledge of any kind.
        
         | agravier wrote:
         | What do you mean by "it crashed". No output?
        
           | just4nornies wrote:
           | At first, there was no output. After maybe 3-4 minutes it
           | displayed "An error occurred ... contact help etc" message. I
           | waited about 30 seconds then tried again and it said that was
           | too fast. So I waited a whole minute and tried again and it
           | immediately spat out the "An error occurred" message. The
           | idea that it crashed stuck with me because of how it
           | streams its output, which was empty for 3-4 minutes on the
           | first try.
           | 
           | EDIT:
           | 
           | To clarify, based on the above sequence of events and
           | timings, I figured the inference instance crashed and the
           | API layer returned an error and then cached it.
        
             | elcomet wrote:
             | It has nothing to do with your prompt. The site is just
             | overloaded. It takes the same amount of time for every
             | prompt, no matter its complexity or simplicity; it's just
             | the length that matters.
        
               | just4nornies wrote:
                | Well, I haven't been able to get an answer. It always
                | gets stuck at that question and then shows an error
                | message. Something is definitely wrong, but I can't
                | debug it on HN... so
        
       | [deleted]
        
       ___________________________________________________________________
       (page generated 2022-12-06 23:00 UTC)