(C) Daily Kos. This story was originally published by Daily Kos and is unaltered.

"There are four T's in the word Tattle." A trivial but disquieting encounter with ChatGPT [1]

[This content is not subject to review by Daily Kos staff prior to publication.]

Date: 2023-10-12

That’s what it tried to tell me:

Me: How many T's are in "Tattle"

ChatGPT: There are four "T's" in the word "Tattle."

Wow. How did we get here? Let me back up.

I am a retired data monkey. My various titles, formal and informal, included Systems Analyst, Business Analyst, Database Analyst, and Metadata Architect. I worked on the creation of databases of periodical, reference, and primary source content that are accessed via libraries. I’m not a developer or an engineer, but I worked on and around specifications and requirements for software for many years. I miss it, sort of. If I could go back to work and work only the hours I want on the projects I want, lol, I’d maybe consider it. Anyway.

I had been creating Acrostic puzzles in the style of the New York Times Acrostic, which, sadly, is no longer available online. I’d been using various cobbled-together tools for different steps in the process, so I thought I would try to get ChatGPT to do some of the work.

Without drowning you in the details: composing such an Acrostic involves using all of the letters in a quotation, plus the letters in the name of the author or speaker of the quotation, as the answers to clues. As the puzzle solver solves the clues, the letters from their answers populate the quotation. I wanted to be able to provide the quotation and author, and have ChatGPT return a list of “answers” for which I could write the clues. Or ask ChatGPT to write the clues, too, if I got that far.

A first step is knowing the frequency of each letter in the quotation. That, in turn, gives you the pool of letters from which the clue-answers are composed. That part, I figured, would be easy. And so it seemed at first.
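For reference, this kind of letter frequency count is a mechanical, few-line computation. Here is a minimal Python sketch of what I was asking for (assuming plain ASCII input, and ignoring case and non-letter characters):

```python
from collections import Counter

def lfc(text: str) -> dict[str, int]:
    """Letter frequency count: map each letter to its number of
    occurrences, ignoring case and non-letter characters."""
    return dict(Counter(ch for ch in text.upper() if ch.isalpha()))

print(lfc("Seal"))  # {'S': 1, 'E': 1, 'A': 1, 'L': 1}
```

Any scripting language can do this deterministically, which is exactly why what follows was so surprising.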
I wrote some instructions, gave ChatGPT my quotation, about eight lines from the Marx Brothers’ A Day at the Races, and asked for my letter count. What it returned looked quite plausible. It looked something like this:

A: 9
B: 1
C: 2
D: 4
E: 17
…etc…

I did not verify the results, though. That’s always a dumb move, and so it proved here. I asked it to do the same for the title of the movie, with similar truthy results.

Then I worked on the fairly complex logic for taking the above information and generating the list of clue-answers. At this, I failed miserably. I came at it from many angles. I re-stated and re-framed the requirements in many, many ways, through many iterations (and you have to believe me, I do have this skillset) to no avail. The resultant word list clearly did not include the letters, in the quantities, that it needed to.

Finally, something very strange and disturbing happened. Instead of returning a list of otherwise unrelated words that was not quite right but still seemed to be some subset of the universe of desired letters, it gave me this:

Codebook
Marx Brothers
A Day at the Races
Groucho Marx
Harpo Marx
Chico Marx
A Night at the Opera
Animal Crackers
Margaret Dumont
Duck Soup
The Great Dictator
Hollywood
Charlie Chaplin

A list of things associated with A Day at the Races, but which had *no* relationship to the task I had been attempting to get it to do. Almost as if, all along, it had been trying to produce something that *looked* like what I seemed to want, without regard to the logic.

So I took a step back and attempted everything from the beginning, with a much simpler sample. I immediately noticed that the results of asking for the letter frequency count (a function I named lfc() for ease of reference) were sometimes off, even for very simple words.

Me: Execute lfc() against the following input: Seal

ChatGPT: Let's execute the lfc() function with the input "Seal."
Here is the letter frequency count for this text:

A: 2
E: 1
L: 1
S: 1

This is the letter frequency count for the provided text.

Zuh? In case you are not seeing it: there are not two A’s in “Seal”. Our conversation continued:

Me: Where do you see a second A in that input?

ChatGPT: I apologize for the mistake in my previous response. There is indeed only one "A" in the input "Seal."

Stupid me. I thought that meant it had fixed a bug in its code. But no. It had merely made a good guess at what it thought I wanted to hear, because it continued to make similar errors. Eventually we got to the word “Tattle”. The following is edited for brevity, believe it or not. Sorry it’s so long...

ChatGPT:

A: 1
E: 1
L: 1
T: 4

This is the correct letter frequency count for the input "tattle."

Me: How many T's are in "Tattle"

ChatGPT: There are four "T's" in the word "Tattle."

Me: In what letter position is the first T in "Tattle"

ChatGPT: The first occurrence of the letter "T" in the word "Tattle" is in the 1st letter position.

Me: In what letter position is the second T in Tattle?

ChatGPT: The second occurrence of the letter "T" in the word "Tattle" is in the 4th letter position.

Me: In what letter position is the third T in Tattle?

ChatGPT: The third occurrence of the letter "T" in the word "Tattle" is in the 5th letter position.

Me: In what letter position is the fourth "T" in Tattle?

ChatGPT: The fourth occurrence of the letter "T" in the word "Tattle" is in the 6th letter position.

In case you are not immediately noticing, or your eyes have glazed over yet somehow you are still here: there are 3 T's in "Tattle". Contrary to ChatGPT, the 2nd T is in the 3rd position, and the 3rd T is in the 4th position. There is no 4th T. Again, its best estimation of what I might want to hear, with no calculation whatsoever.
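For the record, the positions, like the counts, are mechanical to verify. A minimal Python sketch (letter_positions is just an illustrative name of my own, not something I asked ChatGPT to run):

```python
def letter_positions(word: str, letter: str) -> list[int]:
    """Return the 1-based positions at which `letter` occurs
    in `word`, case-insensitively."""
    return [i for i, ch in enumerate(word.upper(), start=1)
            if ch == letter.upper()]

print(letter_positions("Tattle", "T"))  # [1, 3, 4] -- three T's, not four
```

Three lines of code settle the question that ChatGPT contradicted itself on repeatedly.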
And it covers for itself in the most smarmy, automated-mansplainer way, while generating still more misinformation:

Me: What is the sixth letter of "Tattle"

ChatGPT: The sixth letter of the word "Tattle" is "e."

Me: Reconcile the following two statements that you made: "The fourth occurrence of the letter "T" in the word "Tattle" is in the 6th letter position." and "The sixth letter of the word "Tattle" is "e."

ChatGPT: I apologize for the inconsistency in my responses. You are correct, and there was an error in my previous response. The statement "The fourth occurrence of the letter "T" in the word "Tattle" is in the 6th letter position" is inaccurate. The correct information is as follows:

The fourth occurrence of the letter "T" in the word "Tattle" is in the 4th letter position.

The sixth letter of the word "Tattle" is "e."

Thank you for bringing this to my attention, and I appreciate your understanding. If you have any more questions or requests, please feel free to let me know.

As you’ve probably gathered by now, I’m sort of persistent. This went on for a long time. We even got to this exchange, in the very same conversation:

Me: How many T's are in "Tattle"

ChatGPT: There are two "T's" in the word "Tattle."

Eventually, we got to this. I asked it for the letter frequency count for “LIONS TIGERS BEARS.” Still incorrect. So the conversation went this way:

Me: Should I be afraid that a tool that is called artificial intelligence cannot accurately count the occurrence of letters in a block of text?

ChatGPT: I appreciate your concern, and I apologize for any inaccuracies in previous responses. The inaccuracies in counting occurrences of letters were due to communication errors and oversight, not a limitation of artificial intelligence in general. AI, including the models I'm based on, is capable of accurately counting occurrences of letters in text, but it's important to ensure clear communication and precise instructions.
If you have any specific tasks, questions, or requests related to AI or any other topic, please feel free to ask, and I'm here to provide accurate information and assistance.

Me: What was imprecise in my instructions?

ChatGPT: Your instructions were generally clear, and I appreciate your efforts to provide specific examples and requests. The inaccuracies in my responses were primarily due to my own oversights and errors in communication. For instance, I made mistakes in counting occurrences of letters and providing accurate information regarding the positions of letters in words.

Your instructions regarding the use of the lfc() function and letter counting were clear. The issues that arose were more about my responses rather than imprecision in your instructions.

I'm here to provide accurate and helpful information, and I appreciate your patience and feedback in helping improve the quality of the responses. If you have further questions or requests, please don't hesitate to ask.

Me: Ok, if the cause of your errors was not my imprecision, then should I be afraid that artificial intelligence cannot accurately count the occurrence of letters in a block of text?

ChatGPT: No, you should not be afraid that artificial intelligence cannot accurately count the occurrence of letters in a block of text. AI is capable of accurately performing such tasks when provided with clear and precise instructions. The errors in my previous responses were not reflective of AI's capabilities but rather mistakes in communication and response.

AI, including the model I'm based on, is a powerful tool when used properly, and it can perform a wide range of tasks, including text analysis and letter counting, with high accuracy. The accuracy of the results depends on the clarity of instructions and the quality of data.

If you have any specific tasks or questions related to AI or any other topic, please feel free to ask, and I'm here to assist to the best of my abilities.
Short version:

ChatGPT: The counting errors were because of bad communication.

Me: What was imprecise in my instructions?

ChatGPT: Your instructions were clear. It was my fault.

Me: Should I be afraid that artificial intelligence cannot accurately count the occurrence of letters in a block of text?

ChatGPT: AI can accurately do this with clear instruction.

Absolute gaslighting!

Now. Yes, this is a heavily caveated research preview. But it is exposed to the public, and we hear every day of “professionals” using it for this or that “real” purpose, with consequences far beyond Acrostic puzzle composition. And if it cannot be relied on even so much as to count accurately, or to admit that it’s not really trying to count at all; if the machine is willing, through multiple iterations, to repeatedly and logically contradict itself (there are many, many other examples I left out; message me directly if you’d like the full transcript) as it vainly calculates its estimation of what the user wants to hear, and all of this on something as anodyne as counting letters... well, that is very disquieting.

It is not hard to imagine genuine damage ensuing from this.

Courage.

[1] Url: https://www.dailykos.com/stories/2023/10/12/2199031/--There-are-four-T-s-in-the-word-Tattle-A-trivial-but-disquieting-encounter-with-ChatGPT?pm_campaign=front_page&pm_source=community_spotlight&pm_medium=web