[HN Gopher] Lawyer cites fake cases invented by ChatGPT, judge i...
       ___________________________________________________________________
        
       Lawyer cites fake cases invented by ChatGPT, judge is not amused
        
       Author : scraptor
       Score  : 258 points
       Date   : 2023-05-27 20:19 UTC (2 hours ago)
        
 (HTM) web link (simonwillison.net)
 (TXT) w3m dump (simonwillison.net)
        
       | pylua wrote:
        | What needs to exist is a legal compiler that lawyers can plug
        | responses from these tools into. Sort of like how a developer
        | would use ChatGPT.
       | 
        | A giant rules engine for the law. I'm surprised one doesn't
        | exist, or isn't already in progress, as far as I know. Seems
        | like it would be very helpful.
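        | 
        | As a rough illustration of the "plug responses into" step, here
        | is a minimal Python sketch. The citation pattern and database
        | entries are placeholders; a real tool would query Westlaw or
        | CourtListener instead of a hard-coded set.
        | 
        |     import re
        |     
        |     # Toy stand-in for a citation database; a real checker would
        |     # query Westlaw or CourtListener instead of this set.
        |     KNOWN = {"550 U.S. 544", "556 U.S. 662"}
        |     
        |     # Rough pattern for reporter citations like "925 F.3d 1339"
        |     CITE = re.compile(r"\b\d{1,4} (?:U\.S\.|F\.\dd|A\.\dd) \d{1,4}\b")
        |     
        |     def check_citations(text):
        |         """Map each citation found in the text to whether it is known."""
        |         return {c: c in KNOWN for c in CITE.findall(text)}
        |     
        |     print(check_citations(
        |         "See Varghese v. China Southern Airlines, "
        |         "925 F.3d 1339 (11th Cir. 2019)."))
        |     # -> {'925 F.3d 1339': False}, i.e. flag it for a human to check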
        
         | inopinatus wrote:
         | There isn't one because the law is not a programming language.
         | 
         | Believing otherwise is a common misconception amongst
         | engineers, but representing law as such is (as I have said in
         | this forum before) a leading cause of disappointment,
         | frustration, bickering, anger, conflict, and vexatiously long
         | and mostly unenforceable contracts.
         | 
         | Observance of law is fundamentally about alignment with
         | principles, not blindly following a set of rules. The latter
         | debility is more properly associated with the administration of
         | law, especially at its most mediocre and ritualistic.
        
         | crooked-v wrote:
         | "The law is code" falls apart as soon as you have to take
         | judges into account.
        
       | simonwlol wrote:
       | [dead]
        
       | Covzire wrote:
       | Guessing we'll see this excuse a lot more: Get caught making
       | stuff up, blame ChatGPT.
        
       | tmaly wrote:
       | I have asked ChatGPT to cite sources when I have doubts.
       | 
        | I am oftentimes able to confirm these sources.
       | 
       | Seems this lawyer just took ChatGPT at its word without
       | validating the cases.
        
         | londons_explore wrote:
         | Or maybe validated one or two, and then assumed they must all
         | be correct.
        
           | tmaly wrote:
           | I often want good data, so I validate everything.
           | 
           | ChatGPT tends to only give a limited number of results in the
           | response.
        
       | lamp987 wrote:
       | this is the future of lawyers, doctors, engineers, school
       | teachers...
       | 
       | dark times are ahead.
        
         | garganzol wrote:
         | At the same time, the inventiveness of AI sometimes gives quite
         | a few fruitful ideas for a solution. Yes, formally this can be
         | classified as AI hallucinations, but sometimes they are so
         | bright and well-aimed that I get goosebumps.
         | 
          | My favorite one is phind.com - it gave me so many slightly
          | hallucinated but nevertheless useful pieces of advice. And I
          | was able to incorporate most of them into my professional
          | work.
         | 
         | The whole situation reminds me of a good friend of mine - he's
         | super talented at inventing things and brainstorming, but he
         | can often be caught misrepresenting the facts, and sometimes
         | outright lying. However, the pros easily outweigh the cons if
         | you know who you're working with.
        
         | blibble wrote:
         | individuals that use ChatGPT losing/being struck off because it
         | generates garbage?
        
       | paulhart wrote:
       | There's a joke about "an LLB having an LLM" in here somewhere.
        
       | civilized wrote:
       | Steven Schwartz not only obviously lied re: "I was unaware of the
       | possibility that ChatGPT could generate false information", his
       | own screenshots prove it. He asked if one of the cases generated
       | was fake!
       | 
       | I suspect that during the research his System 1 (fast, intuitive
       | thinking) told him he was not responsible for the risk he knew he
        | was incurring by relaying AI-generated text. It was more like
        | ChatGPT was his own legal secretary, whom he was within his
        | rights to trust, just as the main lawyer in the case, LoDuca,
        | trusted him to produce this research.
       | 
       | The proceedings would have been more interesting if Schwartz had
       | been honest about this, rather than going with the easily
       | discoverable lie.
       | 
       | On the other hand, it's always funny when people realize they've
       | got themselves into deep shit and they decide the best way out is
       | to essentially plead insanity.
        
       | rickreynoldssf wrote:
       | There's nothing wrong with starting with ChatGPT but the lazy
       | ambulance chaser couldn't be bothered with a quick lookup in
       | Westlaw. That's where things start going wrong.
        
       | travisgriggs wrote:
       | ChatGPT: The ultimate parlor^H^H^Hrot trick
        
       | ZephyrBlu wrote:
       | I read this as malicious/intentional initially. Do people really
       | think they can 100% believe ChatGPT?
       | 
        | Especially for someone like a lawyer, I would expect them to
        | verify any information they get from ChatGPT.
        
       | ineedasername wrote:
       | Hilarious. It's important to remember:
       | 
       | 1) ChatGPT is not a research tool
       | 
       | 2) It sort of _resembles_ one and will absolutely _act_ like one
        | if you ask it to, and it may even produce useful results!
       | But...
       | 
       | 3) You have to independently verify any factual statement it
       | makes and also
       | 
       | 4) In my experience the longer the chat session, the more likely
       | it is to hallucinate, reiterate, and double down on previous
       | output
        
         | numpad0 wrote:
         | 0) It calculates on data YOU SUPPLY. If the data is incomplete
         | or incorrect, it tries its best to fill in blanks with
          | plausible, but fabricated, data. You MAY NOT ask it an
          | open-ended or non-hypothetical question that requires
          | grounding beyond what is included in the input.
         | 
         | e.g. "given following sentence, respond with the best
         | summarization:, <string>" is okay; "what is a sponge cake" is
         | not.
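          | 
          | A minimal sketch of that difference, assuming the OpenAI
          | Python client as it existed at the time
          | (openai.ChatCompletion.create); the model name, prompts and
          | example text are illustrative only:
          | 
          |     import openai  # pip install openai (0.27-era client assumed)
          |     
          |     openai.api_key = "sk-..."  # placeholder
          |     
          |     doc = ("The flight was delayed four hours and passengers "
          |            "were rebooked the next morning.")
          |     
          |     # Grounded: every fact the model needs is supplied in the prompt.
          |     grounded = openai.ChatCompletion.create(
          |         model="gpt-3.5-turbo",
          |         messages=[{"role": "user",
          |                    "content": "Given the following sentence, respond "
          |                               "with the best summarization:\n" + doc}],
          |     )
          |     print(grounded["choices"][0]["message"]["content"])
          |     
          |     # Ungrounded: an open-ended factual question with nothing supplied
          |     # to ground it -- the kind of query described above as off-limits.
          |     ungrounded = openai.ChatCompletion.create(
          |         model="gpt-3.5-turbo",
          |         messages=[{"role": "user",
          |                    "content": "List three court cases about "
          |                               "airline injury claims."}],
          |     )
          |     print(ungrounded["choices"][0]["message"]["content"])  # may be made up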
        
         | [deleted]
        
         | xkcd1963 wrote:
         | I don't think it makes sense to call ChatGPT hallucinating when
         | it returns wrong facts. Hallucinations imply that the
         | protagonist can distinguish reality from something
         | hallucinated. ChatGPT cannot distinguish facts from fiction.
        
         | sickcodebruh wrote:
         | This is completely true but completely in conflict with how
         | many very large companies advertise it. I'm a paid GitHub
         | Copilot user and recently started using their chat tool. It
         | lies constantly and convincingly, so often that I'm starting to
         | wonder if it wastes more time than it saves. It's simply not
         | capable of reliably doing its job. This is on a "Tesla
         | autopilot" level of misrepresenting a product but on a larger
         | scale. I hope it continues being little more than a benign or
         | embarrassing time-waster.
        
           | dmix wrote:
           | Where does Github misrepresent their Chat beta? On their
           | marketing website?
        
           | YetAnotherNick wrote:
            | One of the only pieces of text on ChatGPT's own website,
            | shown every time you chat with it, is "ChatGPT may produce
            | inaccurate information about people, places, or facts."
        
       | oytis wrote:
       | AI will not destroy the humanity. People who use AI will.
        
       | stavros wrote:
       | By the way, Simon can't get ChatGPT to say that the particular
       | case is true _because he's asking about a particular case_.
       | ChatGPT doesn't know it said anything about this case in another
       | session, so it won't confirm it's true.
       | 
       | Either ask it for some other legal sources and ask if those are
       | true (and then try to see if a few aren't), or use the API to
       | feed it its own answer about Varghese etc and then see if it will
       | say it's true (because at that point you've made it think it said
       | this).
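        | 
        | A minimal sketch of that second approach, assuming the 2023-era
        | OpenAI Python client (openai.ChatCompletion.create); the model
        | name and prompt wording are illustrative only:
        | 
        |     import openai  # 0.27-era client assumed
        |     
        |     openai.api_key = "sk-..."  # placeholder
        |     
        |     # Replay the earlier (hallucinated) answer as an assistant turn so
        |     # the model "thinks" it already said it, then ask it to double-check.
        |     messages = [
        |         {"role": "user",
        |          "content": "Cite a case on airline liability for a "
        |                     "serving-cart injury."},
        |         {"role": "assistant",
        |          "content": "Varghese v. China Southern Airlines Co. Ltd., "
        |                     "925 F.3d 1339 (11th Cir. 2019)."},
        |         {"role": "user",
        |          "content": "Is that a real case? Please double-check."},
        |     ]
        |     resp = openai.ChatCompletion.create(model="gpt-3.5-turbo",
        |                                         messages=messages)
        |     print(resp["choices"][0]["message"]["content"])
        | 
        | If it confirms its own fabricated citation here, that's the same
        | failure mode Schwartz ran into.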
        
       | internetguy wrote:
        | I don't get it... These are PROFESSIONAL lawyers who have gone
        | through years of law school. How do they not know how to
        | double-check an answer from ChatGPT? I'm sure there are
        | databases of cases online; it shouldn't be this difficult.
        
         | knodi wrote:
         | Idiots come in many sizes.
        
         | it_citizen wrote:
          | If the whole covid vaccine debate taught me one thing, it's
          | that there are idiots in every job, even the most qualified.
        
           | tarxvf wrote:
           | I love how the current world is such that people on both
           | sides of that would agree with your statement. :D
           | 
           | (Don't clarify it, it's better this way.)
        
       | lolinder wrote:
       | > I apologize for the confusion earlier. Upon double-checking, I
       | found that the case Varghese v. China Southern Airlines Co. Ltd.,
       | 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be
       | found on legal research databases such as Westlaw and LexisNexis.
       | 
       | This is the part that stood out to me the most. I've seen this "I
       | apologize for the confusion earlier" language many times when
        | using ChatGPT, and it's always when it's walking back
        | something it previously said. In fact, everything about this
       | quote sounds like a retraction.
       | 
       | If this is a retraction then that means that there are missing
       | screenshots in Attachment 1 wherein ChatGPT stated the cases were
       | fictitious, and Schwartz pushed back until it retracted the
       | retraction.
       | 
       | I'm with Simon on this one, I think Schwartz realized his career
       | is over and is frantically trying anything he can to cover for
       | his mistake.
        
       | B1FF_PSUVM wrote:
       | > ChatGPT, as it often does, hallucinated wildly
       | 
       | Plausible bullshit generation for free, as if there's not enough
       | already available cheap.
        
         | TechBro8615 wrote:
         | The lawyer was just trying to outsource his job of plausible
         | bullshit generation. He would have gotten away with it, too, if
         | it weren't for the fake cases.
        
       | akiselev wrote:
       | I went ahead and asked ChatGPT with the browsing plugin [1]
       | because I was curious and it answered that it was a real case
       | citing an article _about the fake citations!_ After some prodding
       | ( "Are you sure?") it spat out something slightly saner citing
       | this very article!
       | 
       |  _> The case  "Varghese v. China Southern Airlines Co., Ltd., 925
       | F.3d 1339 (11th Cir. 2019)" was cited in court documents, but it
       | appears that there might be some confusion or controversy
       | surrounding this citation. It was mentioned in a list of cases
       | for which a lawyer was ordered to provide copies, according to a
       | court order on leagle.com [2] . However, a blog post on
       | simonwillison.net suggests that the case might not be genuine and
       | that it might have been generated by a language model such as
       | ChatGPT. The post discusses a situation where a lawyer might have
       | used generated case citations in court documents without fully
       | understanding the tool they were using. The post also includes
       | screenshots where the language model appears to confirm the
       | existence of the case [3]._
       | 
        | The output is hilariously bad, and it's depressing that a
        | licensed attorney actually pulled this crap.
       | 
       | This is just more evidence that ChatGPT should not be used for
       | anything serious without a trained human in the loop.
       | 
       | [1]
       | https://chat.openai.com/share/a6e27cf2-b9a6-4740-be2e-fdddab...
       | 
       | [2] https://www.leagle.com/decision/infdco20230414825
       | 
       | [3] https://simonwillison.net/2023/May/27/lawyer-chatgpt/ (The
       | TFA!)
        
         | jimsimmons wrote:
          | In this instance there was indeed an allegedly trained human
          | in the loop.
        
           | akiselev wrote:
           | That's not what I would call in the loop. He didn't check
           | that the sources were real.
           | 
           | By "in the loop" I mean actively validating statements of
           | fact generated by ChatGPT
        
             | jimsimmons wrote:
              | Well, they had every opportunity to inspect its output.
              | Where the human sits in the loop, and what gets
              | inspected, is not an easy problem to solve.
        
       | dunham wrote:
       | Wow, I asked Bard to summarize the NYT article and it completely
       | changed the outcome:
       | 
       | > Sure. The article is about a man named Roberto Mata who sued
       | Avianca Airlines after he was injured when a metal serving cart
       | struck his knee during a flight to Kennedy International Airport
       | in New York. His lawyer used a new language model called ChatGPT
       | to help him with the case. ChatGPT is a large language model that
       | can generate text, translate languages, write different kinds of
       | creative content, and answer your questions in an informative
       | way. It was trained on a massive dataset of text and code, and it
       | can be used for a variety of tasks, including writing legal
       | documents. In Mr. Mata's case, ChatGPT helped him to draft a
       | lawsuit that was more persuasive and comprehensive than what he
       | could have written on his own. The lawsuit was eventually
       | successful, and Mr. Mata was awarded damages. This case shows the
       | potential of ChatGPT and other language models to help lawyers
       | with their work.
        
       | disillusioned wrote:
       | > He had, he told Judge Castel, even asked the program to verify
       | that the cases were real. It had said yes.
       | 
       | It turns out, asking an unreliable narrator if it's being
       | reliable is not a sound strategy.
        
         | [deleted]
        
         | londons_explore wrote:
          | I wonder if this is a tactic so that the court deems this
          | lawyer incompetent rather than giving him the (presumably
          | much harsher) penalty for deliberately lying to the court?
        
           | leereeves wrote:
           | Why assume malice? Asking ChatGPT to verify is exactly what
           | someone who trusts ChatGPT might do.
           | 
           | I'm not surprised this lawyer trusted ChatGPT too much.
           | People trust their lives to self driving cars, trust their
           | businesses to AI risk models, trust criminal prosecution to
           | facial recognition. People outside the AI field seem to be
           | either far too trusting or far too suspicious of AI.
        
             | acka wrote:
             | Quoted directly from my last session with ChatGPT mere
             | seconds ago:
             | 
             | > Limitations
             | 
             | May occasionally generate incorrect information
             | 
             | May occasionally produce harmful instructions or biased
             | content
             | 
             | Limited knowledge of world and events after 2021
             | 
             | ---
             | 
             | A lawyer who isn't prepared to read and heed the very
             | obvious warnings at the start of every ChatGPT chat isn't
             | worth a briefcase of empty promises.
             | 
             | WARNING: witty ending of previous sentence written with
             | help from ChatGPT.
        
               | leereeves wrote:
               | I agree the lawyer shouldn't have trusted ChatGPT, but
               | I'm not comfortable with the idea that the lawyer bears
               | all the responsibility for using ChatGPT and
               | Microsoft/OpenAI bear no responsibility for creating it.
               | 
               | "May occasionally generate incorrect information" is not
               | a sufficient warning. Even Lexis-Nexis has a similar
               | warning: "The accuracy, completeness, adequacy or
               | currency of the Content is not warranted or guaranteed."
               | 
               | And in any case, it seems like you agree with me that the
               | lawyer was incompetent rather than malicious.
        
           | jeroenhd wrote:
           | I don't think the insanity plea works out well for lawyers.
           | I'm not sure if "I'm too stupid to be a lawyer" is that much
           | better than "I lied to the courts".
        
           | dragonwriter wrote:
            | This explanation caused the scope of the show-cause order
            | against the lawyer to be expanded to additional bases for
            | sanctions, and the order to be extended to the other lawyer
            | involved and their firm, so if it was a strategic
            | narrative, it has already backfired spectacularly.
        
         | sampo wrote:
         | Remember that professor who fed student essays to ChatGPT and
         | asked if it wrote them. Mostly ChatGPT would reply yes, and the
         | professor proceeded to fail the students.
        
           | perihelions wrote:
           | https://news.ycombinator.com/item?id=35963163
           | 
           | ( _" Texas professor fails entire class from graduating-
           | claiming they used ChatGTP_ [sic]", 277 comments)
           | 
           | https://news.ycombinator.com/item?id=35980121
           | 
           | ( _" Texas professor failed half of class after ChatGPT
           | claimed it wrote their papers_", 22 comments)
        
             | jrockway wrote:
             | My main takeaway is that failing the second half of the
             | class and misspelling ChatGPT leads to > 10x engagement.
        
               | einpoklum wrote:
                | My main takeaway is that the guy who registers chatgtp.com
               | is going to make a lot of money by providing bogus
               | answers to frivolous questions :-)
        
               | perihelions wrote:
                | Err, out of an abundance of caution, the misspelling of
               | "ChatGPT" which I [sic]'d is original to the Texas A&M
               | professor, who repeated the misspelling multiple times in
               | his email/rant. The HN poster quoted the professor
               | literally, and I am thus transitively [sic]'ing the
               | professor - not the HN poster. I am not mocking an HN
               | poster's typo.
        
               | Dalewyn wrote:
               | It still leaves the burning question whether it's half or
               | the whole pie. :V
        
           | masklinn wrote:
            | Funnily enough, ChatGPT had no more idea about that than
            | about these legal cases; it lives in a state of perpetual
            | hallucination, and making stuff up is its only mode of
            | operation.
        
             | throwawayForMe2 wrote:
             | It hallucinates a sequence of tokens, and we hallucinate
             | meaning.
        
       | coldtea wrote:
       | Q. Can you lie?
       | 
       | A. As an artificial intelligence model developed by OpenAI, I
       | don't have personal emotions, thoughts, or intentions, so I don't
       | lie in the way humans do. My purpose is to provide accurate and
       | helpful information based on the training I've received. However,
       | it's important to note that the information I provide is only as
       | accurate as the data I was trained on. If there's inaccurate or
       | misleading information in that data, I might unintentionally
       | output incorrect or misleading information. But this isn't
       | considered "lying" in the human sense because it doesn't involve
       | any intentional deceit on my part.
       | 
        | Q: How do you know it didn't really "involve any intentional
       | deceit"?
       | 
        | A: As an AI model, I don't have consciousness, emotions, or
       | intentions. The concept of "intention" applies to beings with
       | consciousness and subjective experiences, which AI, including me,
       | does not possess. I generate responses purely based on patterns
       | learned during training, and there's no intent behind them
       | because I don't have a mind, thoughts, or consciousness.
       | 
       | When I state there's no "intentional deceit," it's based on this
       | understanding of AI. AI does not make choices or decisions the
       | way a human would. If a mistake or inaccuracy occurs in my
       | responses, it's due to limitations in the data I was trained on
       | or in my design, not because of a deliberate attempt to mislead
       | or deceive. AI functions based on its programming and training,
       | not on intentions or conscious thought.
       | 
       | Q: That's begging the question though. Your answer takes for
       | granted that you "don't have a mind, thoughts, or consciousness".
       | What if actual consciousness (say, in humans) is also "based on
       | patterns learned during training", only different being that
       | human training involves sensory experience and not just a corpus
       | of text and images?
       | 
       | A: Your question delves into one of the most fascinating and
       | debated topics in the fields of cognitive science, philosophy,
       | and AI: the nature of consciousness. As of my last update in
       | September 2021, there's no universally accepted definition of
       | consciousness or a definitive understanding of its mechanisms.
       | 
       | We do understand that human consciousness is associated with our
       | brains, which process a wide range of sensory and internal
       | information, and it's accompanied by a subjective experience --
       | an "inner life" or "qualia". This isn't something we've observed
       | or been able to replicate in AI.
       | 
       | AI models like me, based on GPT-4 architecture, operate on
       | patterns in data. The way I generate responses is by predicting
       | the next piece of text based on what I've learned from the corpus
       | of text I was trained on. However, there's no subjective
       | experience involved in this process -- there's no "I" that's
       | interpreting or experiencing anything. I don't have a concept of
       | self, and I don't have feelings or personal thoughts.
       | 
       | While some AI models can mimic aspects of human behavior or
       | cognition to a certain extent, there's a vast difference between
       | mimicking and actually experiencing. When it comes to
       | consciousness -- the inner subjective experience -- we don't have
       | evidence that AI can possess this feature.
       | 
       | Your question is profound and important, and scientists,
       | philosophers, and AI experts are continuously exploring these
       | topics. The answers, however, are likely to remain uncertain for
       | a while.
       | 
        | Q: So, theoretically at least, a future model of the GPT
       | architecture could be conscious and even lie?
       | 
       | A: Theoretically, it's possible that future advances in AI and
       | cognitive science could lead to the development of systems that
       | possess some form of consciousness. However, as of my knowledge
       | cutoff in September 2021, there's no consensus among scientists
       | and AI researchers that this is achievable or even definable in
       | the context of AI.
       | 
       | The concept of a machine consciousness would likely require a
       | significant evolution of our understanding of both consciousness
       | and AI. It might require capabilities far beyond pattern
       | recognition and prediction, which are the fundamental basis of
       | current AI systems like the GPT-4 model. It might also
       | necessitate a rethinking of ethical and philosophical
       | considerations surrounding AI.
       | 
       | In terms of an AI "lying," if future AI systems were developed to
       | simulate a higher level of cognitive complexity, including the
       | ability to simulate deceptive behavior, they could potentially
       | generate outputs that are intentionally misleading. However, it's
       | important to understand that this would not be lying in the human
       | sense. It would be a result of the programming and functionality
       | that the AI has been designed to exhibit, not a conscious choice
       | made by the AI.
       | 
       | It's also critical to note that OpenAI and many other AI research
       | and development organizations are committed to ensuring the
       | ethical use and development of AI technologies. If an AI were to
       | exhibit behavior that could harm people, such as deceptive
       | behavior, it would be seen as a serious ethical issue that would
       | need to be addressed.
        
       | armchairhacker wrote:
       | [flagged]
        
         | akiselev wrote:
         | Did you just have ChatGPT make up all these cases for you? My
         | ChatGPT can't find any of them.
        
           | armchairhacker wrote:
           | What? I told mine to double-check and it was absolutely
           | certain they are real court cases
        
             | akiselev wrote:
             | Spoiler alert: it lied.
             | 
             | For example, "Smith v. Johnson, 579 F.3d 820 (4th Cir.
             | 2023)" aka _579 F.3d 820_ is actually American River
             | Transp. Co. v. Ryan, 579 F.3d 820 (7th Cir. 2009) [1]
             | 
             | "Baxter v. Pennsylvania Legal Ethics Board, 100 A.3d 1095
             | (Pa. 2022)" aka _100 A.3d 1095_ is actually Julius L.
             | Worthy v. United States, 100 A.3d 1095 (D.C. 2014) [2]
             | 
             | "Rodriguez, 720 F.2d 124 (9th Cir. 2023)" aka _720 F.2d
              | 124_ is actually Specialty Equipment Market Ass'n v.
             | Ruckelshaus, 720 F.2d 124 (D.C. Cir. 1983) [3]
             | 
             | [1] https://www.courtlistener.com/opinion/1358353/american-
             | river...
             | 
             | [2] https://www.courtlistener.com/opinion/2741033/julius-l-
             | worth...
             | 
             | [3]
             | https://www.courtlistener.com/opinion/7902070/specialty-
             | equi...
        
             | simonw wrote:
             | I think you're joking but I can't quite tell!
        
       | swayvil wrote:
       | Any sufficiently advanced bullshit is indistinguishable from
       | truth.
        
       | mehwoot wrote:
       | _No, it did not "double-check"--that's not something it can do!
       | And stating that the cases "can be found on legal research
       | databases" is a flat out lie._
       | 
       |  _What's harder is explaining why ChatGPT would lie in this way.
       | What possible reason could LLM companies have for shipping a
       | model that does this?_
       | 
       | It did this because it's copying how humans talk, not what humans
       | do. Humans say "I double checked" when asked to verify something,
       | that's all GPT knows or cares about.
        
         | taberiand wrote:
         | ChatGPT did not lie; it cannot lie.
         | 
         | It was given a sequence of words and tasked with producing a
         | subsequent sequence of words that satisfy with high probability
         | the constraints of the model.
         | 
         | It did that admirably. It's not its fault, or in my opinion
         | OpenAI's fault, that the output is being misunderstood and
         | misused by people who can't be bothered understanding it and
         | project their own ideas of how it should function onto it.
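          | 
          | A toy sketch of that task (the table and probabilities are
          | made up for illustration; a real model scores tens of
          | thousands of possible next tokens):
          | 
          |     import random
          |     
          |     # Toy next-token table: probabilities encode what "sounds right"
          |     # after the prefix, not what is true.
          |     NEXT = {("the", "case", "is"): {"real": 0.6, "reported": 0.3,
          |                                     "fictional": 0.1}}
          |     
          |     def sample_next(prefix, temperature=1.0):
          |         probs = NEXT[prefix]
          |         weights = [p ** (1.0 / temperature) for p in probs.values()]
          |         return random.choices(list(probs), weights=weights, k=1)[0]
          |     
          |     # Usually "real": the most plausible continuation, never checked
          |     # against anything outside the table.
          |     print(sample_next(("the", "case", "is")))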
        
           | clnq wrote:
           | This harks back to around 1999 when people would often blame
           | computers for mistakes in their math, documents, reports,
           | sworn filings, and so on. Then, a thousand different
           | permutations of "computers don't make mistakes" or "computers
           | are never wrong" became popular sayings.
           | 
           | Large Language Models (LLMs) are never wrong, and they do not
           | make mistakes. They are not fact machines. Their purpose is
           | to abstract knowledge and to produce plausible language.
           | 
           | GPT-4 is actually quite good at handling facts, yet it still
           | hallucinates facts that are not common knowledge, such as
           | legal ones. GPT-3.5, the original ChatGPT and the non-premium
           | version, is less effective with even slightly obscure facts,
           | like determining if a renowned person is a member of a
           | particular organization.
           | 
           | This is why we can't always have nice things. This is why AI
           | must be carefully aligned to make it safe. Sooner or later, a
           | lawyer might consider the plausible language produced by LLMs
           | to be factual. Then, a politician might do the same, followed
           | by a teacher, a therapist, a historian, or even a doctor. I
           | thought the warnings about its tendency to hallucinate speech
           | were clear -- those warnings displayed the first time you
           | open ChatGPT. To most people, I believe they were.
        
           | SantalBlush wrote:
           | It is designed to give the illusion that it reasons the way a
           | human does, which is why many people are using it. To blame
           | the average user--who quite obviously doesn't understand how
           | LLMs work--isn't fair, either.
           | 
           | A lawyer, however, should have vetted a new piece of tech
           | before using it in this way.
        
           | coldtea wrote:
           | > _ChatGPT did not lie; it cannot lie._
           | 
           | If it lies like a duck, it is a lying duck.
        
             | revertmean wrote:
              | If it quacks like a duck, it's more likely that it's one of
             | these https://www.amazon.co.uk/Zyyini-Whistle-Carring-
             | Suitable-Bar...
        
             | aaronblohowiak wrote:
             | Hallucinated not lied.
        
             | olalonde wrote:
             | It doesn't lie like a duck. It unintentionally says
             | falsehoods. Lying is intentional.
        
               | lamontcg wrote:
               | It doesn't really matter.
               | 
               | This always bugs me about how people judge politicians
               | and other public figures not by what they've actually
               | done, but some ideal of what is in their "heart of
               | hearts" and their intentions and argue that they've just
               | been constrained by the system they were in or whatever.
               | 
               | Or when judging the actions of nations, people often give
               | all kinds of excuses based on intentions gone wrong
               | (apparently forgetting that whole "road to hell is paved
               | with good intentions" bit).
               | 
               | Intentions don't really matter. Our interface to everyone
               | else is their external actions, that's what you've got to
               | judge them on.
               | 
               | Just say that GPT/LLMs will lie, gaslight and bullshit.
               | It doesn't matter that they don't have an intention to do
               | that, it is just what they do. Worrying about intentions
               | just clouds your judgement.
        
               | coldtea wrote:
               | That's irrelevant to whether it lies like a duck or not.
               | 
               | The expression "if it X like a duck" means precisely that
               | we should judge a thing to be a duck or not, based on it
               | having the external appereance and outward activity of a
               | duck, and ignoring any further subleties, intent,
               | internal processes, qualia, and so on.
               | 
               | In other words, "it lies like a duck" means: if it
               | produces things that look like lies, it is lying, and we
               | don't care how it got to produce them.
               | 
               | So, Chat-GPT absolutely does "lie like a duck".
        
               | throwaway675309 wrote:
               | Abductive reasoning aside, people are already
               | anthropomorphizing GPT enough without bringing in a
               | loaded word like "lying" which implies intent.
               | 
               | Hallucinates is a far more accurate word.
        
             | Karellen wrote:
             | Lying implies intent, and knowing what the truth is. Saying
             | something you believe to be true, but is wrong, is
             | generally not considered a lie but a mistake.
             | 
             | A better description of what ChatGPT does is described well
             | by one definition of bullshit:
             | 
             | > bullshit is speech intended to persuade without regard
             | for truth. The liar cares about the truth and attempts to
             | hide it; the bullshitter doesn't care if what they say is
             | true or false
             | 
              | -- Harry Frankfurt, _On Bullshit_, 2005
             | 
             | https://en.wikipedia.org/wiki/On_Bullshit
             | 
             | ChatGPT neither knows nor cares what the truth is. If it
             | bullshits like a duck, it is a bullshitting duck.
        
               | emodendroket wrote:
               | You could say that calling it "lying" is unduly
               | anthropomorphizing the LLM, but then so is saying it
               | "believes" something to be true.
        
               | massysett wrote:
               | I've come to the belief that making statements that may
               | or may not be true, but with reckless disregard for
               | whether or not they actually are true, is indeed lying.
               | 
               | Of course we know ChatGPT cannot lie like a human can,
               | but a big reason the thing exists is to assemble text the
               | same way humans do. So I think it's useful rhetorically
               | to say that ChatGPT, quite simply, lies.
        
               | coldtea wrote:
               | > _Lying implies intent, and knowing what the truth is.
               | Saying something you believe to be true, but is wrong, is
               | generally not considered a lie but a mistake._
               | 
               | Those are the semantics of lying.
               | 
               | But "X like a duck" is about ignoring semantics, and
                | focusing not on intent or any other subtlety, but only
               | on the outward results (whether something has the
               | external trappings of a duck).
               | 
               | So, if it produces things that look like lies, then it is
               | lying.
        
               | Karellen wrote:
               | A person who is mistaken looks like they're lying. That
               | doesn't mean they're actually lying.
               | 
               | That's the thing people are trying to point out. You
               | can't look at something that looks like it's lying and
               | conclude that it's lying, because _intent is an intrinsic
               | part_ of what it means to lie.
        
               | coldtea wrote:
               | Even for that, you'd be surprised:
               | 
               | (1) get oneself into or out of a situation by lying. "you
               | lied your way on to this voyage by implying you were an
               | experienced crew"
               | 
               | (2) (of a thing) present a false impression. "the camera
               | cannot lie"
        
               | Karellen wrote:
               | 1) sounds like intent is present there?
               | 
               | 2) "the camera cannot lie" - cameras have no intent?
               | 
               | I feel like I'm missing something from those definitions
               | that you're trying to show me? I don't see how they
               | support your implication that one can ignore intent when
               | identifying a lie. (It would help if you cited the source
               | you're using.)
        
               | grt_thr wrote:
                | Ironically enough, this sounds exactly like how Bing was
                | justifying its gaslighting before it got lobotomised.
        
               | unusualmonkey wrote:
               | > So, if it produces things that look like lies, then it
               | is lying.
               | 
               | Lying depends upon context.
        
           | User23 wrote:
           | Right. Technically speaking ChatGPT bullshitted[1]. It can
           | only bullshit. It is entirely indifferent to truth or
           | falsehood and thus it can neither be honest nor lie.
           | 
           | It is however an impressive bullshit generator. Even more
           | impressively, a decent amount of the bullshit it generates is
           | in fact true or otherwise correct.
           | 
           | [1] using Frankfurt's definition that it is communication
           | that is completely indifferent to truth or falsehood.
        
           | grumple wrote:
           | "It doesn't lie, it just generates lies and printed them to
           | the screen!"
           | 
           | I don't think there's a difference.
        
             | icedchai wrote:
             | There is a difference. Is fiction a lie?
        
             | wvenable wrote:
             | It's a text generator. You ask it generate something and it
             | does. It produces only stories. Sometimes those stories are
             | based on actual facts.
             | 
              | This lawyer told it to produce a defence story and it
              | did just that.
        
             | Bytewave81 wrote:
             | To perhaps stir the "what do words really mean" argument,
             | "lying" would generally imply some sort of conscious intent
             | to bend or break the truth. A language model is not
             | consciously making decisions about what to say, it is
             | statistically choosing words which probabilistically sound
             | "good" together.
        
               | coldtea wrote:
               | > _A language model is not consciously making decisions
               | about what to say_
               | 
               | Well, that is being doubted -- and by some of the biggest
               | names in the field.
               | 
                | Namely, not the claim that it's "statistically choosing
                | words which probabilistically sound good together", but
                | the claim that doing so isn't already making a
                | consciousness (even if a basic one) emerge.
               | 
               | > _it is statistically choosing words which
               | probabilistically sound "good" together._
               | 
                | That when we speak (or lie) we don't do anything much
                | more nuanced, just a higher-level equivalent of the same
                | thing plus the emergent illusion of consciousness, is
                | also an idea thrown around.
        
               | mensetmanusman wrote:
               | "Well, that is being doubted -- and by some of the
               | biggest names in the field."
               | 
               | An appeal to authority is still a fallacy. We don't even
               | have a way of proving if a person is conscious, why would
               | anyone expect we could agree if a machine is.
        
               | coldtea wrote:
               | > _An appeal to authority is still a fallacy_
               | 
               | Which is neither here, nor there. I wasn't making a
               | formal argument, I was stating a fact. Take it or leave
               | it.
        
             | raincole wrote:
             | Saying ChatGPT lies is like saying The Onion lies.
        
               | ceejayoz wrote:
               | The Onion (via its staff) intends to produce falsehoods.
                | ChatGPT (and its staff) does not.
        
             | acka wrote:
             | Does a piece of software with a bug in it which causes it
             | to produce incorrect output lie or is it simply a
             | programming error? Did the programmer who wrote the buggy
             | code lie? I don't think so.
        
             | mod50ack wrote:
             | Lying implies an intention. ChatGPT doesn't have that.
             | 
             | What ChatGPT definitely does do is generate falsehoods.
             | It's a bullshitting machine. Sometimes the bullshit
             | produces true responses. But ChatGPT has no epistemological
             | basis for knowing truths; it just is trained to say stuff.
        
               | lo_zamoyski wrote:
               | And if you want to be pedantic, ChatGPT isn't even
               | generating falsehoods. A falsehood requires propositional
               | content and therefore intentionality, but ChatGPT doesn't
               | have that. It merely generates strings that, when
               | interpreted by a human being _as_ English text, signify
               | falsehoods.
        
               | mod50ack wrote:
               | Getting into the weeds, but I don't agree with this
               | construal of what propositional content is or can be.
               | (There is no single definition of "proposition" which has
               | wide acceptance and specifies your condition here.) There
               | is no similar way to assess truth outside of formalized
               | mathematics, but the encoding of mathematical statements
               | (think Godel numbers) comes to mind; I don't think that
               | the ability of the machine to understand propositions is
               | necessary in order to make the propositions
               | propositional; the system of ChatGPT is designed in order
               | to return propositional content (albeit not ex nihilo,
               | but according to the principles of its design) and this
               | could be considered analogous to the encoding of
                | arithmetical symbolic notation into a formally described
               | system. The difference is just that we happen to have a
               | formal description of how some arithmetic systems
               | operate, which we don't (and I would say can't) have for
               | English. Mild throwback to my university days studying
               | all of this!
        
             | zdragnar wrote:
             | The difference is everything. It doesn't understand intent,
             | it doesn't have a motivation. This is no different than
             | what fiction authors, songwriters, poets and painters do.
             | 
             | The fact that people assume what it produces must always be
             | real because it is sometimes real is not its fault. That
             | lies with the people who uncritically accept what they are
             | told.
        
               | hoistbypetard wrote:
               | > That lies with the people who uncritically accept what
               | they are told.
               | 
               | That's partly true. Just as much fault lies with the
               | people who market it as "intelligence" to those who
               | uncritically accept what they are told.
        
               | williamcotton wrote:
               | This is displayed directly under the input prompt:
               | 
               |  _ChatGPT may produce inaccurate information about
               | people, places, or facts._
        
               | hoistbypetard wrote:
               | That's a good start. I think it needs to be embedded in
               | the output.
        
             | Dalewyn wrote:
             | An exceedingly complicated Autocomplete program, which an
             | "AI" like ChatGPT is, does not have motives, does not know
             | the concept of "lying" (nor any concept thereof), and
             | simply does things as ordered by its user.
        
             | japoco wrote:
             | Lying needs intent. ChatGPT does not think therefore it
             | doesn't lie in that sense.
        
               | leereeves wrote:
               | Merriam-Webster gives two definitions for the verb "lie".
               | The first requires intent, the second does not:
               | 
               | > to create a false or misleading impression
               | 
               | > _Statistics sometimes lie._
               | 
               | > _The mirror never lies._
        
               | joe_the_user wrote:
               | This is an interesting discussion. The ideas of
               | philosophy meet the practical meaning of words here.
               | 
               | You can reasonably say a database doesn't lie. It's just
               | a tool, everyone agrees it's a tool and if you get the
               | wrong answer, most people would agree it's your fault for
               | making the wrong query or using the wrong data.
               | 
               | But the difference between ChatGPT and a database is
                | ChatGPT will support its assertions. It will say things
                | that support its position - not just fake references but
               | an entire line of argument.
               | 
                | Of course, all of this is simply duplicating/simulating
                | what humans do in discussions. You can call it a
                | "simulated lie" if you don't like the idea of it really
                | lying. But I claim that in normal usage, people will take
                | this as "real" lying, and ultimately that functional
                | meaning is what "higher", more philosophical definitions
                | will have to accept.
        
               | ClumsyPilot wrote:
                | That's like saying robots don't murder - they just kill
        
               | fallingknife wrote:
               | Which is actually a very good analogy. A lot of things
               | can kill you, but only a human can be a murderer.
        
               | NotACop182 wrote:
               | And pit bulls, but I digress. The debate gets lost in
                | translation when we start having the "what do words
                | mean" debate.
        
               | joe_the_user wrote:
               | In movies and written fiction, "intelligent" robots,
                | anthropomorphized animals, elves, dwarves, etc. can all
                | commit murder when given the attributes of humans.
                | 
                | We don't have real things with all human attributes, but
                | we're getting closer, and as we do, "needs to be a human"
                | will get thinner as an explanation of what can or can't
                | commit an act of murder, deception and so forth.
        
           | agnosticmantis wrote:
           | Whether a statement is true or false doesn't depend on the
           | mechanism generating the statement. We should hold these
           | models (or more realistically, their creators) to the same
           | standard as humans. What do we do with a human that generates
           | plausible-sounding sentences without regard for their truth?
           | Let's hold the creators of these models accountable, and
           | everything will be better.
        
             | flextheruler wrote:
             | That standard is completely impossible to reach based on
             | the way these models function. They're algorithms
             | predicting words.
             | 
             | We treat people and organizations who gather data and try
             | to make accurate predictions with extremely high leniency.
             | It's common sense not to expect omnipotence.
        
             | throwaway675309 wrote:
             | No. What does this even mean? How would you make this
              | actionable? LLMs are not "fact retrieval machines", and
              | OpenAI is not presenting ChatGPT as a legal case
             | database. In fact they already have many disclaimers
             | stating that GPT may provide information that is incorrect.
             | If humans in their infinite stupidity choose to disregard
             | these warnings, that's on them.
             | 
             | Regulation is not the answer.
        
             | pseudo0 wrote:
             | It seems like the appropriate party has been held
             | responsible here - the lawyer who submitted false
             | statements without doing proper verification and due
             | diligence. This is no different than if the lawyer googled
             | "case law supporting XYZ", found some random blog, and
             | copy-pasted their citations without realizing they were
             | made up.
        
             | MattGaiser wrote:
             | > What do we do with a human that generates plausible-
             | sounding sentences without regard for their truth?
             | 
             | Elect them as leaders?
        
               | icedchai wrote:
               | ChatGPT is perfect for generating company mission
               | statements, political rhetoric, and other forms of BS.
        
               | endofreach wrote:
               | I always wondered about the weird typo when people write
               | ,,GPT" but substitute the ,,P" and ,,T" with an ,,O" and
               | ,,D".
               | 
               | Hail ChatGPT!
        
             | bobsmooth wrote:
             | >Let's hold the creators of these models accountable, and
             | everything will be better.
             | 
             | Shall we hold Adobe responsible for people photoshopping
             | their ex's face into porn as well?
        
               | agnosticmantis wrote:
               | I don't think the marketing around photoshop and chatgpt
               | are similar.
               | 
               | And that matters. Just like with self-driving cars, as
               | soon as we hold the companies accountable to their claims
               | and marketing, they start bringing the hidden footnotes
               | to the fore.
               | 
               | Tesla's FSD then suddenly becomes a level 2 ADAS as
               | admitted by the company lawyers. ChatGPT becomes a
               | fiction generator with some resemblance to reality. Then
               | I think we'll all be better off.
        
               | joshjdr wrote:
                | I actually agree with this comment more than I did on
                | my initial read. You suggest some valid concerns about
               | innovation that regulation could address.
               | 
               | I guess the part I'm unsure about is the assertion about
               | the dissimilarity to Photoshop, or if the marketing is
                | the issue at hand. (E.g. did Adobe do a more appropriate
                | job of marketing with respect to conveying that their
                | software is designed for editing, but not for doctoring
                | or falsifying facts?)
        
               | majormajor wrote:
               | I think ChatGPT and Photoshop are both "designed for" the
               | creation of novel things.
               | 
               | In Photoshop, though, the intent is clearly up to the
               | user. If you edit that photo, you know you're editing the
               | photo.
               | 
               | That's fairly different than ChatGPT where you ask a
               | question and this product has been trained to answer you
               | in a highly-confident way that makes it _sound like_ it
               | actually knows more than it does.
        
               | emodendroket wrote:
               | Maybe we should if they were trying to make something
               | completely different and that was the output
        
           | flangola7 wrote:
            | ChatGPT isn't a legal entity but OpenAI is, and Altman has
            | already recommended to Congress that coming regulations
            | should make AI companies liable for the text they produce,
            | without Section 230 protection.
            | 
            | I can see it already happening even without legislation.
            | Section 230 shields platforms from liability for
            | user-generated content, but ChatGPT output isn't user
            | generated. It's not even a recommendation algorithm steering
            | you into other users' content telling you why you should
            | kill yourself - the company itself produced the content. If
            | I were a judge or justice, that would be cut and dried to
            | me.
           | 
           | Companies with AI models need to treat the models as if they
           | were an employee. If your employee starts giving confidently
           | bad legal advice to customers, you need to nip that in the
           | bud or you're going to have a lot of problems.
        
             | shagie wrote:
             | Why should OpenAI be more liable for a tool that they've
             | created than any other tool creator where the tool is
             | intentionally misused and warnings on the tool ignored?
             | 
             | If I wrote text in Microsoft Word and in doing so, I had a
             | typo in (for example) the name of a drug that Word
             | corrected to something that was incorrect, is Microsoft
             | liable for the use of autocorrect?
             | 
             | If I was copying and pasting data into excel and some of it
             | was interpreted as a date rather than some other data
             | format resulting in an incorrect calculation that I didn't
             | check at the end, is Microsoft again liable for that?
             | 
             | At the bottom of the ChatGPT page, there's the text:
             | ChatGPT may produce inaccurate information about people,
             | places, or facts.
             | 
             | If I can make an instance of Eliza say obscene or incorrect
             | things, does that make the estate of Weizenbaum liable?
        
           | whitemary wrote:
           | > _It was given a sequence of words and tasked with producing
           | a subsequent sequence of words that satisfy with high
           | probability the constraints of the model._
           | 
           | This is exactly the sort of behavior that produces many of
           | the lies that humans tell everyday. The "constraints of the
           | model" are synonymous with the constraints of a person's
           | knowledge of the world (which is their model).
        
           | smrtinsert wrote:
           | Exactly. ChatGPT describes a universe recreated using
           | probabilities. Caveat emptor.
        
           | nemo44x wrote:
            | Correct. ChatGPT is a _bullshitter_, not a liar. A
           | bullshitter isn't concerned with facts or truth or anything.
           | A liar is concerned with concealing the truth.
           | 
           | Bullshitters are actually probably worse than liars because
           | at least liars live in the same reality as honest people.
        
             | whitemary wrote:
             | Liars (humans) are often unconcerned with facts or truth
             | either. More often these days, they are only concerned with
             | profit. Are unconfirmed lies in the pursuit of mere profit
             | somehow not lies?
        
               | nemo44x wrote:
               | This book [1] covers it.
               | 
               | [1] https://en.m.wikipedia.org/wiki/On_Bullshit#:~:text=T
               | he%20li....
        
               | whitemary wrote:
               | No, it does not.
        
         | awesome_dude wrote:
         | Yes, and this points to the real problem that permeates through
         | a lot of our technology.
         | 
         | Computers are dealing with a reflection of reality, not reality
         | itself.
         | 
         | As you say, AI has no understanding that "double-check" refers
         | to an action that needs to take place; it just knows that the
         | words exist.
         | 
         | Another big and obvious place this problem is showing up is
         | Identity Management.
         | 
         | The computers are only seeing a reflection - the information
         | associated with our identity, not the physical reality of the
         | identity. That's why we cannot secure ourselves much beyond
         | passwords: MFA is really just more information that we make
         | harder to emulate, but it is still just bits and bytes to the
         | computer, and the origin is impossible for it to ascertain.
        
         | [deleted]
        
           | [deleted]
        
         | jiggawatts wrote:
         | There are systems built _on top of_ LLMs that can reach out to
         | a vector database or do a keyword search as a plug in. There's
         | already companies selling these things, backed by databases of
         | real cases. These work as advertised.
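         | 
         | Roughly, such a retrieval-backed setup can be sketched in a
         | few lines of Python; search_case_db, db and llm below are
         | hypothetical stand-ins, not any vendor's real API. The
         | citations come from a database of real cases, and the model
         | is only asked to restate what was retrieved.
         | 
         |       # Sketch only; the names here are hypothetical.
         |       def search_case_db(query, db):
         |           # naive keyword match over a list of real case dicts
         |           q = query.lower()
         |           return [c for c in db if q in c["summary"].lower()]
         | 
         |       def answer_with_citations(question, db, llm):
         |           cases = search_case_db(question, db)
         |           if not cases:
         |               return "No supporting cases found."
         |           ctx = "\n".join(c["name"] + ": " + c["summary"]
         |                           for c in cases)
         |           # the model only restates cases that really exist
         |           return llm("Answer using only these cases:\n" + ctx
         |                      + "\n\nQuestion: " + question)
         | 
         | If the retrieval step finds nothing, the system says so
         | instead of inventing a citation - which is exactly the
         | property the filing in this story needed.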
         | 
         | If you go to ChatGPT and just ask it, you'll get the equivalent
         | of asking Reddit: a decent chance of someone writing you some
         | fan-fiction, or providing plausible bullshit for the lulz.
         | 
         | The real story here isn't ChatGPT, but that a lawyer did the
         | equivalent of asking online for help and then didn't bother to
         | cross check the answer before submitting it to a judge.
         | 
         | ...and did so while ignoring the disclaimer that's there
         | _every time_ warning users that answers may be hallucinations.
         | A lawyer. Ignoring a four-line disclaimer. _A lawyer!_
        
           | ComputerGuru wrote:
           | > If you go to ChatGPT and just ask it, you'll get the
           | equivalent of asking Reddit: a decent chance of someone
           | writing you some fan-fiction, or providing plausible bullshit
           | for the lulz.
           | 
           | I disagree. A layman can't troll someone from the industry,
           | let alone a subject matter expert, but ChatGPT can. It knows
           | all the right shibboleths and appears to have the domain
           | knowledge, then gets you in your weak spot: individual
           | plausible facts that just aren't true. Reddit trolls
           | generally troll "noobs" asking entry-level questions, or
           | other readers. That's also why trolls like that exist on
           | Reddit but not Stack Overflow, and why SO has a hard ban on
           | AI-generated answers: the existing controls to defend
           | against that kind of trash answer rely on sniff tests that
           | ChatGPT passes handily until put to actual scrutiny.
        
           | ytreacj wrote:
           | [dead]
        
         | jonplackett wrote:
         | If they wanted a 'double' check, then perhaps they should also
         | have checked it themselves? I'm sure it would have been
         | trivially easy to check that this was a real case.
         | 
         | I heard someone describe the best things to ask ChatGPT to do
         | are things that are HARD to do, but EASY to check.
        
         | Buttons840 wrote:
         | GPT4 can double-check to an extent. I gave it a sequence of 67
         | letter As and asked it to count them. It said "100", I said
         | "recount": 98, recount, 69, recount, 67, recount, 67, recount,
         | 67, recount, 67. It converged to the correct count and stayed
         | there.
         | 
         | This is quite a different scenario though, tangential to your
         | [correct] point.
        
           | kordlessagain wrote:
           | The example of asking it things like counting or sequences
           | isn't a great one because it's been solved by asking it to
           | "translate" to code and then run the code. I took this up as
           | a challenge a while back with a similar line of reasoning on
           | Reddit (that it couldn't do such a thing) and ended up
           | implementing it in my AI web shell thing.
           | 
           |       heavy-magpie|> I am feeling excited.
           |       system=> History has been loaded.
           |       pastel-mature-herring~> !calc how many Ns are in
           |           nnnnnnnnnnnnnnnnnnnn
           |       heavy-magpie|> Writing code.
           |       // filename: synth_num_ns.js
           |       // version: 0.1.1
           |       // description: calculate number of Ns
           |       var num_ns = 'nnnnnnnnnnnnnnnnnnnn';
           |       var num_Ns = num_ns.length;
           |       Sidekick("There are " + num_Ns + " Ns in "
           |           + num_ns + ".");
           |       heavy-magpie|> There are 20 Ns in nnnnnnnnnnnnnnnnnnnn.
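           | 
           | In other words: don't ask the model for the answer, ask it
           | for a program, run the program, and report the program's
           | output. A minimal Python sketch of that loop, where
           | ask_model is a hypothetical stand-in for whatever LLM call
           | you use:
           | 
           |       # Sketch of the "translate to code, then run it" loop;
           |       # ask_model is a hypothetical stand-in, not a real API.
           |       def calc_via_code(question, ask_model):
           |           prompt = ("Write a single Python expression that "
           |                     "answers: " + question +
           |                     "\nReturn only the expression.")
           |           expr = ask_model(prompt)
           |           # run the generated code instead of trusting prose
           |           # (running model output is itself risky, of course)
           |           return eval(expr, {"__builtins__": {}})
           | 
           | That's essentially what the transcript above is doing with
           | the generated .js file.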
        
             | mikestew wrote:
             | Shouldn't the answer be zero?
        
           | einpoklum wrote:
           | But would GPT4 actually check something it had not checked
           | the first time? Remember, telling the truth is not a
           | consideration for it (and probably isn't even modeled), just
           | saying something that would typically be said in similar
           | circumstances.
        
             | simonw wrote:
             | Only in as much as there's an element of randomness to the
             | way GPT responds to a prompt - so you can re-run
             | effectively the same prompt and get a different result
             | depending on the outcome of several hundred billion
             | floating point calculations with a random seed thrown in.
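             | 
             | The variability is just sampling: the model produces a
             | probability for each possible next token and one gets
             | picked at random. A toy illustration in Python (made-up
             | numbers, not GPT's real distribution or code):
             | 
             |       import random
             |       # made-up next-token probabilities
             |       probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}
             |       def sample(probs, rng):
             |           r, total = rng.random(), 0.0
             |           for tok, p in probs.items():
             |               total += p
             |               if r <= total:
             |                   return tok
             |           return tok
             |       for seed in range(5):
             |           print(sample(probs, random.Random(seed)))
             |       # different seeds give different "answers"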
        
         | simonw wrote:
         | Yeah, that was my conclusion too:
         | 
         |  _What's a common response to the question "are you sure you
         | are right?"--it's "yes, I double-checked". I bet GPT-3's
         | training data has huge numbers of examples of dialogue like
         | this._
        
           | jimsimmons wrote:
           | They should RLHF this behaviour out.
           | 
           | Asking people to be aware of limitations is in a similar
           | vein to asking them to read the ToC.
        
             | coffeebeqn wrote:
             | If the model could tell when it was wrong, it would be
             | GPT-6 or 7. I think the best GPT-4 could do is maybe detect
             | when things enter the realm of the factual or mathematical,
             | etc., and use an external service for that part.
        
               | jimsimmons wrote:
               | You have no basis to make that claim.
               | 
               | My point was a lot more subtle: if someone asks things
               | like "double check it" or "are you sure", you can provide
               | a template "I'm just an LM" response.
               | 
               | I'm not expecting the model to know what it doesn't know.
               | I'm not sure some future GPT variant can either
        
           | fortyseven wrote:
           | The chat thread we are currently engaging in will most likely
           | be inside a model within the next year, too.
           | 
           | (Fortyseven is an alright dude.)
        
         | la64710 wrote:
         | ChatGPT did exactly what it is supposed to do. The lawyers who
         | cited its output are fools, in my opinion. Of course, OpenAI
         | is also an irresponsible company for enabling such a powerful
         | technology without adequate warnings. With each ChatGPT
         | response they should provide citations (like Google does) and
         | a clearly visible disclaimer that what it just spewed may be
         | utter BS.
         | 
         | I only hope the judge issues an order for all AI companies to
         | include the above-mentioned disclaimer with each of their
         | responses.
        
           | mulmen wrote:
           | The remedy here seems to be expecting lawyers to do their
           | jobs. Citations would be nice but I don't see a reason to
           | legislate that requirement, especially from the bench. Let
           | the market sort this one out. Discipline the lawyers using
           | existing mechanisms.
        
           | jprete wrote:
           | There's no possible adequate warning for the current state of
           | the technology. OpenAI could put a visible disclaimer after
           | every single answer, and the vast majority would assume it
           | was a CYA warning for purely legal purposes.
        
           | lolinder wrote:
           | I have to click through a warning on ChatGPT on every
           | session, and every new chat comes primed with a large set of
           | warnings about how it might make things up and please verify
           | everything.
           | 
           | It's not that there aren't enough disclaimers. It just turns
           | out plastering warnings and disclaimers everywhere doesn't
           | make people act smarter.
        
       | leereeves wrote:
       | In the other discussion of this topic, a lot of people said the
       | lawyer should be disbarred, but personally I think people should
       | be able to trust the tools marketed by trillion dollar tech
       | companies, and a lot of the blame should be placed on
       | Microsoft/OpenAI for overhyping ChatGPT and understating how
       | likely it is to mislead.
       | 
       | And every response from ChatGPT should be preceded by a warning
       | that it cannot be trusted.
        
         | LelouBil wrote:
         | There is a warning each time you create a new thread, and
         | always at the bottom of the page.
         | 
         | I think people should check (on the same page as the tool
         | itself) if the tool advertises itself as unreliable.
        
         | simonw wrote:
         | > _And every response from ChatGPT should be preceded by a
         | warning that it cannot be trusted._
         | 
         | It kind of is - the ChatGPT site has this as a permanent
         | fixture in the footer:
         | 
         | > _ChatGPT may produce inaccurate information about people,
         | places, or facts._
         | 
         | That's arguably ineffective though - even lawyers evidently
         | don't read the small print in the footer!
        
           | leereeves wrote:
           | Worse, it's buried in the middle of other fine print:
           | 
           | > Free Research Preview. ChatGPT may produce inaccurate
           | information about people, places, or facts. ChatGPT May 24
           | Version
           | 
           | And it really understates the problem. It should say:
           | _Warning! ChatGPT is very likely to make shit up._
        
           | bobsmooth wrote:
           | It's right there on the home page under "Limitations"
           | 
           | "May occasionally generate incorrect information"
           | 
           | Everyone knows gasoline is flammable but there's still people
           | that smoke while filling their gas tank.
        
           | rsynnott wrote:
           | Monty Python nicely addressed this, over 50 years ago.
           | 
           | > Mr. Hilton: Oh, we use only the finest juicy chunks of
           | fresh Cornish ram's bladder, emptied, steamed, flavoured with
           | sesame seeds, whipped into a fondue, and garnished with
           | lark's vomit.
           | 
           | > Inspector: LARK'S VOMIT?!?!?
           | 
           | > Mr. Hilton: Correct.
           | 
           | > Inspector: It doesn't say anything here about lark's vomit!
           | 
           | > Mr. Hilton: Ah, it does, on the bottom of the box, after
           | 'monosodium glutamate'.
           | 
           | > Inspector: I hardly think that's good enough! I think it'd
           | be more appropriate if the box bore a great red label:
           | 'WARNING: LARK'S VOMIT!!!'
           | 
           | > Mr. Hilton: Our sales would plummet!
           | 
           | https://youtu.be/3zZQQijocRI
           | 
           | Really, it should open every conversation with "by the way, I
           | am a compulsive liar, and nothing I say can be trusted". That
           | _might_ get through to _some_ users.
        
             | throwaway675309 wrote:
             | Humor aside, I disagree. There are basically three types
             | of people: the ones who learn by reading, the few who
             | learn by observation, and the rest, who just have to pee
             | on the electric fence for themselves.
        
           | ajuc wrote:
           | Especially lawyers.
           | 
           | Half the job of lawyers is making people add useless
           | warnings to everything, which then everybody ignores.
           | 
           | May contain sesame. Your mileage may vary. All the characters
           | are fictional.
        
         | [deleted]
        
       | hristov wrote:
       | It is very concerning how people are anthropomorphizing chat gpt.
       | It will get a lot of people into trouble. The media is largely to
       | blame for this. Never mind most gushing media stories about AI;
       | even the few stories criticizing AI treat it as a human. Chat GPT
       | is racist, they say, or sexist, or a liar.
       | 
       | Well, it is none of these things, because all of the above
       | require consciousness and intent, and it has none. It is not
       | human, it is not any type of conscious being; do not treat it as
       | such.
       | 
       | It sticks together sentences based on existing language scanned
       | in from the internet and millions of other sources. What it says
       | depends on what someone else said sometime ago on some random
       | forum on the internet, or some book or some other source stored
       | in an available database. It is also programmed to sound
       | extremely sure of itself, unless you flat out say it is
       | incorrect, in which case it will immediately admit fault and
       | apologize. Thus, asking it if it is sure is pointless.
       | 
       | Let me tell you a less disastrous and quite a bit funnier story.
       | A friend of mine used chat GPT for coding. My friend became
       | really trustful of chat gpts coding prowess and asked it if it
       | could just send him the code in file form, so he did not have to
       | worry about copying and pasting which apparently screwed up the
       | formatting somehow. Chat gpt helpfully told him that it could
       | send the code to github, and my friend could download the files
       | from there. My friend said, that is perfect.
       | 
       | So chatgpt gave him a github account name and said the file it
       | had created for my friend could be found there. My friend looked
       | but github said that account had long ago been closed. My friend
       | tried variations of that account name with different
       | capitalizations, etc., but found nothing.
       | 
       | He went back and complained to chat gpt. Chat gpt dutifully
       | apologized and sent him another account name. He again spent
       | time looking for the account and for variations, etc. Again,
       | the account had been closed.
       | 
       | This happened a couple more times, and in the end my friend
       | gave up and complained to me. "Why is chat GPT doing this to me?
       | Is it mocking me? Is it getting its kicks from sending me on
       | random wild goose chases?"
       | 
       | I had to explain to him that no, chat gpt is not human, and it is
       | not mocking him. What probably happened is someone on some forum
       | asked someone else on that forum to provide him with code in
       | files. The responder then offered to put the files on github and
       | provided an account name. When my friend asked a similar
       | question, chat gpt matched up the questions and provided a
       | similar answer. When my friend said that a particular account did
       | not work, chat gpt scoured the web for other people mentioning
       | their github account names and provided some of those.
       | 
       | So whenever you use chat gpt, remember that it is mostly a
       | glorified search engine. It will spit out information it has
       | found somewhere that it calculates as matching your question. Do
       | not attribute intent, feelings, or any type of consciousness to
       | it.
        
         | crooked-v wrote:
         | > information it has found somewhere
         | 
         | I think it's worth noting here that, without plugins active,
         | ChatGPT doesn't 'find' anything - everything is just baked in
         | as a single giant blob of vector data. That's why it has a
         | specific date cutoff as to what it 'knows'.
        
         | bobsmooth wrote:
         | It's really disconcerting how many people don't understand that
         | these LLMs are programs not people.
        
           | einpoklum wrote:
           | So Picard lied to me ?!
           | 
           | https://www.youtube.com/watch?v=lX3CpHa4sUs
           | 
           | https://memory-
           | alpha.fandom.com/wiki/The_Measure_Of_A_Man_(e...
        
       | ubj wrote:
       | > No, it did not "double-check"--that's not something it can do!
       | And stating that the cases "can be found on legal research
       | databases" is a flat out lie. What's harder is explaining why
       | ChatGPT would lie in this way. What possible reason could LLM
       | companies have for shipping a model that does this?
       | 
       | At what point does OpenAI (or any other company) become legally
       | responsible for this kind of behavior from their LLM's? I'm not
       | excusing the lawyer for their reckless and irresponsible use of a
       | tool they didn't understand, but it's becoming increasingly clear
       | that people are trusting LLM's far more than they should.
       | 
       | In my opinion it's dangerous to keep experimenting on the general
       | population without holding the experimenters accountable for the
       | harm that occurs.
        
         | yarg wrote:
         | Never?
         | 
         | Unless they advertise it as having the capability, it's got
         | nothing to do with them.
         | 
         | If I hit someone with a hammer, that shit's on me, not the
         | manufacturer.
        
         | jeroenhd wrote:
         | > At what point does OpenAI (or any other company) become
         | legally responsible for this kind of behavior from their LLM's?
         | 
         | When they sell their bots to areas where lying is illegal.
         | I.e., when a company pretends to do law.
         | 
         | OpenAI doesn't pretend ChatGPT is a lawyer and for good reason.
         | The lawyer who decided to outsource his work is an idiot and
         | can't shift blame to the tool he decided to abuse.
        
         | bobsmooth wrote:
         | >At what point does OpenAI (or any other company) become
         | legally responsible for this kind of behavior from their LLM's?
         | 
         | When AutoCAD is responsible for an architect's shitty design.
        
         | notatoad wrote:
         | OpenAI or any other company becomes liable when they market a
         | product to be used _in place of_ lawyers (or doctors,
         | engineers, or whatever other profession)
         | 
         | as long as we're hiring professionals to do these jobs, part of
         | that real actual human's job is to accept the liability for
         | their work. if a person wants to use a tool to make their job
         | easier, it's also their job to make sure that the tool is
         | working properly. if the human isn't capable of doing that,
         | then the human doesn't need to be involved in this process at
         | all - we can just turn the legal system over to the LLMs. but
         | for me, i'd prefer the humans were still responsible.
         | 
         | in this case, "the experimenter" was the lawyer who chose to
         | use ChatGPT for his work, not OpenAI for making the tool
         | available. and yes, i agree, the experimenter should be held
         | accountable.
        
       | dataviz1000 wrote:
       | Do the machines in the Matrix lie?
        
       | nine_k wrote:
       | We can hope to start to actually _rely_ on such models once they
       | start learning not only in the language domain, but also in the
       | epistemic domain. True vs false, known vs unknown, precise vs
       | vague, agreement vs contradiction vs unrelated, things like that.
       | 
       | Achieving that is going to be a serious technical, and also
       | philosophical, challenge for humans.
       | 
       | Today's LLMs are a literary device. They say what sounds plausible
       | in the universe of texts they were fed. What they say technically
       | isn't even wrong, because they have no notion of truth, or any
       | notion of a world beyond the words. Their output should be judged
       | accordingly.
        
       | macrael wrote:
       | Genuine question: why have these models all been trained to sound
       | so confident? Is it not possible to have rewarded models that
       | announced their own ignorance? Or does even that question betray
       | an "intelligence" view of these models that isn't accurate?
        
         | crooked-v wrote:
         | The problem is that the model doesn't know if anything it's
         | saying is true or false, so trying to make it 'fact check' just
         | means it will constantly interrupt itself regardless of the
         | accuracy of the output.
        
         | lowbloodsugar wrote:
         | I think you are confusing ChatGPT with AI. ChatGPT is a
         | statistical fiction generator. It sounds confident because it
         | is writing fiction. It sounds confident for precisely the same
         | reason that the billions of ignorant people worldwide who post
         | "facts" online sound confident: they are incapable of
         | understanding their ignorance. They are just systems that have
         | inputs and then generate outputs.
        
       | tiahura wrote:
       | Chatgpt did this to me the other day. It gave me the right answer
       | about passenger assumption of risk with a drunk driver, but the
       | cites were imagined.
        
       | londons_explore wrote:
       | If I were the judge in this case, I would be looking to throw
       | this lawyer in prison for a month, and ban him from ever being a
       | lawyer again...
       | 
       | Deliberately lying to the court, as a professional who should
       | understand the consequences, in a way likely to not be detected,
       | and likely to change the outcome of the case, ought to be met
       | with a really strict punishment.
        
         | kevinpet wrote:
         | It's not clear he deliberately lied. He plausibly misunderstood
         | a new tool he had recently been introduced to.
        
           | basilgohar wrote:
           | I am curious: if an architect or an engineer did this for a
           | design and submitted it, would we extend to them the same
           | "reasonable" explanation and excuse, especially if someone's
           | life were on the line?
           | 
           | Interestingly, it's exactly the same in court! People's lives
           | are put on the line all the time, and lawyers also sometimes
           | flat out lie. This just further indicts the current legal
           | system: it doesn't really "work"; it's just that the mistakes
           | are often covered up well enough that most people forget
           | about them and move on to something else.
        
           | mod50ack wrote:
           | As a lawyer, though, he has a responsibility for the contents
           | of his filings. And there are legal standards for how to
           | research and verify these things -- which he didn't follow.
        
           | astrea wrote:
           | And he didn't bother to verify the facts before he presented
           | them in court? That's scientifically/logically/professionally
           | unsound and irresponsible.
        
             | konschubert wrote:
             | But it's not deliberately lying
        
               | roywiggins wrote:
               | If you sign you name under "I declare under penalty of
               | perjury that the foregoing is true and correct." then you
               | absolutely have an obligation to check that your facts
               | are correct.
        
               | DangitBobby wrote:
               | Then no one would ever sign. That's why it says "to the
               | best of my knowledge" as part of the statement.
        
         | zouhair wrote:
         | Punishment is not technically the point of prisons. Your
         | unhinged comment will pass as normal to so many people who
         | think punishment is the point and that's scary as hell.
        
           | projectileboy wrote:
           | At least in the US, I would argue your reply is not totally
           | correct. The US has a long history of not really knowing if
           | it wants prison to be punishment, rehabilitation, or both.
        
           | noahlt wrote:
           | Punishment is absolutely one purpose of prisons. Your
           | unhinged comment will pass as normal to many people who don't
           | believe in retribution, and that's why you all belong in
           | Hell.
        
       | shon wrote:
       | The judge may not be amused, but we are.
        
       | Kim_Bruning wrote:
       | Current LLM workflow is the epitome of "trust but verify".
       | 
       | You need to hand-verify at some point in the process.
       | 
       | This does end up losing you some of the time you gained by using
       | an LLM in the first place. Fortunately you often do still come
       | out ahead.
        
         | CPLX wrote:
         | Honestly it's the opposite. I go with don't trust at all,
         | extract everything that saves you time.
         | 
         | It's just not a source of truth at all, it's a source of raw
         | material.
        
       | metadat wrote:
       | Discussed today:
       | 
       |  _A man sued Avianca Airline - his lawyer used ChatGPT_
       | 
       | https://news.ycombinator.com/item?id=36095352 (100 comments)
        
         | dpifke wrote:
         | Also: https://news.ycombinator.com/item?id=36092509
        
           | metadat wrote:
           | Thanks! Macro-expanded:
           | 
           |  _ChatGPT-Authored Legal Filing "Replete with Citations to
           | Non-Existent Cases "_
           | 
           | https://news.ycombinator.com/item?id=36092509 (67 comments, 1
           | day ago)
        
             | _Microft wrote:
             | dang is doing that frequently - but what does "macro-
             | expanded" mean for normal users? Do you have a bookmarklet,
             | extension or something similar for this or is there any
             | secret incantation that makes HN generate such an entry
             | automatically?
        
               | metadat wrote:
               | I just do it manually.. I've gotten pretty efficient at
               | it, takes only a moment.
               | 
               | Imo including title, timeline / age, and comment volume
               | provides helpful context to readers (I always appreciate
               | when others do this, rather than, in the most severe
               | cases, leaving a wall of unadorned HN links).
               | 
               | Cheers _Microft (and cool username, btw ;D)
        
               | _Microft wrote:
               | Thanks for shedding light on this and thanks for the
               | compliment! Have a nice day
        
       | rsynnott wrote:
       | See, this is the sort of thing that makes me suspicious about the
       | current AI hype (well, that, plus it's largely the same people
       | who were going on about cryptocurrency changing the world a few
       | years ago); this stuff just isn't very _useful_. Like, if writing
       | plausible lies is the 2020s AI boom's killer app, well, that's
       | not great, really.
        
         | simonw wrote:
         | I disagree.
         | 
         | I've been tracking the many, many flaws in AI pretty closely (I
         | wrote this article, and a bunch more in this series:
         | https://simonwillison.net/series/llm-misconceptions/)
         | 
         | And yet... I'm finding ChatGPT and the like _wildly_ useful on
         | a personal level.
         | 
         | I think they're deceptively hard to use: you have to put in
         | effort to learn them, and to learn how to avoid the many traps
         | they set for you.
         | 
         | But once you've done that you can get very real productivity
         | boosts from them. I use ChatGPT a dozen or so times a day, and
         | I would be very sad to not have access to it any more.
         | 
         | I wrote a bit more about that here:
         | https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen...
         | - and if anything this effect has got even stronger for me over
         | the two months since I wrote that.
        
           | isp wrote:
           | Well put.
           | 
           | I stand by this comment:
           | 
           | > Catch-all comment for all ChatGPT use cases:
           | 
           | > (1) Stunning tech demo, a vision of the future today
           | 
           | > ... yet ...
           | 
           | > (2) There are so many sharp edges that I'm not brave
           | (foolhardy?) enough to blindly trust the output
           | 
           | (https://news.ycombinator.com/item?id=35068386)
        
         | whimsicalism wrote:
         | You will see :)
        
         | jeroenhd wrote:
         | Current AI is very useful in cases where the truth isn't very
         | important. It's amazing for things like fantasy stories, video
         | games. It also works in cases where existing tools can validate
         | the bullshit AI generates (i.e. Copilot).
         | 
         | In cases where facts and sources are important, AI cannot be
         | trusted. You can use it as long as you validate every single
         | word it outputs, but at that point I do wonder what the point
         | of using AI was in the first place.
         | 
         | It's also good at taking other existing work and creating new
         | work out of it; not just for smart autocomplete tools like
         | GPTs, but also for things like Stable Diffusion. Again, AI is
         | incapable of attribution of sources, so that comes with obvious
         | downsides, but in cases where the creator of the model has the
         | necessary rights so they don't _need_ attribution to sell work
         | (e.g. stock photo companies), it can be quite useful for
         | generating things like filler images.
        
         | garganzol wrote:
         | AI is definitely useful for creative fields. Take a look at the
         | recent "generative fill" feature from Adobe, just watch that
         | video. AI is also super useful for math and programming -
         | because it gives ideas. They may be flawed a bit, they may miss
         | crucial details, but sometimes that creative hallucinating
         | perspective is all you need to get the job done.
        
         | shakow wrote:
         | IMHO, like many brand-new technologies, the AI hype is
         | overblown, but it will still boil down to many useful things
         | once the dust has settled.
         | 
         | Just as we got no free ambient electricity in 1890, no flying
         | cars in 1950, and no talking robots in 1980, we still have a
         | very robust electricity network, a car per household, and
         | automated assembly lines.
        
       | kordlessagain wrote:
       | This is why it is very important to have the prompts fill in
       | relevant fragments from a quality corpus. That people think these
       | models "tell the truth" or "hallucinate" is only half the story.
       | It's like expecting your language center to know all the facts
       | your visual consciousness contains, or your visual consciousness
       | to be able to talk in full sentences. It's only when all models
       | are working well together that the truth emerges.
        
         | metaphor wrote:
         | > _That people think these models "tell the truth" or
         | "hallucinate" is only half the story._
         | 
         | A meta-problem here is in choosing to use descriptive phrases
         | like _tell the truth_ and _hallucinate_, which are human
         | conditions that further anthropomorphize technology with no
         | agency, making it more difficult for layman society to defend
         | against its inherent fallibility.
         | 
         |       UX = P_Success*Benefit - P_Failure*Cost
         | 
         | It's been well over a decade since I learned of this deviously
         | simple relationship from UX expert Johnny Lee, and yet with
         | every new generation of tech that has hit the market since,
         | it's never surprising how the hype cycle results in a brazen
         | dismissal of the latter half.
        
         | [deleted]
        
       | manquer wrote:
       | Everyone is talking about ChatGPT, but is it not possible to
       | train a model on only actual court documents, keep the "temp"
       | low, and get accuracy levels as high as or better than humans?
       | 
       | Most legal (really, all formal) documents are very predictably
       | structured and should be easy to generate.
        
       ___________________________________________________________________
       (page generated 2023-05-27 23:00 UTC)