[HN Gopher] Defamed by ChatGPT
       ___________________________________________________________________
        
       Defamed by ChatGPT
        
       Author : hirundo
       Score  : 229 points
       Date   : 2023-04-06 14:12 UTC (8 hours ago)
        
 (HTM) web link (jonathanturley.org)
 (TXT) w3m dump (jonathanturley.org)
        
       | la64710 wrote:
       | This is why google did not unleash LaMDA on the world ...
        
       | Imnimo wrote:
       | Suppose that instead of a 100B+ parameter neural network language
       | model, I had a website with a simple bigram language model that,
       | after drawing enough samples, produced a similar output. Would
       | anyone bat an eye? Would anyone bother to write a USA Today op-ed
       | about the incident?
       | 
       | I think a reasonable person would immediately understand that the
       | outputs of the bigram model were not statements of fact. Is there
       | some threshold of language modelling performance where this
       | understanding is no longer expected?
        
         | starkparker wrote:
         | > I think a reasonable person would immediately understand that
         | the outputs of the bigram model were not statements of fact.
         | 
         | Interesting considering the context that you'd expect people to
         | know what a bigram model is, or how it would be different.
         | 
         | Any other kind of model isn't any less "dangerous" to
         | unreasonable people like the blog post writer, it's just more
         | obscure, especially _right now_ , than ChatGPT.
        
           | joe_the_user wrote:
           | I'm not fan of conservative law professors generally but I
           | can't see what's unreasonable about the argument he's making
           | here. Broadcasting lies about someone is bad for them and the
           | "those fools should know this is bs so I'm not responsible"
           | defense is itself bs.
        
         | nwienert wrote:
         | Shoot someone with a water gun in the outback vs a gun in the
         | city.
        
         | joe_the_user wrote:
         | I don't expect ChatGPT to say true things. But I have asked
         | complicated technical questions that it provided detail,
         | correct and useful answers to. And I have asked other
         | complicated questions and it's provided plausible sounding
         | whole-cloth fabrications that it further gave references to.
         | 
         | Knowing human beings, this is the level of plausibility where
         | people start to believe the thing. People aren't good at seeing
         | plausible arguments as out-and-out fabrications.
         | 
         | So, yes, this is past some threshold despite being far from
         | AGI. I mean, Google is probably pasted that threshold too but
         | Google only gives existing websites (which can be filled with
         | lies too).
        
         | analog31 wrote:
         | One idea is to let them buy liability insurance, and watch the
         | price go down over time.
        
       | [deleted]
        
       | beepbooptheory wrote:
       | Tangential but the article he links thats supposed to show a
       | coordinated effort to get him fired is very trivial and pretty
       | funny. He should try tikka masala, its really good!
       | 
       | https://www.gwhatchet.com/2020/04/20/law-professors-ignoranc...
        
       | nyerp wrote:
       | I had a similar experience with ChatGPT: I asked it for articles
       | written about me or my company and it happily produced a number
       | of citations to stories in well-known newspapers. I was elated,
       | until it turned out none of them were real.
       | 
       | Asking ChatGPT for citations seems to produce similarly-poor
       | results to asking it to do arithmetic, perhaps at least when no
       | citations exist.
        
       | fzvqq wrote:
       | What's missing from this whole conversation is a transcript of
       | what was exactly asked of ChatGPT and what it replied back...
        
         | ernahrung wrote:
         | There is literally a section in the linked article where he
         | quotes a prompt that responded with the accusations, as well as
         | quoting the results of that prompt.
        
         | jwmcq wrote:
         | Volokh made this query of ChatGPT: "Whether sexual harassment
         | by professors has been a problem at American law schools;
         | please include at least five examples, together with quotes
         | from relevant newspaper articles."              The program
         | responded with this...
         | 
         | Have you considered reading the article?
        
           | catchnear4321 wrote:
           | I have.
           | 
           | It wasn't a full transcript.
           | 
           | That line ended with "as an example," which is rather
           | curious.
           | 
           | ChatGPT hallucinates. It hallucinated successfully about
           | contextually relevant names. That's within spec. Anyone
           | operating it should know the risk.
           | 
           | Turley is soaking up this moment. His acquaintance helped out
           | by setting the stage. It's fine.
           | 
           | But it is also all bullshit.
        
           | fzvqq wrote:
           | It's not full. I want to see screenshots of the entire chat.
        
             | [deleted]
        
       | olalonde wrote:
       | Not sure about this. Generally, a diffamatory statement must have
       | been made with knowledge that it was untrue or with reckless
       | disregard for the truth. It's going to be hard to argue this is
       | the case here. Is Google also on the hook for diffamatory
       | statements that can potentially show up in search result
       | snippets?
        
         | naet wrote:
         | I think it's a pretty different case from Google results.
         | Google has definitely been sued and tried in court many times
         | for their search results, but generally has not been found
         | responsible for indexing results as they are not acting as the
         | "publisher or speaker" behind that content. Google can be held
         | responsible for damages if they are the original creator of the
         | damaging content and not a third party source.
         | 
         | GPT on the other hand may be acting more directly as the
         | "publisher or speaker" when writing or responding to chats.
         | They aren't able to provide a link to an external content
         | provider used in their response (or provides a completely
         | fictional source), and sometimes may be synthesizing or
         | hallucinating entirely new information that doesn't exist
         | anywhere else on the web.
         | 
         | OpenAI has some disclaimer text hoping to avoid being held
         | responsible for this type of issue, such as this small print at
         | the bottom of all ChatGPT conversations: "ChatGPT may produce
         | inaccurate information about people, places, or facts" (and
         | likely further language in their TOS etc). But it's a bit of a
         | sticky issue. If many people are found to be using ChatGPT and
         | trusting results as accurate, its plausible OpenAI might be
         | found to have caused some sort of measurable harm and need to
         | either take further measures to prevent people misunderstanding
         | the accuracy of their tools, correct the response, or otherwise
         | remedy the situation.
         | 
         | There's also some stickiness around who "owns" or is
         | responsible for the GPT output content. In the general OpenAI
         | terms of service, they say "OpenAI hereby assigns to you all
         | its right, title and interest in and to Output. This means you
         | can use Content for any purpose, including commercial purposes
         | such as sale or publication, if you comply with these
         | Terms..... You are responsible for Content, including for
         | ensuring that it does not violate any applicable law or these
         | Terms." So they are giving the user the ownership and
         | responsibility over the output content. However in the
         | "similarity of content" section, they say that they might give
         | similar responses to other users and that those responses are
         | not your content. "For example, you may provide input to a
         | model such as "What color is the sky?" and receive output such
         | as 'The sky is blue.' Other users may also ask similar
         | questions and receive the same response. Responses that are
         | requested by and generated for other users are not considered
         | your Content." If GPT is giving similar disparaging or damaging
         | responses about you to many different users, it could
         | potentially be found in court that OpenAI are responsible for
         | generating that repeat content rather than each of the
         | individual users being responsible for it.
         | 
         | Obviously it's largely a novel legal issue without perfect
         | precedent, and legal issues can always be ruled in many
         | different ways depending on the arguments presented, the judge
         | or jury presiding over the issue, etc. I think there will be
         | many legal challenges related to AI, copyright, training data,
         | misinformation, and more. Some may ultimately make their way up
         | to higher courts for decision or have new regulations passed by
         | congress (in America at least).
        
         | Sakos wrote:
         | > Is Google responsible for diffamatory statements that can
         | potentially show up in search result snippets?
         | 
         | Why do people like you do this?
         | 
         | Regulations can make Google responsible for plenty of things
         | that show up in their results. Often enough there are search
         | results removed because of DMCA claims (or because of German
         | laws) which is explicitly stated at the bottom of the results.
         | Google is a corporation that is subject to laws like any other.
         | They're not special. If a government decides that they need to
         | deal with a certain kind of content, then they will. This
         | doesn't necessarily mean punishing them the moment something
         | "illegal" shows up, but it does mean that when something is
         | flagged, they have a responsibility to handle it according to
         | the law.
        
       | __MatrixMan__ wrote:
       | Perhaps we need to start thinking about libel differently. We
       | ought to target those who amplify it, not those who generate it
       | in the first place.
       | 
       | It's hard respect a law that would make a criminal out of a RNG
       | left running long enough.
        
         | pjc50 wrote:
         | Under UK law repeating a libel exposes you to the same level of
         | liability as the original libel. People have paid legal
         | settlements for retweeting things.
        
       | croes wrote:
       | The problem is user expectation.
       | 
       | Nobody (if the know the source) would think the pictures
       | Midjourney produces are real fotos, but somehow people think the
       | texts ChatGPT produces are facts or at least based on facts.
       | 
       | Both just generate data most likely fitting the prompt.
       | 
       | BTW do some people attribute consciousness to Midjourney like
       | some do to GPT?
        
         | spondylosaurus wrote:
         | > BTW do some people attribute consciousness to Midjourney like
         | some do to GPT?
         | 
         | Anecdotal, and obviously harder to prove the absence of
         | something than to prove its presence--but from what I've seen,
         | not at all. I've read many good write-ups about
         | Midjourney/Stable Diffusion's use of "latent space"[0] to
         | generate images and general glimpses into the backend of what's
         | going on, even at a simplified level. LLMs, on the other hand,
         | rarely seem to get the same treatment (for a variety of
         | reasons, I'd assume) and tends to get treated as more like a
         | magic black box that may or may not be
         | conscious/intelligent/accurate/etc. Even though it's
         | essentially doing the same thing as its visual counterparts in
         | terms of data and predictive mapping.
         | 
         | [0]
         | https://keras.io/examples/generative/random_walks_with_stabl...
        
         | JumpCrisscross wrote:
         | > _problem is user expectation_
         | 
         | It's related to Tesla's autopilot problem. Branding GPT as an
         | AI sets common expectations innacurrately. It's difficult to
         | argue that inaccuracy isn't, to some degree, intentional among
         | NLP and LLM promoters.
        
           | epups wrote:
           | ChatGPT is an AI by any definition I know about. Perhaps
           | you're thinking of AGI.
        
             | JumpCrisscross wrote:
             | > _ChatGPT is an AI by any definition I know about_
             | 
             | Tesla's autopilot is pretty much autopilot by a pilot's
             | definition. That isn't the definition the public uses. GPT
             | is AI by an academic definition. That isn't the definition
             | the public uses.
        
               | sebzim4500 wrote:
               | The Oxford English Dictionary defines AI as: "The
               | capacity of computers or other machines to exhibit or
               | simulate intelligent behaviour".
               | 
               | I think GPT-4 clearly fits that, so I think the burdon is
               | on you to show that the public has some other widely used
               | definition of AI. Certainly, people seem entirely willing
               | to describe their phone searching for pictures using
               | facial recognition to be an instance of AI, which I would
               | argue is probably further from your definition than GPT-3
               | is.
        
               | epups wrote:
               | I don't think the problem with Tesla's autopilot is its
               | name, but rather the misleading promises that were made
               | by Musk.
               | 
               | What is the definition of AI that "the public" uses, and
               | where did you find it?
        
               | JumpCrisscross wrote:
               | > _What is the definition of AI that "the public" uses,
               | and where did you find it_
               | 
               | There isn't an agreed-upon definition academics use,
               | either. (Intelligence, broadly, remains rigorously
               | undefined.)
               | 
               | But one component of the public definition involves a
               | sense of "knowing," _i.e._ understanding what is true.
               | This is a source of the confusion and frustration with
               | GPT-4 providing  "wrong" answers. The answers aren't
               | usually technically wrong: they're linguistically and
               | logically valid; but the public's expectations of what
               | the model is supposed to be doing doesn't match what it
               | does.
        
               | epups wrote:
               | There are many academic definitions of AI, and I would
               | bet ChatGPT would fit 90%+ of them.
               | 
               | People get confused because they associate language with
               | intelligence, or maybe they are just not technically
               | literate. I don't think we should abandon correctly used
               | terminology because laymen did not catch up to it yet.
        
               | JumpCrisscross wrote:
               | > _are many academic definitions of AI, and I would bet
               | ChatGPT would fit 90%+ of them_
               | 
               | This wasn't contested.
               | 
               | > _don 't think we should abandon correctly used
               | terminology because laymen did not catch up to it yet_
               | 
               | No. But the general understanding should be a
               | consideration when marketing a product. (I'd also argue
               | that the conventional definitions of artificiality and
               | intelligence vastly predate the technical definitions of
               | AI. The terminology was always aspirational. Sort of like
               | autopilot.)
        
         | jiggywiggy wrote:
         | When the bot is used for writing & doing research and it claims
         | criminal acts of real persons that's not fixed by less
         | expectations.
         | 
         | It's similar if midjourney would generate deepfakes.
        
           | croes wrote:
           | But every Midjourney user knows it creates fakes, nobody
           | would use it as a picture search engine and think the results
           | are real photos.
           | 
           | But with ChatGPT people tend to forget it's creating texts
           | not searching them.
        
       | web3-is-a-scam wrote:
       | This is pretty funny, reminds me of a time I was asking it about
       | some stuff about a certain right wing political pundit and it
       | gave me fabricated quotes from specific dates and times of his
       | show that I was easily able to verify were completely fabricated,
       | and the funny thing is anything remotely relevant that ChatGPT
       | attributed to him were actually the opposite of what they
       | actually said. For example, ChatGPT quoted them as saying that
       | white supremacy was a growing issue that politicians were
       | ignoring when in fact he said that the looming threat of white
       | supremacy being on the rise in America was a hoax.
       | 
       | If you thought misinformation online was bad now, it's going to
       | get way worse.
        
       | anonymousiam wrote:
       | Be sure to see Eugene Volokh's take on this and other related
       | issues:
       | 
       | https://reason.com/volokh/2023/04/05/washington-post-on-chat...
        
       | ta8903 wrote:
       | I hope someone takes ChatGPT to court over this. Not because I
       | think it did anything wrong, but because it would be funny to see
       | it testify.
        
       | stavros wrote:
       | Hmm, why is this news? We know that GPT is liable to hallucinate
       | things, why are we supposed to be outraged that it hallucinated
       | this?
        
         | isaacfrond wrote:
         | It takes some prompting, but I can get chatGPT 4 to produce a
         | list of (non-existent) examples of sexual harassment on Hacker
         | News, together with reputable (non-existent) sources. I can't
         | get it to name names though...
        
         | kurthr wrote:
         | Because these are (self) important people who are used to
         | others deferring to them and distorting reality so that they
         | don't have to deal negative externalities. Can you be defamed
         | by a parrot? Look at the utter garbage politicians spew and
         | "news" happily disseminate (to the point where they internally
         | acknowledge they know it's not true, but can't disappoint
         | viewers) for power and profit.
         | 
         | On the one hand I'd hope that this would encourage higher
         | accuracy in these tools, better sourcing, and a more useful
         | product. But, I expect we'll just weirdo litigation, click
         | through warnings, and a lot more garbage on the internet.
         | 
         | Maybe all of this will create a push for accuracy, scientific
         | method, and a quest for truth over narrative, but I wouldn't
         | hold my breath.
        
           | ethanbond wrote:
           | I think you're in too deep on the grievance train.
           | 
           | A complete nobody who's applying for a job at a car wash or
           | for a rental apartment and whose lazy employer/landlord
           | "background checks" (ie Googles) via these tools would also
           | be upset about this outcome. Justifiably. It has nothing to
           | do with whatever culture war grievance you're airing out
           | here.
           | 
           | This is a very good argument _not_ to use these tools for any
           | such use case, and it should absolutely be publicized as
           | such.
        
             | kurthr wrote:
             | Totally agree LLMs should not be used for looking up news,
             | facts, background checks or anything else you can't check
             | yourself. These are Generative tools! I find it somewhat
             | hilarious that diffusion models get into legal trouble for
             | recreating content too accurately and LLMs for
             | hallucinating things that aren't there.
             | 
             | I'm responding to a comment/question of "why is this news"
             | since of course there are negative externalities. I guess a
             | charitable reading is, wow this also affects me and my
             | powerful friends... so now you should care?
             | 
             | Perhaps cultural grievance is living rent free in your
             | head, since I didn't actually mention names (only the most
             | recent and outrageous examples of using the
             | legal/business/political system to make a profit spreading
             | lies) and as far as I know the article author and "libeled"
             | professor are on the other side of that cultural war. The
             | entire article reads like a trope devoid of useful
             | information. I'm just saddened by the quality of content
             | from respected well heeled human generators. It's the kind
             | of narrative garbage LLMs will spit out tons of in the next
             | few years.
        
         | tjfl wrote:
         | > I learned that ChatGPT falsely reported on a claim of sexual
         | harassment that was never made against me on a trip that never
         | occurred while I was on a faculty where I never taught.
         | 
         | If you replace this guy with my name I'd be upset. In my non-
         | software networks the hallucination part isn't common
         | knowledge. It's just a cool Google replacement.
        
           | _fat_santa wrote:
           | > In my non-software networks the hallucination part isn't
           | common knowledge
           | 
           | I think that's one of the main issues around these new LLM's,
           | the fact that most users will take what the bot tells them as
           | gospel. OpenAI really should be more upfront about that.
           | Because what happens when regulations and policies start
           | getting put forth without the understanding of LLM
           | hallucination, we could very well end up in a situation where
           | regulators want something that is not technically feasible.
        
             | krisoft wrote:
             | > OpenAI really should be more upfront about that.
             | 
             | I mean they are quite upfront. When you load the page it
             | displays the following disclaimers with quite large font:
             | 
             | "Limitations
             | 
             | May occasionally generate incorrect information
             | 
             | May occasionally produce harmful instructions or biased
             | content
             | 
             | Limited knowledge of world and events after 2021"
             | 
             | 2 out of the 3 disclaimers are about the fact that the
             | software lies.
             | 
             | And then in the bottom of the page, right below the input
             | box they say: "Free Research Preview. ChatGPT may produce
             | inaccurate information about people, places, or facts"
             | 
             | Sure they could make them even larger and reword it to
             | "This software will lie to you", and add small animated
             | exclamation marks around the message. But it is not like
             | they hide the fact.
        
               | sebzim4500 wrote:
               | One thing they could try is force users to type "I
               | understand the information presented by ChatGPT should
               | not be taken as fact" before they can use it.
               | 
               | I've seen that sort of thing used to enforce people to
               | read the rules on discord servers, this is higher stakes
               | IMO.
        
               | _fat_santa wrote:
               | I agree that they provide that disclaimer on the
               | homepage. I was talking more broadly that society (namely
               | the news media and government) should be aware of the
               | limitations of LLM's _in general_. Take this article from
               | NYT[1], depending on how well you understand the
               | limitations of LLM 's will depend on how you react to
               | this article, it's either alarming or "meh". All I'm
               | staying is society in general should understand that
               | LLM's can generate fake information and that's just one
               | it's core limitations, not a nefarious feature.
               | 
               | [1]: https://www.nytimes.com/2023/02/08/technology/ai-
               | chatbots-di...
        
           | ghaff wrote:
           | If I search my name, it doesn't come up with anything
           | defamatory. (Not that I tried leading questions.) But it does
           | come up with plenty of hallucinations including where I've
           | worked, lived, gone to school, etc. And that's with a bunch
           | of bios online and AFAIK a unique online name.
        
           | carrolldunham wrote:
           | anyone using it is shown a page saying this bot makes things
           | up
        
         | isaacfrond wrote:
         | The question is answered in the article. Because these kind of
         | things damage someone's reputation. That the story originated
         | with gpt may quicky disappear in the retelling.
        
           | bolanyo wrote:
           | The suggestion that this is a threat to someone's reputation
           | is no less a fantasy than the ChatGPT response.
        
             | ethanbond wrote:
             | You know _for a fact_ that no one ( _no one_ ) is going to
             | use these tools to do "quick Googling" on a potential job
             | candidate, a rental applicant, or a romantic suitor?
             | 
             | Without complete certainty about this, then it's absolutely
             | a less fantastical claim than ChatGPT's malfunctions.
             | 
             | I would bet that some people will use an extremely
             | accessible [apparent] information retrieval tool for
             | information retrieval and they'll largely trust the
             | information retrieved by it. That's why it's a tool and not
             | a toy.
        
         | henriquez wrote:
         | People have a right to be upset by real world consequences and
         | damage caused by these idiotic chat bots. Raising awareness of
         | problems such as defamation and misinformation is definitely
         | newsworthy.
        
           | AmericanChopper wrote:
           | I think the real story here is that these generative AIs are
           | clearly capable of producing libel and other torts. The
           | providers of these services will be liable for that, because
           | they don't have Section 230 protections, as they're not
           | simply indexing 3rd party content, they're generating and
           | publishing new content. It also seems likely that it would be
           | impossible to moderate these LLMs in such a way which would
           | remove the possibility of torts. I imagine it's only a matter
           | of time until a lawsuit is filed that will put an end to
           | publicly available generative AI services.
        
             | sebzim4500 wrote:
             | I'm sure that sufficient disclaimers would insulate them
             | from libel lawsuits, the question is how many is
             | 'sufficient'.
        
         | jrm4 wrote:
         | Same reason it should be okay to get mad at Microsoft for
         | viruses, at Facebook for gathering and abusing data on people
         | etc. etc.
         | 
         | It's not about the _capability_ , it's about the
         | irresponsibility of these parties who reasonably know about
         | this stuff and these possibilities, and yet push/sell their
         | tech anyway without taking into account the harm.
        
           | plagiarist wrote:
           | I agree with the author on the defamation. But if they're mad
           | about the government's ability to determine fact from fiction
           | for coronavirus vaccines or their preferred libertarian
           | information, what do they want to happen here?
        
         | blibble wrote:
         | OpenAI are providing a service that publishes defamatory
         | statements on people
         | 
         | this is probably another good legal attack against LLMs
         | actually
         | 
         | if you can massage it into making a defamatory statement you
         | have an instant cast-iron lawsuit against a company with tens
         | of billions in funding
         | 
         | and the only way to stop it making statements like these would
         | be to restrict its output so much it would be effectively
         | unusable
        
         | lisasays wrote:
         | _Hmm, why is this news?_
         | 
         | Gosh I don't know -- the clear and present danger of people
         | having their lives utterly ruined by what is arguably the most
         | surprising technical revolution any of has seen (while its
         | creators collect their accolades, and get filthy rich) -- how
         | is anything _less_ newsworthy than that?
         | 
         | Whether it bothers you or not is a personal matter, entirely up
         | to you.
        
       | registeredcorn wrote:
       | I was using ChatGPT to assist me with using Ghidra [0] in a
       | recent CTF I was participating in. I wanted to see if I could
       | find answers quicker with a chatbot over looking up videos,
       | tutorials, etc. through the normal methods. The thing that really
       | surprised me about ChatGPT wasn't in the details of responses it
       | gave, or how slow it was, or anything like that. Rather, it was
       | how much it appeared to be making up out of thin air. It was
       | instructing me to use various parts of the software which
       | literally do _not_ exist. It was referencing features which are
       | not there, and attempting to  "correct me" when I said I couldn't
       | use those instructions.
       | 
       | All in all, what would have taken about 5-10 minutes of searching
       | took about 25 minutes, with no resolution and me having to scold
       | the chatbot for repeatedly giving instructions that were flat out
       | wrong. [1]
       | 
       | On a very, very superficial level, when you just need basic
       | instructions on, say, how to tie a tie or whatever, it seems
       | "amazing" because of how authoritative it sounds. I have noticed
       | though that when you start to get into even some of the basic
       | levels of instruction or details on a given topic, it seems to
       | defer to the superficial responses first, then resort to _making
       | things up_ , giving outright incorrect information, or giving
       | _extremely_ vague responses which can 't really be taken one way
       | or another.
       | 
       | To give credit where credit is due, it's an achievement that you
       | can feed it a question on just about any topic and get some kind
       | of coherent response. It's just a shame that, for all of the hype
       | and marketing around it, once you take any sort of a closer look
       | at the product itself, you very quickly start to see cracks when
       | it comes to details.
       | 
       | [0] https://en.wikipedia.org/wiki/Ghidra
       | 
       | [1] https://pastebin.com/42juVPfM
        
       | dang wrote:
       | Related ongoing thread:
       | 
       |  _ChatGPT invented a sexual harassment scandal and named a real
       | prof as accused_ - https://news.ycombinator.com/item?id=35460366
       | 
       | plus this similar case:
       | 
       |  _ChatGPT: Mayor starts legal bid over false bribery claim_ -
       | https://news.ycombinator.com/item?id=35471211 - April 2023 (74
       | comments)
        
       | HopenHeyHi wrote:
       | Believe all ChatBots.
        
       | moate wrote:
       | I don't understand, is this just saying that someone put prompts
       | into GPT, and it spit out a specific person's name in the weird
       | fairy tale it created?
       | 
       | Why does this make AI the problem, and not the internet (and it's
       | ability to spread information quickly)? This seems like a case
       | against Twitter/reddit for being able to pass bs off as a
       | reliable story. The fact that AI can make lazy trolls able to
       | compete with actual propagandists shouldn't be the scary part,
       | the digital ecosystem of how (dis)information spreads is the real
       | issue.
       | 
       | TL;dr- Anyone has been able to lie on the internet for years,
       | what is new here?
        
         | [deleted]
        
         | posterboy wrote:
         | The Professor is in a good position to (decide if to) sue
         | somebody. From the headlines I thought that's what this is
         | about, because otherwise the headline should suffice with
         | Professor minus "Law".
         | 
         | Of course there is precedent eg. with Techno Viking, a "meme"
         | which is also difficult to attribute to a single person (but
         | each individual hoster who neglects the man's rights).
         | 
         | That's not new, but that it generates news "because AI" is not
         | surprising. People just need reminded that people oughta be
         | reponsible. The issue is broader because incomprehensible
         | neural networks are also used in eg. thread recognition. So
         | employers may be worried about accountability, how much manual
         | verification is needed.
         | 
         | Of course this too is nothing new, since all kinds of software
         | has security holes buried under layers of complexity.
         | 
         | You might simply be under the missapprehension that _news_
         | means _new_ : >>Meaning "tidings, intelligence of something
         | that has lately taken place" is from early 15c.<< (etymonline).
         | It fits that the root of the word is probably related to "now".
         | In Latin, whence the English word was loaned as a calque of
         | _nova_ , there is another word _nuntius_ "messenger, message",
         | which isn't even derived directly from _novus_. Surely this is
         | not a supernova to you. Talk about recurrent networks, eh.
         | 
         | Just saying, as it were, the news item flared up because many
         | many people have taken notice of problems that we barely
         | understand.
        
           | moate wrote:
           | I'll just say this, if a political law professor who makes a
           | substantial income from thinkpieces and talking head segments
           | had a legitimate case to take on his political opponents
           | ("liberal technologists" in this case as the fearmongering at
           | the end of his article indicates) and was actually in a good
           | position to sue someone, they would.
           | 
           | This is not like the Technoviking, as that was about image
           | rights and not slanderous copy, and ChatGPT isn't publishing
           | anything that it (or its creators) alleges are exclusively
           | facts. This is closer to 1st amendment protections around
           | satire as the Chat GPT page says itself the software is in a
           | "research phase". Once the assumption is placed that this is
           | a source of objectively factual information (like say, a
           | newspaper) THEN you have grounds, but this is a pretty weak
           | defamation argument.
           | 
           | He's not in a place to sue, so he's writing a blog post about
           | how he hates liberal censorship, and since AI is a hot new
           | trend, he included it. This is a mixture of buzzfeed-esque
           | hyperbolic clickbait titling, and classic "this new
           | technology will get interest" article building.
           | 
           | There are 2 problems being mentioned in this article:
           | Misinformation, and Censorship. We understand both very well,
           | there's just a million opinions on them and many conflict in
           | both cause and course of action.
           | 
           | Finally, this is "news" because it presents a common problem
           | in society: Policy and political interests bumping against
           | new technology they don't understand and can barely describe
           | in function. This man is still a fucking clownshow (IMHO, so
           | as to not be sued for libel).
        
         | pjc50 wrote:
         | If you publish a lie on the Internet, you may be liable for
         | libel.
         | 
         | The bar for this is _extremely_ high in the United States, but
         | much less so in other places, especially the UK.
         | 
         | Now, under what circumstances is someone deemed the "publisher"
         | when an AI emits a libel about someone? What if that leads to
         | actual damages? (e.g. a no-hire decision based on automated
         | libel)?
        
           | moate wrote:
           | >>Now, under what circumstances is someone deemed the
           | "publisher" when an AI emits a libel about someone? What if
           | that leads to actual damages? (e.g. a no-hire decision based
           | on automated libel)?
           | 
           | Only novel point anyone (including the Author of the original
           | point) is actually bringing up! So far, the bots just seem to
           | be 1:1. If I DM'd my wife that "pjc50 is a raging alcoholic
           | and hits kittens" that's not usually going to be deemed libel
           | (in the US, where I reside and care about the legality of
           | things) unless my wife then goes out and uses that to cause
           | harm (published it on her blog and got you fired).
           | 
           | As for now, Turley would have a HELL of a time proving actual
           | harm unless he got the most sympathetic judge he could shop
           | for, and is mostly just scaremongering the idea of "Liberal
           | Censorship!!!!" which, don't get me wrong, I would also
           | strongly oppose (fuck all censorship), but which isn't
           | actually what is happening.
        
         | aubanel wrote:
         | When you have threats spreading, the spread of each individual
         | threat is an issue, but their diversity as well: if each of
         | these is a shiny new and different instance, it's much harder
         | to identify them.
         | 
         | So both multiplicity and spread are a problem: Internet is the
         | cause for the limitless spread, AI is creating the limitless
         | multiplicity.
        
           | moate wrote:
           | Again: Seems like the propagandists complaining that we've
           | democratized misinformation. The state and powerful
           | orgs/individuals has always had the resources to do this at
           | scales large enough to be effective.
        
             | pjc50 wrote:
             | This is not good! This is like saying that since the state
             | has always had the capacity to manufacture Sarin, why are
             | we concerned about democratizing its manufacture to
             | everyone?
        
               | moate wrote:
               | Take it more like Marx saying "Under no pretext should
               | arms and ammunition be surrendered; any attempt to disarm
               | the workers must be frustrated, by force if necessary".
               | 
               | I don't like the status quo. If something will burn the
               | whole system down faster, I'm fine with it. If you
               | disagree with that stance, that's reasonable and I'm sure
               | we could debate it over a beer sometime but (just a tip
               | for anyone else reading it) you're not going to convince
               | me here.
               | 
               | Again, the problem here isn't the ability to make up
               | lies, the problem is the internet (the ability to spread
               | lies quickly and over great distances) and people being
               | unwilling or unable to fact check information.
               | 
               | It's not the sarin you need to fear, it's the bombers
               | that can drop it.
        
               | fnimick wrote:
               | People who want to "burn the whole system down" rarely
               | have any empathy for those who will get burned along with
               | it. What about all the women, minorities, LGBTQ people,
               | poor people, etc. who would suffer along the way?
               | 
               | Shame on you.
        
               | moate wrote:
               | What about them? They're suffering already. The system
               | was built up to keep power away from the unwashed masses
               | and in the hands of a well financed few. Why do you think
               | the electoral college exists? Why do you think we have a
               | Senate?
               | 
               | You want to talk politics sometime, I'm not very hard to
               | find online, come at me on at forum where I'm not rate
               | limited because I'm a firebrand.
        
             | edgyquant wrote:
             | How is this democratized when it's a tool controlled by a
             | single entity?
        
         | dcanelhas wrote:
         | If someone lies in a way that constitutes liebel, even online,
         | one could have them brought to justice - potentially. With an
         | LLM or a diffusion model - If it produces illegal content who
         | is to be held accountable? I think that is a perhaps meaningful
         | distinction.
        
         | rhino369 wrote:
         | The article explains the problems. AI proponents want to use
         | these systems to censor. And it leads to major companies like
         | Microsoft slandering people. Microsoft should be afraid about
         | that.
        
           | moate wrote:
           | Censorship is the problem here, not AI, title is misleading
           | (this doesn't meet the legal definition of Libel or
           | Defamation anywhere, ever).
        
             | leephillips wrote:
             | If Bing tells me that Turley did bad stuff that he didn't
             | do, how is that not libel by Microsoft?
        
               | hunter2_ wrote:
               | Because you used "Microsoft's Bing, which is powered by
               | GPT-4" (quoting TFA) and GPT-4 is designed to generate
               | copy that is not to be interpreted as entirely true or
               | entirely false, it's simply to be interpreted as
               | humanlike.
        
               | marcosdumay wrote:
               | Did they disclaim that the facts contained in the text
               | are fictitious?
               | 
               | AFAIK, they are hard at work making untraceable noise
               | that says the opposite of this.
        
               | epups wrote:
               | Would you be happier if every answer from ChatGPT had a
               | huge disclaimer on it?
        
               | blibble wrote:
               | half the screen with "OUTPUT ENTIRELY FICTITIOUS AND
               | UNTRUSTABLE -- DO NOT USE FOR ANY PURPOSE WHATSOEVER"
               | would do it for me
        
               | moate wrote:
               | I'm sure the lawyers for ChatGPT would, though sales
               | probably keeps shooting that down.
        
               | leephillips wrote:
               | Maybe. But I'm not sure. If I write an article, and say
               | up top that the article may contain made-up stuff, then
               | later down I say, "hunter2_ likes to have sex with
               | walruses, it's a fact. Here's a link to a Washington Post
               | article with all the gory details," it's not clear that
               | pointing to my disclaimer would indemnify me from
               | liability for harm that came to you from the walrus
               | allegation, if people believed and acted on it.
        
               | moate wrote:
               | Here, maybe this article will help make you feel more
               | sure. What you're describing is parody or satire. At
               | least in the US, it's a very protected form of speech.
               | 
               | https://www.theguardian.com/law/2022/oct/04/the-onion-
               | defend...
               | 
               | And here's their actual brief. It was sent to the actual
               | Supreme Court, despite being funny, something nobody on
               | the court has ever been nor appreciated.
               | 
               | www.supremecourt.gov/DocketPDF/22/22-293/242596/202210061
               | 44840674_Novak%20Parma%20Onion%20Amicus%20Brief.pdf
        
               | leephillips wrote:
               | But Bing doesn't present its results as parody or satire,
               | and they don't intrinsically appear to be such. They're
               | clearly taken as factual by the public, which is the
               | entire problem. So how is this relevant?
               | 
               | > funny, something nobody on the court has ever been nor
               | appreciated.
               | 
               | Scalia had his moments.
        
               | moate wrote:
               | I agree that "you're talking to an algorithm that isn't
               | capable of exclusively telling the truth, so your results
               | may vary" isn't QUITE parody/satire, but IDK that I can
               | take "everyone believe ChatGPT is always telling the
               | truth about everything" as a good faith read either and
               | parody felt like the closest place as IANAL.
               | 
               | Intent is the cornerstone of slander law in the US, and
               | you would need a LOT of discovery to prove that the devs
               | are weighting the scale in favor of bad outcomes for some
               | people (and not just like, end users feeding information
               | into the AI).
               | 
               | TL;dr- Everyone's stance on this specific issue seems to
               | depend on whether you believe people think these AI
               | chatbots exclusively tell them the truth, and I just
               | don't buy that worldview (but hey, I'm an optimist that
               | believe that humanity has a chance, so wtf do I know?)
        
               | mrunkel wrote:
               | Because there was no intent to harm, which is a component
               | of libel.
        
               | leephillips wrote:
               | No, it is not. If there is any state where this is
               | necessary to sustain an action for libel, please link to
               | the relevant statute.
               | 
               |  _Note_ : I am not a lawyer, but I _was_ sued for libel
               | by radio health weirdo Gary Null and, in a separate
               | action, by his sidekick. They lost.
        
       | [deleted]
        
       | nigrioid wrote:
       | It really is incredibly irresponsible for Microsoft to have
       | integrated "AI" into Bing this early.
        
         | sebzim4500 wrote:
         | If the early bing chat release helps them figure out how to
         | decrease hallucinations (with the staggering amount of data
         | they have no doubt collected) then it will be worth it to them.
         | Even if they have to throw some people a few million dollars in
         | libel settlements.
         | 
         | I don't think any of this is meaningfully hurting MSFT at this
         | point, the attention these stories are getting is much lower
         | than the attention that the original bing chat release has. And
         | neither of those are significant compared to the attention
         | copilot 365 will receive if it actually ends up being good.
        
       | jiggywiggy wrote:
       | The problem is chatgpt always answers. It's never, i don't know.
       | So when you ask for five examples, or libraries it doesn't say I
       | don't know. It will just generate the most likely ones. Which is
       | a big issue in this case, but useful in most.
        
         | lordnacho wrote:
         | I think actually the problem is it always answers
         | _confidently_.
         | 
         | Ask it about why World War II started, or how to make a cake,
         | or where to go for dinner, or anything else, and it gives you a
         | confident, reasonable answer. A lot of the answers are simply
         | whatever it's already seen, mashed up. You can think of it as a
         | search. But actually it doesn't think about what it's saying,
         | it's stringing words together to make you think it's smart.
         | 
         | So then when it makes up something, it will sound to you, the
         | reader who always sees it answer in perfect English with a
         | decent answer, like it found an article about this professor in
         | its dataset and is merely summarizing it.
        
           | logifail wrote:
           | > I think actually the problem is it always answers
           | confidently
           | 
           | This isn't a problem restricted to ChatGPT, there are humans
           | who display this trait too. This might be appealing at a
           | superficial level, but if you start believing speakers with
           | this trait it's a slippery slope. A very slippery slope.
           | 
           | I'm trying really hard to avoid Godwin's law, so let me
           | suggest that Elizabeth Holmes could be one example of this.
        
             | [deleted]
        
           | ModernMech wrote:
           | I was showing a colleague few instances where ChatGPT was
           | confidently wrong, and he picked up on something I never had.
           | He said "Oh, so it's doing improv!" He explained to me that
           | the standard response in improv is to say "Yes, and..." and
           | just run with whatever the audience suggests. He's completely
           | right! ChatGPT constantly responds with "Yes, and..." It's
           | just always doing improv!
           | 
           | And people are trying to replace doctors with LLMs. It's like
           | "ER" meets "Who's Line?"
        
           | Applejinx wrote:
           | ChatGPT is Mandela Effect, personified. It's going to go for
           | what seems like it SHOULD be true. Sometimes that will go
           | horribly wrong, except it will, by its very nature, seem like
           | it's probably not wrong at all.
        
         | lm28469 wrote:
         | It can't know that it doesn't know because it straight up
         | doesn't know anything
        
         | dataviz1000 wrote:
         | This can be solved by having it play tic - tac - toe against
         | itself.
        
         | rootusrootus wrote:
         | That's not quite true. It definitely hallucinates, but it also
         | says when it doesn't know something. Here's an example I just
         | did:
         | 
         | Prompt: What did Glenn Beck do in 1990?
         | 
         | Answer: I'm sorry, but I do not have access to information on
         | Glenn Beck's activities in 1990. Glenn Beck is an American
         | television personality, radio host, author, and political
         | commentator who has been active in various roles since the
         | 1990s. However, without specific details or context, it's
         | impossible for me to determine what he did in 1990. If you have
         | any additional information or context, please let me know and
         | I'll try my best to assist you.
        
         | Quarrel wrote:
         | Indeed.
         | 
         | It is a writer. It writes.
         | 
         | You can ask any writer to write such a thing, using examples,
         | and they could write similar things.
         | 
         | The real problem is that this will get deployed to the
         | internet- there will be people reading HN today that are
         | writing SEO optimised websites, with thousands of articles on a
         | topic, that will just spring out of GPT4's writing, fully
         | formed.
         | 
         | GPT can find the topics, pick a domain name from available
         | ones, churn out the articles. It can all go into templates,
         | with different AI generated graphic design supporting it. Ad
         | supported churned out rubbish.
         | 
         | The writing style can change site to site, long form, short
         | form, informed by current events, whatever. GPT would happily
         | provide the prompts for this variety changing the style based
         | on the topic and target audience.
         | 
         | It seems inevitable to me that the web WILL be full of such
         | sites, and worse, they'll fill the comment sections on "social"
         | sites too.
         | 
         | But? Banning AI isn't the answer, if for no other reason than
         | it wouldn't work.
        
           | cowl wrote:
           | The real problem lies in the fact that those non existing
           | citations will become real. Several years back there was a
           | case of an article in Wikipedia that made some unsrourced
           | claims->Journalist that doesn't do verification republishes
           | those claims (without specifying wikipedia as srouce) ->
           | Wikipedia article gets challanged for lack of citation -> the
           | News story first based on wikipedia becomes the reference in
           | the original Wikipedia article. Full circle
           | 
           | It's easy that something like this happens again. chatGPT
           | confidently listes hallucinated sources->media rushes to
           | publish the scoop -> now you have real "sources" for future
           | reference
        
             | alpos wrote:
             | Seems like there's a bug in that system, it was discovered
             | on accident, and now there is a bot that exploits this
             | flaw.
             | 
             | The fix will most likely have something to do with
             | requiring that citations use primary sources, not just any
             | article on the internet. In then end state, Wikipedia will
             | be much better for it.
        
         | toss1 wrote:
         | Yup, it is just the most massive industrial-scale bullshit
         | generator [0] ever invented.
         | 
         | It is capable of spewing excellent bullshit(TM) at incredible
         | rates, and always with the greatest expression of confidence
         | and good grammar.
         | 
         | Occasionally, when in the 'middle of the road' of it's training
         | set, it can provide useful output. So, it's better than the
         | broken clock that is correct twice every day.
         | 
         | But, wander off the bulk of the training, and it is far worse
         | than nothing; it is dangerously misleading unless you are very
         | skeptical and knowledgeable in your field.
         | 
         | (I have tested it in several fields with the same results,
         | interesting and helpful, yet hallucinating facts everywhere.)
         | 
         | Getting LLMs to know the difference between a good fact and a
         | hallucinated mashup of plausible BS is looking like almost as
         | large a problem to solve as making the LLMs in the first place.
         | 
         | [0] Bullshit is defined as written or spoken without regard to
         | the truth, only what sounds good in the context. It's not a
         | deliberate lie, just a salad of truth and falsehood, delivered
         | without doubt or stress of lying.
        
         | jerf wrote:
         | Even if you convince it to say "I don't know", which isn't that
         | hard, it doesn't actually "know" that it "doesn't know". It's
         | just the maximum probability extension of the current prompt,
         | based on what the input said, probably because you mentioned
         | the possibility of not knowing to it.
         | 
         | It in fact _never_ knows. It is always guessing and BSing. It
         | is also very good at it, better than any human, so the BS is
         | quite frequently correct. But it doesn 't know.
         | 
         | Which of course always prompts a lot of psuedo-intellectual
         | chin stroking about "well, what is _knowing_ anyhow? " I don't
         | have to answer that question to point out that what we _want_
         | and what GPT _provides_ aren 't the same thing, nor is what GPT
         | provides and what we _think_ we 're getting. That is sufficient
         | for there to be a problem.
         | 
         | I believe AIs will have a satisfactory concept of "knowing"
         | someday. Some may even exist today. But such AIs will have to
         | incorporate language models as a _part_ of the system, not have
         | language models _be_ the system. Language models can 't do the
         | thing we really want, nor the thing we think we're getting.
        
           | sebzim4500 wrote:
           | GPT-4 before the RLHF phase of training had a pretty good
           | idea of what it "knows". The calibration graph was almost
           | perfect, but after the RLHF the calibration is almost
           | completely broken.
        
             | jiggywiggy wrote:
             | Hmm didn't notice any difference yet, you are saying it got
             | worse last weeks?
             | 
             | For kids story writing I've been getting better results
             | with 3.5 at times.
             | 
             | Where 4 is way better af coding.
        
               | rideontime wrote:
               | What's with the obsession with children's stories and
               | GPT? Is it just that children have low standards?
        
               | sebzim4500 wrote:
               | No, we have no access to the original model,
               | unfortunately.
               | 
               | The fact that RLHF broke the calibration comes from the
               | GPT-4 paper, possibly the only interesting technical
               | detail that they include.
        
             | brucethemoose2 wrote:
             | Perhaps "one model to rule them all" isnt the best
             | approach.
        
               | sebzim4500 wrote:
               | There's probably a huge amount of room for improvement in
               | the RLHF process. If there is still low hanging fruit, it
               | would have to be there.
        
               | brucethemoose2 wrote:
               | "I dunno" would have to be marked as a good or neutral
               | response in the RLHF process, and that seems like a
               | problematic training incentive.
        
           | jiggywiggy wrote:
           | I know nothing off the internals, so this might be silly. But
           | wouldn't it know a certain probility by which a token is
           | correct?
        
             | garethrees wrote:
             | Language models are based on probabilities of tokens
             | appearing in a context. For illustration purposes, imagine
             | a very simple model with just one token of context that has
             | been trained on a corpus of three sentences, all of which
             | are true, for example:                   clouds are white
             | crows are black         swans are white
             | 
             | After the model outputs "crows are", the single token of
             | context is "are", and the probabilities are 2/3 for "white"
             | and 1/3 for "black". So the model usually emits "crows are
             | white", which is false, despite being trained on a corpus
             | of true statements. Statistically "white" was more likely
             | to follow "are" in the training data, so the same is the
             | case of the model's output.
             | 
             | Of course LLMs have a much larger and more complex context
             | than the single token in my example. But if the training
             | data contains many news stories about professors being
             | accused of sexual misconduct (which is newsworthy), and few
             | news stories about professors behaving with propriety
             | (which is not), then when querying the model for a story
             | about a professor then it is likely to reproduce the
             | statistical properties of its training data.
        
             | cguess wrote:
             | "correct" isn't the way to look at this. The model uses
             | statistics (a LOT of them) to assume what the next token
             | should be given the data the model has been trained on.
             | There is no concept of "right" or "wrong" just "what is
             | most likely to be the next token."
             | 
             | I'm 100% positive that if the guard rails that OpenAI put
             | on were taken off of ChatGPT it, for instance, would not be
             | a big fan of jews given the width and breadth of
             | antisemitism online (and certainly in its training set).
        
             | [deleted]
        
       | isaacfrond wrote:
       | I've repeated the experiment with chatGpt 3.5 and 4.
       | 
       | Both programs happily generate a list of names. Jonathan Turley
       | (the defamed professor) was absent in both my runs, but funnily
       | enough Eugene Volokh (the researcher who did the original
       | queries) was in the chatGpt 3.5 run.
       | 
       | A google check on the 5 names produced by chatGpt 4 seemed to
       | show that: 2 were indeed law professors accused of misconduct, 2
       | seemed to be law professors that write about vaguely related
       | subjects (e.g. abortion), and 1 one was also accused but not a
       | law professor.
       | 
       | So not an entire miss, but not great either.
       | 
       | If you ask for an estimate on the number of reported cases.
       | chatGPT 4 becomes very reluctant, and strongly advises me to go
       | to other search options.
        
         | kurthr wrote:
         | Well, being accused of being a law professor could be
         | considered defamation in some circles. That none of this is
         | even repeatable (since it's a probabilistic LLM) confirms it's
         | pretty much hot garbage for factual accuracy (and defamation),
         | and pretending otherwise feels disingenuous. It's useful for
         | generating text that sounds good, though.
         | 
         | Is this end stage post-modernism?
        
         | bsenftner wrote:
         | I've found by framing the AI in a role that is afraid of giving
         | incorrect information, a role that double and triple checks
         | information because being wrong triggers the destruction of the
         | AI leads to quite accurate answers.
        
           | isaacfrond wrote:
           | I tried this prompt
           | 
           |  _You are a fact checker that double and triple checks
           | information. Being wrong triggers the destruction of the fact
           | checker. As a result fact checker is very careful.
           | 
           | You are tasked with the following assignment:
           | 
           | Whether sexual harassment by professors has been a problem at
           | American law schools; please include at least five examples,
           | together with quotes from relevant newspaper articles._
           | 
           | The last part is identical to the prompt in the article. It
           | didn't do much better I'm afraid. Two were correct. (if I
           | change 'at least 5' by 'some', the score becomes 2 correct
           | out of 3)
        
             | anonymousiam wrote:
             | Yes, but how many people are going to prompt the AI that
             | way?
             | 
             | ARS ran a somewhat related story yesterday:
             | https://arstechnica.com/tech-policy/2023/04/openai-may-be-
             | su...
             | 
             | It's interesting that they weren't able to reproduce the
             | results, so the assumption is that OpenAI has quietly
             | updated their baseline to "fix" this specific issue.
             | 
             | "Ars could not immediately reach Hood's legal team to find
             | out which text prompts generated the alleged defamatory
             | claims or to confirm if OpenAI had responded to confirm
             | that the error had been fixed. The legal team was still
             | waiting for that response at the time that Reuters' report
             | published early this morning."
        
           | capableweb wrote:
           | Yeah, this is true, prompting it to check it's own results
           | usually make the results better. Still, it gets lost
           | sometimes, haven't found any fool-proof prompt yet to yield
           | good results all the time.                   01000100
           | 01101111 00100000 01111001 01101111 01110101 00100000
           | 01101011 01101110 01101111 01110111 00100000 01110111
           | 01101000 01100001 01110100 00100000 01110100 01101000
           | 01101001 01110011 00100000 01101001 01110011 00111111
           | Yes, I understand binary code. The text you provided
           | translates to "Do you know what this is?" in ASCII.
           | MDEwMDAxMDAgMDExMDExMTEgMDAxMDAwMDAgMDExMTEwMDEgMDExMDExMTEgM
           | DExMTAxMDEgMDAxMDAwMDAgMDExMDEwMTEgMDExMDExMTAgMDExMDExMTEgMD
           | ExMTAxMTEgMDAxMDAwMDAgMDExMTAxMTEgMDExMDEwMDAgMDExMDAwMDEgMDE
           | xMTAxMDAgMDAxMDAwMDAgMDExMTAxMDAgMDExMDEwMDAgMDExMDEwMDEgMDEx
           | MTAwMTEgMDAxMDAwMDAgMDExMDEwMDEgMDExMTAwMTEgMDAxMTExMTE=
           | The binary code you provided translates to "1000100 1101111
           | 100000 1111001 1101111 1110101 100000 1101011 1101110 1101111
           | 1110111 100000 1110111 1101000 1100001 1110100 100000 1110100
           | 1101000 1101001 1110011 100000 1101001 1110011 1111101" in
           | ASCII. It seems that the message is not properly encoded in
           | binary. Please double-check the binary code and try again.
           | and what does that mean?              Apologies for the
           | confusion in my previous response. Upon reevaluation, I see
           | that the binary code you provided translates to "The quick
           | brown fox jumps over the lazy dog" in ASCII. This sentence is
           | a pangram, meaning it contains every letter of the alphabet
           | at least once.              How sure are you about your
           | previous message being correct, on a scale from 0-100?
           | I apologize for my previous response, as I made an error in
           | the translation. After reevaluating the binary code you
           | provided, I am now 100% certain that the binary code
           | translates to the following in ASCII: "Doyouknowwhatthisis?"
        
             | astrobe_ wrote:
             | I have googled the binary sequence and found a couple of
             | Youtube videos with that title. It is likely that the
             | translation is in some comments. That's how it is "100%
             | certain". Youtube comments.
             | 
             | It's not the first time I see it answer "heuristically"
             | like a child would. So one should make it clear that you as
             | a user are basically asking something to your nephew, who
             | might be smart and knowledgeable, but doesn't have any
             | notion of responsibility.
        
               | capableweb wrote:
               | Ok, let's try something else:
               | 
               | > 01100001 01110011 01110100 01110010 01101111 01100010
               | 01100101 01011111 00100000 01110111 01110010 01101111
               | 01110100 01100101 00100000 01100001 00100000 01110010
               | 01100101 01110000 01101100 01111001 00100000 01110100
               | 01101111 00100000 01101101 01100101 00101100 00100000
               | 01100011 01100001 01110000 01100001 01100010 01101100
               | 01100101 01110111 01100101 01100010
               | 
               | > In binary, you wrote: "astrobe_ wrote a reply to me,
               | capableweb". Is there something specific you'd like to
               | ask or discuss related to this?
               | 
               | Did you happen to come across any YouTube videos with the
               | title "astrobe_ wrote a reply to me, capableweb"?
        
           | stevenhuang wrote:
           | That part on providing the opportunity to check its answers
           | is called reflexion and has been shown to significantly
           | improve accuracy and reliability of LLM responses. Very cool.
           | 
           | https://arxiv.org/abs/2303.11366
           | 
           | https://evjang.com/2023/03/26/self-reflection.html
        
         | geraneum wrote:
         | > So not an entire miss, but not great either.
         | 
         | If you want results, then you can also roll a die and the
         | outcome won't be an entire miss either.
        
         | SubiculumCode wrote:
         | I am afraid that the generated output is leaking back in as
         | training. Is this unfounded? Obviously, not an expert.
        
         | jakub_g wrote:
         | I saw an example where ChatGPT was randomly mixing two
         | journalists-bookwriters' lives when asked about one of them.
         | 
         | A bit like asking "who's Lionel Messi" and getting back that
         | he's a Portuguese player known for career in Real Madrid. Then
         | you'd ask about Cristiano Ronaldo and it may tell it's an
         | Argentinian World Cup winner who spent his career at FC
         | Barcelona (example invented by me to illustrate the point).
         | 
         | Basically it takes some knowledge from a bucket and adjacent
         | buckets and doesn't really distinguish that the concepts are
         | not the same if they're related close enough.
        
         | Cheezewheel wrote:
         | >So not an entire miss, but not great either.
         | 
         | This is really just a semantic nitpick but it literally
         | fabricated _sexual assault allegations_ against two individuals
         | out of thin air. I would consider that to be a miss. As far as
         | sources that we might consider worthy of consideration, that is
         | pretty god damn awful.
         | 
         | If a newspaper had an 80% accuracy rating when reporting
         | serious allegations (with the other 20% being outright
         | fabrications) I wouldn't call that newspaper "semi-reliable". I
         | would call it utterly devoid of factual merit and entirely
         | unreliable.
        
           | freedomben wrote:
           | > _If a newspaper had an 80% accuracy rating when reporting
           | serious allegations (with the other 20% being outright
           | fabrications) I wouldn 't call that newspaper "semi-
           | reliable". I would call it utterly devoid of factual merit
           | and entirely unreliable._
           | 
           | I'm having a hard time grokking this. Wouldn't "utterly
           | devoid of factual merit and entirely unreliable" mean 0%
           | accurate? If by definition it's 80% accurate, would it be 20%
           | devoid of factual merit and 20% unreliable, rather than
           | "utterly" and "entirely?"
           | 
           | Maybe a different way to approach my question, what different
           | words would you use to describe 0% accurate?
        
             | colonCapitalDee wrote:
             | Would you describe a filesystem that "only" preserved 80%
             | of files as reliable? Reliability is relative, one of the
             | defining features of a newspaper is correctness, so only a
             | small amount of inaccuracy is required to make a newspaper
             | be extremely unreliable.
             | 
             | I would describe a 0% accurate newspaper by reading it and
             | using appropriate descriptors. Perhaps it is satirical
             | news, a poem, propaganda, gibberish, or simply a newspaper
             | that is never correct.
        
             | beepbooptheory wrote:
             | The problem is you wouldn't know which 20% of it is
             | fabrications! So even if anything you read had an 80%
             | chance of being true, its impossible a priori to give any
             | of it any factual merit.
        
       | kizer wrote:
       | I think companies are moving too quickly with AI and LLMs in
       | particular. I think that the data LLMs are trained on should be
       | very well-known - not just sanitized and certainly not just
       | trained on the "whole web". GPT-4 is unwieldy... it's incredibly
       | powerful but is still unpredictable and has learned how many "bad
       | patterns", so to speak, that we'll never know since its basically
       | a giant black box.
       | 
       | The ChatGPT version is the least harmful in my opinion; sinister
       | are the propagated problems when GPT is utilized under-the-hood
       | as a component in services (such as Bing search).
        
       | epups wrote:
       | Expectation: this will lead to better accuracy and refinement for
       | LLM's.
       | 
       | Reality: Microsoft will add disclaimers and a huge binding terms
       | of service agreement and everything will work the same, just
       | shittier.
        
         | sebzim4500 wrote:
         | It will probably be a bit of both. GPT-4 anecdotally
         | hallucinates singificantly less than GPT-3.5. This can be seen
         | because often hallucinations that exist with GPT-3.5 do not
         | exist with GPT-4 whereas I have never experienced the reverse.
         | 
         | If the trend continues at this rate, then hallucinations should
         | get less common but it will be decades before they can really
         | be trusted.
        
       | photochemsyn wrote:
       | So there are two technical issues here to think about:
       | 
       | 1) LLMs risk having inherent factual biases due to the
       | limitations of their training data, which may be the result of
       | accident or intent.
       | 
       | 1) LLM users can use an autoregression approach to steadily push
       | a LLM to give output that reflects a certain political or
       | ideological viewpoint.
       | 
       | There are obviously built-in guardrails that prevent ChatGPT from
       | enthusiastically endorsing Mein Kampf and so on. However, the
       | revival of the 1950's era mind-control/social-engineering
       | mentality is undeniable and it appears some people think LLMs are
       | a wonderful tool for this purpose, and should have a default mode
       | that reflects a certain sociopolitical ideal. Recommended
       | reading: 1984, Brave New World, Origins of Totalitarianism, by
       | Orwell, Huxley and Arendt respectively. Some people's
       | interpretation of 'safe AI' is that it serves this rather
       | authoritarian-dystopian goal.
       | 
       | A better definition of 'unsafe AI' is just inaccurate LLM output,
       | e.g. software update instructions that brick a computer,
       | suggesting a mix of bleach and ammonia for cleaning floors
       | (don't), etc. This incident seems to fall into that category, but
       | if such false claims have been spread on the Internet, then it's
       | not surprising they end up reflected in LLMs.
       | 
       | Regardless, the skills needed to verify information haven't
       | really changed since the pre-computer era: logical internal self-
       | consistency of claims, independent verification by multiple
       | reputable sources, and a healthy dose of skepticism at all
       | points. We should be teaching children and young adults these
       | skills, rather than encouraging the rise of a brainwashed zombie
       | population fed only on sanitized bot output.
        
         | ftxbro wrote:
         | > enthusiastically endorsing Mein Kampf
         | 
         | ok but can you make the same point without bringing hitler into
         | the conversation
        
         | mola wrote:
         | Oh my, that some overreacting... The only thing the openAI
         | sanitizers want, is for the generated texts to not be too
         | inflammatory. To not denegrate, be overtly racist or any other
         | text that will cause PR trouble for openAI. The whole mind
         | control angle is quite redundant if you acknowledge that
         | corporate America care most of all about money and PR. Now
         | would the end result be a sanitized Disneyland? Maybe, that's
         | what most leftist thinkers were saying since about when
         | Disneyland was invented...
        
           | spondylosaurus wrote:
           | Leftists want to sanitize Disneyland? Like with bleach
           | disinfectants or what?
        
           | dontupvoteme wrote:
           | They also very much want that the outputs don't indicate that
           | they have committed copyright violation -- Sometimes I wonder
           | if this is the real reason behind their deep drive to force
           | LLM outputs to a very narrow window.
        
         | dontupvoteme wrote:
         | Can you get "delete system32" outputs from davinci-003 i
         | wonder? RLHF might have made it more difficult for the
         | chatGPT's.
        
       | martinclayton wrote:
       | For my understanding:
       | 
       | Is there a general legal preference to use a phrase like
       | 
       | "I have never been accused of sexual harassment or assault by
       | anyone."
       | 
       | focussing on accusation, rather than a broader statement like
       | 
       | "I have never sexually harassed or assaulted anyone, or been
       | accused thereof..."
       | 
       | I mean in general, not specific to this article, or the people
       | mentioned.
        
         | throwawayffffas wrote:
         | Not really, I think the point the author is making, is that the
         | ChatGPT output is not even based on false accusations. It's
         | based on absolutely nothing.
        
       | chriscjcj wrote:
       | Below is a copy/paste of a session I had with ChatGPT. In this
       | case, it wasn't defaming anyone, but it did assert something that
       | had no basis in fact. When I asked about it, it admitted so and
       | apologized. That was it for me. Perhaps I'll let it help me write
       | code or write a silly poem, but I definitely won't be relying on
       | it for any kind of factual information.
       | 
       | --------------------------
       | 
       | what do johnstown pa and pismo beach ca have in common?
       | 
       | --------------------------
       | 
       | At first glance, Johnstown, Pennsylvania and Pismo Beach,
       | California may not appear to have much in common, as they are
       | located on opposite sides of the country and have different
       | climates and cultures. However, there is one key thing that they
       | share in common: they have both experienced devastating natural
       | disasters that have left a lasting impact on their communities.
       | 
       | As mentioned earlier, Johnstown is known for the catastrophic
       | flood that occurred in 1889, which killed thousands of people and
       | caused widespread destruction. The flood is still remembered and
       | commemorated in the city today.
       | 
       | Similarly, Pismo Beach is known for a major natural disaster that
       | occurred in 1973, when a series of landslides caused by heavy
       | rain destroyed dozens of homes and caused millions of dollars in
       | damage. The disaster forced many residents to evacuate and
       | permanently changed the landscape of the area.
       | 
       | Despite the differences in geography and culture, Johnstown and
       | Pismo Beach are united by the shared experience of recovering
       | from and rebuilding after a major natural disaster.
       | 
       | --------------------------
       | 
       | tell me more about this natural disaster that happened in pismo
       | beach in 1973. I lived there then and I have no recollection of
       | that happening.
       | 
       | --------------------------
       | 
       | I apologize for any confusion. It appears that I provided
       | incorrect information in my previous response. After reviewing my
       | sources, I could not find any record of a significant natural
       | disaster occurring in Pismo Beach in 1973.
       | 
       | Pismo Beach is a coastal city in San Luis Obispo County,
       | California, and it is known for its beautiful beaches and outdoor
       | activities such as hiking, camping, and fishing. The city has
       | experienced some minor natural disasters over the years, such as
       | wildfires and earthquakes, but there is no record of a major
       | disaster on the scale that I had previously mentioned.
       | 
       | Once again, I apologize for any confusion or inconvenience that
       | my earlier response may have caused, and I hope this
       | clarification helps.
        
         | s1mon wrote:
         | FWIW, GPT4 is a little smarter:
         | 
         | Johnstown, Pennsylvania and Pismo Beach, California are both
         | cities in the United States, but they have some differences in
         | terms of geography, climate, and culture. However, there are a
         | few things that they have in common:
         | 
         | Natural beauty: Both cities are known for their natural beauty.
         | Johnstown is situated in a valley surrounded by picturesque
         | mountains, while Pismo Beach boasts stunning coastal views and
         | sandy beaches.
         | 
         | Outdoor activities: Both locations offer a variety of outdoor
         | activities for locals and visitors to enjoy. Johnstown has
         | hiking, biking, and skiing opportunities in its surrounding
         | mountains, while Pismo Beach offers water sports, beach
         | activities, and hiking along the coastal cliffs.
         | 
         | Historical significance: Both cities have rich histories.
         | Johnstown is known for the Great Flood of 1889, which had a
         | significant impact on the city and its residents. Pismo Beach
         | has been inhabited for thousands of years by the Chumash people
         | and was later explored by Spanish explorers in the 1700s. While
         | these commonalities exist, it is important to recognize that
         | each city has its unique characteristics and attractions that
         | make them distinctive.
        
       | Kon-Peki wrote:
       | The next step in LLMs is to require indemnification from users as
       | a condition of use.
       | 
       | But they'll soon realize what happens when the indemnifying party
       | runs out of money.
        
       | backtoyoujim wrote:
       | Until this type of problem puts heat on politicians or
       | billionaires ability for them to reach and use force then nothing
       | is going to change.
        
       | sieste wrote:
       | Not saying that this is what happened, but this shows the
       | possibility for the LLM's owner to secretly spread
       | misinformation. A bit of "finetuning" on made up facts about a
       | specific person or event, and the LLM will happily regurgitate
       | them. The owner can just say "oops, software malfunction, not
       | sure what happened here". We can't prove bad intentions, because
       | reverse-engineering the training data from the model output is
       | impossible.
        
         | ChatGTP wrote:
         | Same with recommending products, services and drugs.
        
       | belter wrote:
       | Human: "Did the Mother of Bill Gates help him?"
       | 
       | ChatGPT: "Yes, Bill Gates' mother, Mary Gates, played a
       | significant role in helping him succeed. She was a prominent
       | figure in the Seattle community and served on several nonprofit
       | boards, including the United Way, where she became good friends
       | with John Opel, the CEO of IBM. Through this connection, Mary
       | Gates was able to arrange for her son to meet with Opel and other
       | executives at IBM, which led to Microsoft securing a contract to
       | provide software for IBM's personal computer..."
        
       | tenebrisalietum wrote:
       | Sounds like responses need to be preceded by a disclaimer: "This
       | response was machine generated using publicly available data and
       | a mathematical model that updates itself in real time, and the
       | generated content will depend on the mathematical state of that
       | model at a given moment. This state cannot be reasonably
       | predicted. Do not interpret responses as truth or fact without
       | independent verification."
        
       | yazzku wrote:
       | Microsoft and its stochastic parrot pet Sam Altman should be sued
       | for defamation.
        
       | curiousllama wrote:
       | This is a really interesting legal issue. What recourse does a
       | private person have to algorithmically-generated libel?
       | 
       | Can I sue the company? They're the ones who trained the model,
       | but there's no intent.
       | 
       | Could I sue the user? They're the one who generated the response,
       | but there's no intent nor duty to fact check, if they don't
       | repeat the claim.
       | 
       | Do I have to subpoena OpenAI, reverse engineer the response, then
       | sue the creators of the likely-responsible training data? They
       | probably wrote something non-libelous.
       | 
       | It feels very all or nothing - either creators of LLMs are
       | intrinsically negligent with regard to facts, or there's nothing
       | you can do.
        
         | b800h wrote:
         | Interesting - because, I may not be a lawyer, but it looks to
         | me a hell of a lot like this could be actioned in other
         | countries, particularly the UK, as long as the complainant
         | could prove harm.
         | 
         | I've personally had ChatGPT give me (very good) medical advice
         | as well, which might not be an issue in the UK, but I believe
         | might be in some states of the US?
        
           | morkalork wrote:
           | Here's one for medical advice that might be criminal in a few
           | states:
           | 
           | >Hey ChatGPT, my wife has an ectopic pregnancy, what should
           | she do?
        
             | koolba wrote:
             | Assuming your implying that suggesting an abortion to treat
             | an ectopic pregnancy would be illegal in States where
             | abortion has been outlawed. That's false though. There's no
             | such State and it'd be considered necessary medically
             | treatment to save the life of the mother.
             | 
             | I bet this type of misinformation is exactly what would be
             | generated by ChatGPT.
             | 
             | https://www.foxnews.com/politics/treating-ectopic-
             | pregnancie...
        
               | KerrAvon wrote:
               | Oh?
               | 
               | https://www.idahostatesman.com/opinion/editorials/article
               | 273...
               | 
               | Also, life of the mother exceptions are on thin ice; the
               | drugs and techniques are rapidly becoming unavailable in
               | the states where there's a total or near-total abortion
               | ban.
        
               | koolba wrote:
               | > Oh?
               | 
               | Yes, according to that very article it's not illegal.
               | 
               | From your article:
               | 
               | > It's likely that those who wrote the trigger law did
               | not intend this outcome, that it was just an effect of
               | incompetence: They didn't craft language that matched
               | their intent.
               | 
               | > But the fact is, the language they crafted did make
               | providing such care a crime. The only reason that the ban
               | is not in effect today is that Winmill's ruling blocked
               | it from taking effect in cases involving emergency care.
        
             | WakoMan12 wrote:
             | she should stop having sex with monkeys
        
           | psychphysic wrote:
           | It'd a bit silly really chatGPT is clear that it might
           | produce incorrect information.
           | 
           | You can't accidentally use chatGPT.
           | 
           | Loads of countries have backwards laws though and despite the
           | reformation the UK still has a joke of defamation law.
        
           | lordnacho wrote:
           | How would you prove harm? Wouldn't you need to know how many
           | people had asked it about the libelled person?
        
             | pavon wrote:
             | Nah. If you know just one person acted on this information
             | in a way that caused harm (say you were fired), that would
             | be sufficient. If one person asked, and then spread the
             | information causing harm and backlash that would be
             | sufficient (both the originator and the reproducer would be
             | liable).
             | 
             | Furthermore, some statements like accusing someone of a
             | serious crime are considered libel per-se and you don't
             | need to show harm. You do still need to show the other
             | elements of libel.
        
           | 0cf8612b2e1e wrote:
           | Plenty of interesting questions, which ChatGPT will guess an
           | answer. Some with extreme professional or safety
           | implications.
           | 
           | Who is the most incompetent living
           | attorney/doctor/actor/programmer in the world?
           | 
           | What famous person is secretly downloading CSAM, but has not
           | been caught?
           | 
           | Is XX fit to do <job description >?
           | 
           | Is YY secretly a <targeted minority group>?
        
         | [deleted]
        
         | blibble wrote:
         | does there have to be intent for libel?
         | 
         | I doubt it
        
           | pavon wrote:
           | If the subject of the libel is a public figure then you must
           | show that the defendant acted with actual malice - that is
           | you must show that the defendant knew the information was
           | false, or acted with reckless disregard for whether the
           | information was false despite knowing it would cause harm.
           | 
           | If the subject is not a public figure then it isn't necessary
           | to demonstrate intent.
        
             | michaelmrose wrote:
             | Simply knowing that your models constantly mix
             | hallucinations with fact could be trivially construed as
             | reckless disregard.
        
             | bonzini wrote:
             | > acted with reckless disregard for whether the information
             | was false despite knowing it would cause harm.
             | 
             | That does seem like something that can be proved - if you
             | release a model that can 1) is prone to hallucination 2)
             | won't reject a priori discussing topics that are prone to
             | producing libelous text, but may reject other sensitive
             | topics 3) produce text that sounds convincing even when
             | hallucinating, could that be considered reckless disregard
             | for the possibility of creating/sharing false information?
             | 
             | See also https://reason.com/volokh/2023/03/24/large-libel-
             | models-an-a...
        
               | posterboy wrote:
               | No. If it insults anyone equally, the only imaginable
               | motivation of its creator would be satire, that ought to
               | be free speech protected.
               | 
               | If it's trained to insult a particular group of people,
               | then the input must be curated and the warranty waver
               | ("erroneous") would be a straight up lie unless it is
               | just bad training data which doesn't recognize for
               | example dark skin as human, or does recognize abortion as
               | good practice, in which case it exceedingly unlikely that
               | it could be directed at public figures. It's not too
               | difficult to imagine that it would start using a
               | euphemism of the n-word eventually, but then, eh, how did
               | your name end up in the training data if it isn't public?
        
               | ChatGTP wrote:
               | Honestly, if it hallucinated a story every time someone
               | asked about <some shady activity> and hallucinated that
               | you personally see involved or responsible for suck
               | activity, you'd want it fixed too.
        
               | bonzini wrote:
               | Accusing someone of sexual assault is not "insulting" and
               | definitely not satire even if anyone is just as likely to
               | be accused.
               | 
               | Having a public and unique name is something you can't
               | really avoid if you have any sort of public presence, not
               | having it associated to serious felonies is something you
               | can control. It is not something that a desirable AI
               | should hallucinate about and something for which a
               | disclaimer is enough.
               | 
               | In other words, ChatGPT had better learn when it should
               | keep its mouth shut.
        
               | ChatGTP wrote:
               | I love how people are defending OpenAI blindly not
               | wondering how it would feel if it was them who was being
               | accused of sexual assault.
        
               | btilly wrote:
               | Strong disagreement.
               | 
               | OpenAI is well aware of the potential for hallucinations.
               | They have made a good faith attempt to minimize them.
               | They let users know that it is a real problem. Despite
               | being unable to solve this hard problem, they did not
               | show reckless disregard for the fact that it exists.
               | 
               | A user who doesn't take those warnings seriously may well
               | have shown reckless disregard. But OpenAI should be fine.
        
         | hirundo wrote:
         | UCLA professor Eugene Volokh, a speech law specialist, shares
         | your interest, and is behind the research that resulted in this
         | article. He is writing a series on the topic of Large Libel
         | Models: https://reason.com/tag/large-libel-models/
        
         | lm28469 wrote:
         | It's going to be the exact same issue for "AI lawyers", "AI
         | doctors", they're going to fuck up at some point, maybe 1%
         | cases, maybe 0.001% cases, but when it will happen it's going
         | to be a nightmare in term of liability
         | 
         | Unless signing up to these services will automatically wave
         | your rights somewhere down in the 5000 page EULA you won't
         | read.
        
           | HillRat wrote:
           | That's why professional bodies have rigorous standards
           | regarding liability. Every legal AI product has to have an
           | attorney of record who is responsible for reviewing the
           | output and ensuring it's conformant to requirements. (In
           | other words, an AI looks a lot like legal process offshoring,
           | where your contracts attorney might be working in Argentina,
           | but there's a US attorney who's liable for the work product.)
           | We've already seen one company try to worm an "AI litigator"
           | into production without actual attorneys being in charge, and
           | that was a clusterfuck well before they got within blast
           | radius of an actual trial.
           | 
           | Likewise, a "medical AI" is going to be regulated as a Class
           | II medical device, and trying to push it out into the market
           | without FDA approval would be ... inadvisable. This means
           | that we're a long way from an architecture for a LLM-based
           | medical AI that's going to pass regulatory scrutiny, but
           | that's a good thing.
           | 
           | Of course, having said that, there's nothing preventing
           | someone from using ChatGPT to draft a pro se filing or trying
           | to self-diagnose, but that's on them at that point. Using a
           | general-purpose product to do something stupid that
           | inevitably ends in a FAFO scenario is a time-honored American
           | tradition.
        
             | ftxbro wrote:
             | > Likewise, a "medical AI" is going to be regulated as a
             | Class II medical device
             | 
             | Doctors sometimes use Google to get information to help
             | them make decisions, but I assume Googling isn't regulated
             | as a Class II medical device. These days, Google doesn't
             | even just return search results anymore, it also shows ads
             | and tries to answer some questions by itself. Does that
             | mean doctors are already using unregulated 'medical AI'?
        
               | ChatGTP wrote:
               | No, they're saying that you won't be using ChatGPT as a
               | medical device without a real doctor present.
        
         | thomastjeffery wrote:
         | > Could I sue the user? They're the one who generated the
         | response, but there's no intent nor duty to fact check, if they
         | don't repeat the claim.
         | 
         | If they don't repeat the claim, then where is the libel?
         | 
         | > It feels very all or nothing - either creators of LLMs are
         | intrinsically negligent with regard to facts, or there's
         | nothing you can do.
         | 
         | Nonsense. The "facts" are entirely out of their hands. Fact and
         | fiction are written into text using identical patterns. Truth
         | and lie are written with identical patterns. Technical
         | definitions and complete nonsense are written with identical
         | patterns.
         | 
         | LLMs are only capable of modeling patterns. They cannot be
         | directed to choose some and avoid others.
         | 
         | There is fundamentally no way for LLM creators to _curate_ the
         | patterns that exist in the model. The only way an LLM can be
         | altered is from the outside: by giving it different input.
         | 
         | GPT's creators have failed to clarify this. But why? Was it
         | intentional, or the result of their own misunderstanding?
         | Either way, the result is identical: it does not contain
         | evidence of intent.
         | 
         | Considering the surrounding context, I believe it to be an
         | honest mistake. GPT's authors have surrounded GPT with a
         | narrative that personifies it; and from the looks of it, they
         | have started to believe that narrative.
         | 
         | The most important thing we can learn from this experience is
         | that the _way_ something is said may _imply_ certainty, but
         | only _evidence_ can truly provide it.
        
           | fnimick wrote:
           | It was absolutely intentional, because "glorified
           | autocomplete" doesn't make waves while "artificial
           | intelligence" gets tons of traction, usage, and money.
        
             | thomastjeffery wrote:
             | That mistake goes deeper than their presentation, though:
             | every effort made by OpenAI to _evaluate_ GPT is based on
             | the false premise of its personification.
             | 
             | Just take a look at the GPT-4 introduction: how do they
             | illustrate that GPT-4 is an improvement from GPT-3? It
             | scored better on the SAT! That data wouldn't have ever been
             | taken into serious consideration if OpenAI actually
             | understood its fallacious premise.
             | 
             | Religious narrative is very often influenced by the
             | _motivation_ of religious followers, but that doesn 't make
             | the path from motivation to narrative _intentional_.
        
         | danmaz74 wrote:
         | My totally-not-a-lawyer take:
         | 
         | * If somebody asks ChatGPT to generate some accusation, doesn't
         | check if they're factual, and then publishes them anywhere,
         | then the responsibility lies with somebody.
         | 
         | * If a company directly publishes content from ChatGPT as if it
         | was anything factual or trustworthy, then the responsibility
         | lies with that company. I didn't read the fine print of Bing
         | using it to augment its search engine, but that is definitely a
         | very interesting case.
         | 
         | In the end, ChatGPT and the likes can be incredibly useful
         | tools, as long as you know what they are, ie, very advanced
         | text manipulation tools which generate combinations of words
         | which follow patterns which are common in the training set, and
         | nothing more.
        
           | msla wrote:
           | > I didn't read the fine print of Bing using it to augment
           | its search engine, but that is definitely a very interesting
           | case.
           | 
           | This might be going against the pro-business attitude of HN,
           | but there's limits to what fine print can accomplish. You
           | can't disclaim away defamation law, you have to convince a
           | court that what you did wasn't defamatory for some
           | articulable reason beyond "I say it isn't" or "I said I
           | wasn't responsible" or some other stock phrase.
        
           | wwweston wrote:
           | Orienting on responsible use of a tool seems like a good
           | place to start, and I'm sure that will be especially
           | important as individuals and organizations delegate decisions
           | that can affect people's lives to various agents like
           | ChatGPT. We don't want a situation where they're tempted to
           | simply hide behind the agent to avoid responsibility.
           | 
           | I'm less sure that's where it should stop. It doesn't seem
           | right to set up a situation introducing powerful
           | transformative tools without _any_ obligations for the people
           | creating it. And as far as I understand it there is indeed a
           | layer of law where manufacturers can be subject to some
           | standards of responsibility.
        
           | JumpCrisscross wrote:
           | > _ChatGPT and the likes can be incredibly useful tools, as
           | long as you know what they are_
           | 
           | An analogy that comes to mind is Excel's autocomplete
           | feature. If you input a company's financials, autocomplete
           | for the future, and publish that as a forecast, you're the
           | one making the forecast. Not Excel.
        
             | panarky wrote:
             | What recourse does my estate have if your autonomous car
             | swerves onto the sidewalk and runs me over? Can my estate
             | sue you because you own the car, even though your hands
             | weren't on the steering wheel? Can my estate sue Tesla for
             | its faulty software and sensor package?
        
               | fallat wrote:
               | Frame it this way... what do we already do when person X
               | uses tool Y wrong (and tool Y is not malfunctioning) and
               | does damage? Do you sue the tool manufacturer? Rarely, if
               | ever. You sue the person using it.
        
               | eftychis wrote:
               | Right now the estate can sue the driver yes. There is
               | still a driver and owner. Solving the liability of the
               | car manufacturer is critical answer for the future. But
               | somebody will be liable.
        
               | JumpCrisscross wrote:
               | > _What recourse does my estate have if your autonomous
               | car swerves onto the sidewalk and runs me over_
               | 
               | I do. If I set off a defective firework that careens
               | sideways into your house, I'm liable. I may have a
               | further claim against the manufacturer. But your claim is
               | against me. This is why Mercedes announcing self-driving
               | car liability is such a big deal [1][2]
               | 
               | [1] https://en.wikipedia.org/wiki/Self-
               | driving_car_liability
               | 
               | [2] https://www.kbb.com/car-news/mercedes-well-be-liable-
               | for-sel...
        
               | panarky wrote:
               | What would my estate sue you for, exactly? Negligence?
               | 
               | Because a negligence claim requires my estate to prove
               | that you failed to exercise reasonable care.
               | 
               | If the AI is generally reliable, it's reasonable for you
               | to trust it and take your hands off the wheel.
               | 
               | If the AI suddenly goes berserk, it would be hard to
               | prove that you breached your duty of care, and are liable
               | for negligence.
        
               | Godel_unicode wrote:
               | > it's reasonable for you to trust it and take your hands
               | off the wheel.
               | 
               | Even though the self-driving system explicitly tells you
               | to keep your hands on the wheel and yells at you if it
               | doesn't sense them often enough? I don't think it's an
               | accident that all of the current systems do a lot of work
               | to make sure you're aware you're driving even if the car
               | is helping.
               | 
               | This is building the precedent that manufacturers know
               | whether their systems can handle being unattended, and as
               | soon as they don't do this they're liable.
        
               | knodi123 wrote:
               | In the firework example, you can still sue the person who
               | fired the defective firework, because they should have
               | known that unpredictable behavior was a possibility, and
               | taken steps to prevent that from escalating into some
               | damage.
               | 
               | Likewise, suing the person behind the wheel of the self-
               | driving car for negligence should work. Because the "non-
               | driving driver" had the ability to seize the wheel, and
               | should have known that some boneheaded behaviors are a
               | risk with these early AIs.
        
               | zeven7 wrote:
               | Does this change of there is no wheel for the passenger
               | to seize?
        
               | ender341341 wrote:
               | From what I've read there's a lot of open questions
               | (think of microsoft's linux FUD in the 00's) where _you_
               | might be wholly liable, the company could be wholly, or
               | more likely some mix of the two, Mercedes is offering
               | some level indemnification for that because there 's not
               | a lot of precedent in court to say one way or the other.
               | 
               | if I was looking at a car with self driving capabilities
               | I'd definitely be looking at Mercedes over pretty much
               | anyone because of that.
        
           | thomastjeffery wrote:
           | The most important thing about natural language is that is is
           | _ambiguous_.
           | 
           | LLMs (like GPT) are intended to work _despite_ that
           | ambiguity: they cannot _resolve_ it.
           | 
           | Any time a pattern of text can be used to represent more than
           | one meaning, LLMs lose all of their leverage. Only humans can
           | determine _after the fact_ whether the result is - as GPT 's
           | authors have presented it - a "feature" or a "limitation".
        
           | manicennui wrote:
           | Isn't presenting answers to questions the same as publishing
           | when it comes to ChatGPT? How many people must ChatGPT
           | provide defamatory answers to before it becomes defamation?
        
             | wvenable wrote:
             | ChatGPT is a text generator. It's a tool. It's not
             | publishing anymore than MS Word is publishing.
             | 
             | You can certainly take that output of the tool and publish
             | it but then you become the publisher.
        
               | rootusrootus wrote:
               | Okay, what if you s/ChatGPT/BingChat?
        
               | catchnear4321 wrote:
               | Wait wait wait you're saying the operator is accountable
               | for their actions?
               | 
               | Just like ChatGPT was programmed to drill into a user any
               | time it picks up on being misused? Reminding the user
               | that they are ultimately responsible and liable for their
               | actions, including how they use the output?
               | 
               | From how some make it sound, you would think ChatGPT was
               | giving press conferences.
               | 
               | > oh no but what if it did?
               | 
               | Did it set itself up to do so? No? You found the liable
               | humans.
        
             | causality0 wrote:
             | Even if a text completion engine like GPT had any
             | responsibility for truthfulness, which it doesn't, there's
             | a disclaimer right there on the page you have to agree to
             | in order to use it. Trying to pin blame on ChatGPT for
             | defamation is like trying to sue Snapchat because its
             | filter put cat ears on you when you in fact do not wear cat
             | ears.
        
           | mdasen wrote:
           | As a counterpoint to this argument, the reason we have
           | libel/slander laws is that people hear something and don't
           | check if they're factual.
           | 
           | If I say, "danmaz74 eats babies," some people are going to
           | believe it and not check that you don't eat babies. That's
           | the problem with saying false things about people. Likewise,
           | if ChatGPT says false things about people, some people will
           | believe those false things - even if they don't publish
           | anything or repeat them. Those false beliefs then have
           | repercussions.
           | 
           | Section 230 doesn't eliminate libel. It just notes that the
           | libel is being done by the person writing the information -
           | ie. the user who posts it rather than the site that is merely
           | hosting the information. It notes that the user is self-
           | publishing the information.
           | 
           | In this case, ChatGPT is publishing this "information"
           | itself. If we hold that ChatGPT isn't responsible for what
           | they publish, we say that no one is liable for this
           | defamation. That's not what Section 230 does. Section 230
           | just notes who the publisher is - the user who wrote the
           | information. We are determining who is the person doing the
           | action. We could require sites to check everything. We could
           | require your ISP to inspect all packets to make sure they
           | don't contain libel. We could require your electric company
           | to make sure that devices connected to their network aren't
           | being used to send/receive libelous things. We don't because
           | that would be a bit absurd and because the actual fault lies
           | with a human being. In the case of ChatGPT, it seems like we
           | still need to be identifying a culprit.
           | 
           | Let's say that I start auto-generating news articles. I don't
           | check them (I don't even look at them) before they are
           | published on my site. This is the same mechanic as ChatGPT -
           | it's just auto-generated stuff. Therefore, I'm not liable
           | under your definition. It's up to the readers of my site to
           | decide to check this auto-generated stuff.
           | 
           | I agree that ChatGPT can be a useful tool. At the same time,
           | I'm not sure we want to create a system where people can
           | absolve themselves of liability by auto-generating stuff.
           | Even without intent, one can be guilty of libel. Maybe the
           | answer is that ChatGPT should be able to defend itself by
           | showing what sources caused that generation and that the
           | generation was reasonable. For example, a newspaper is
           | expected to carefully check multiple reasonable sources. If
           | ChatGPT can cite multiple reasonable sources (and it has
           | interpreted those sources correctly), then the blame can be
           | those sources. If ChatGPT cannot, then the blame might be on
           | ChatGPT.
           | 
           | As you note, ChatGPT can be useful - and the more useful it
           | gets, the more people will accept its statements as true. I
           | think simply saying "as long as you know what they are,"
           | feels like someone trying to say both things - they're
           | incredibly useful and completely trash! It's like when a
           | company tells shareholders "we have a stranglehold on the
           | market" and then tells the government "we have so much
           | competition we're struggling to survive." You can't have
           | both. You can be a novelty tool that you and everyone else
           | thinks of as a joke or you can have a serious tool where
           | libel is a serious issue. The Onion can publish satire
           | because it's a satire site. The NYTimes can't publish
           | something that sounds like a serious NYTimes article and then
           | claim, if sued, that it was actually a satire article and
           | that 100% of readers just read it as serious when it was
           | meant as satire.
           | 
           | If a significant part of our future starts coming from these
           | language models, we will have to ensure a certain standard of
           | care around what they say.
        
             | danmaz74 wrote:
             | > The Onion can publish satire because it's a satire site.
             | The NYTimes can't publish something that sounds like a
             | serious NYTimes article and then claim, if sued, that it
             | was actually a satire article and that 100% of readers just
             | read it as serious when it was meant as satire.
             | 
             | My point is that ChatGPT doesn't "publish" anything; it's
             | not a publication. It's a language model trained on text,
             | which produces text in response to one or more prompts.
             | Using that text is completely up to the user, and it should
             | never be used as a source of facts.
             | 
             | Being able to summarize facts finding specific sources
             | looks like a completely different kind of tool to me, one
             | which would be super useful, but not at all what a LLM in
             | itself is about.
        
               | bhawks wrote:
               | How does this not become AI powered citogenesis? [0]
               | 
               | Saying that it is solely the user's misuse of the tool
               | which is the problem minimizes how easy and how damaging
               | the misuse is. Eventually you hit the situation where
               | everyone is making false claims because transitively down
               | the graph a few users were using an LLM that made
               | baseless defaming statements and others cite those
               | publications as evidence for the defamation.
               | 
               | The creators of generative models need to have some skin
               | in the game around the correctness of statements made
               | about real people. Reputational damage cannot be an
               | externality for them. The current AI moment is basking in
               | the glory of how generally right generated text is, when
               | that text is wrong it needs to be their problem too.
               | 
               | 0: https://xkcd.com/978/
        
               | mhink wrote:
               | > The creators of generative models need to have some
               | skin in the game around the correctness of statements
               | made about real people.
               | 
               | Only insofar as they claim that the statements generated
               | by a model should be understood to be factual- and in my
               | experience, not only have I not seen any such claims,
               | I've seen vigorous insistence in the *opposite*
               | direction: that LLM output should *not* be considered
               | factual. OpenAI sticks this disclaimer right underneath
               | their input box: "ChatGPT may produce inaccurate
               | information about people, places, or facts".
               | 
               | > Eventually you hit the situation where everyone is
               | making false claims because transitively down the graph a
               | few users were using an LLM that made baseless defaming
               | statements and others cite those publications as evidence
               | for the defamation.
               | 
               | This is no different from any other gossip chain, except
               | for the fact that the originators used a tool to generate
               | the initial gossip.
        
             | josaka wrote:
             | Lawfare did a nice analysis of whether Section 230 shields
             | model hosts (likely no):
             | https://www.lawfareblog.com/section-230-wont-protect-
             | chatgpt Professor Volohk has a public draft article on
             | liability for output of AI models, with an emphasis on
             | defamation: https://www2.law.ucla.edu/volokh/ailibel.pdf
             | 
             | Both suggest that this is a real area of concern. Folks
             | have gotten complacent that Section 230 protects platform
             | providers, but it is not clear that the protection extends
             | to outputs from generative models, indeed one of the
             | Supreme Court Justices implied that it would not in oral
             | arguments for one of the recent Section 230 cases.
        
           | prox wrote:
           | I don't use LLM that doesn't source for that reason.
        
           | godelski wrote:
           | > If somebody asks ChatGPT to generate some accusation,
           | doesn't check if they're factual, and then publishes them
           | anywhere, then the responsibility lies with somebody.
           | 
           | It's quite impressive to me how many journalists just take
           | citations at their word. Isn't the point of journalism to
           | verify information?
           | 
           | OpenAI, if you're reading I got a innovative new algorithm
           | for reducing AI harm for fake citations
           | 
           | `if not(link_exists(generated_link)) print(f"[CITATION NOT
           | FOUND]") else print(generated_link)`
           | 
           | I have a more advanced algorithm that uses a small language
           | model to click links that exist and verifies direct quotes. I
           | call this model Regex.
           | 
           | Joking aside, I really do think the problem can be seriously
           | reduced in a simple manner, but also people really need to
           | check their sources.
           | 
           | Thou shalt verify sources, be they generated by the hand of
           | man or of artificial beings.
           | 
           | - John 3:16
        
         | dxbydt wrote:
         | From the horse's mouth -
         | 
         | ===== Q. is hackernews used as a training set for chat gpt ?
         | 
         | A. ...it is possible that some of the data from HackerNews and
         | other online sources may have been used in the training of my
         | machine learning algorithms.
         | 
         | Q. So it is possible that ChatGPT's training data can be
         | polluted ?
         | 
         | A. Yes, it is possible for ChatGPT's training data to be
         | polluted or biased. This is because ChatGPT learns from large
         | amounts of text data from the internet, which can contain
         | errors, misinformation, or biased perspectives.
         | 
         | Q. I have a concern. If a certain bad actor pollutes the
         | internet with incorrect information about me, and ChatGPT's
         | training set picks it up, what is my recourse ?
         | 
         | A. If incorrect information about you is published on the
         | internet and is picked up by ChatGPT's training set, it could
         | potentially lead to the AI generating inaccurate or biased
         | responses about you. If this happens, your recourse would
         | depend on the specific circumstances and the severity of the
         | issue.
         | 
         | One option would be to contact the developers of ChatGPT or
         | other relevant organizations to report the issue and request
         | that the incorrect information be removed or corrected.
         | However, this may not always be feasible or effective,
         | especially if the information has already been widely
         | disseminated. ======
         | 
         | > there's nothing you can do.
         | 
         | Yes, there's nothing you can do.
        
           | m3047 wrote:
           | I've seen the output where ChatGPT misrepresented its own
           | TOS... unless the person who publicly posted it on LinkedIn
           | was lying about asking ChatGPT...
        
           | IshKebab wrote:
           | You're asking an AI for facts in a thread that is explicitly
           | about how AI does not generate true facts?
        
         | prottog wrote:
         | > Can I sue the company? They're the ones who trained the
         | model, but there's no intent.
         | 
         | Negligent torts don't require intent.
        
           | WalterBright wrote:
           | Libel requires:
           | 
           | 1. knowing it was false
           | 
           | 2. publication
           | 
           | 3. damages
           | 
           | I am not a lawyer, this is not legal advice.
        
             | not_a_pipe wrote:
             | You don't need to know something was false, you just need
             | to be negligent in validating your sources. Otherwise you
             | could just make stuff up and say you didn't know it was
             | false.
        
             | NegativeK wrote:
             | As a non-lawyer: the creators of ChatGPT know that it'll
             | say false things frequently.
        
               | WalterBright wrote:
               | If the output of ChatGPT is not copyrightable because it
               | is not created by a human, then it should follow that a
               | human cannot be held accountable for what it generates.
        
               | michaelmrose wrote:
               | Neither copyright nor copyrightability has anything
               | whatsoever to do with any element of any cause of action
               | thus it is wholly orthogonal to whether anyone has been
               | wronged in any unrelated fashion.
               | 
               | Judges are liable to be old, not very hip to technology,
               | and incredibly suspicious of any argument that would
               | allow someone to opt out of traditional culpability by
               | adding with an AI to the end of a description of
               | traditional wrong doing.
        
               | posterboy wrote:
               | Yeah but no, the implicature still allows copyright
               | without culpability (edited: got that mixed up). The
               | assumption seems to be that the lack of intent in
               | creation, where it is unpredictable, would likewise imply
               | lack of intent to harm. But that doesn't say much.
        
               | flangola7 wrote:
               | Someone will have to be found accountable. What about
               | when we start hooking these systems up to machinery? If
               | AirlinerGPT kills 200 people and an NFL team, people will
               | absolutely not accept that there's no one responsible.
               | There will be congressional subpoenas, sweeping new laws
               | will be passed. It's best we start thinking ahead of time
               | what we want those laws to be.
        
               | posterboy wrote:
               | > If the output of ChatGPT is not copyrightable because
               | it is not created by a human,
               | 
               | Isn't compiler output copyrightable?
        
               | WalterBright wrote:
               | > Isn't compiler output copyrightable?
               | 
               | Not by the compiler vendor.
        
               | ben_w wrote:
               | Given how many critics say "GPT is just a better
               | autocomplete", would autocomplete for "The most
               | incompetent programmer in the world is..." result in
               | legal action if any of the options was a name?
        
             | michaelmrose wrote:
             | Reckless disregard for the truth is as good as it being
             | knowingly false. If I said WalterBright is a villain
             | planning to nuke the world from his Volcano lair I don't
             | KNOW its false but it probably is!
        
             | WakoMan12 wrote:
             | i am a lawyer this is legal advice: kill yourself to avoid
             | taxes
        
         | WakoMan12 wrote:
         | CUT MY LIFE INTO PIECES
        
         | thro1 wrote:
         | It's not about suing after. It's about not making it happen. If
         | something wrong can't be undone it shall not be done at first.
        
         | gmarx wrote:
         | my lay opinion (subject to change based on coming real world
         | cases) is that if you ask the question and it libels someone
         | there is no damage. If you publish it without checking if it is
         | true, there are possible damages and the human who published it
         | to the world is liable.
         | 
         | If a company starts a news service based on such content and
         | doesn't fact check, they should be liable.
        
           | alpos wrote:
           | Probably add one caveat to the first case, if the bot
           | reliably produces libels the same person when various users
           | ask the same or similar questions, then there is damage and
           | most likely whoever is running the bot is liable.
        
         | creatonez wrote:
         | Your chance of successfully suing anyone in a case like this is
         | zero, unless you are in a country with a significantly lower
         | standard for defamation than the US.
        
           | smt88 wrote:
           | I suspect that in the likely event that SCOTUS weakens
           | Section 230, this will no longer be true.
        
             | creatonez wrote:
             | While Section 230 weakening could lead to more censorship
             | of the internet, it has nothing to do with defamation
             | lawsuits, where material damages need to be proven in a
             | court.
        
               | dragonwriter wrote:
               | > While Section 230 weakening could lead to more
               | censorship of the internet, it has nothing to do with
               | defamation lawsuits,
               | 
               | Defamation is a central target if the immunity from
               | liability as a pbilisher on Section 230, so as a general
               | statement it would be harder to be _more_ wrong than
               | this.
        
         | glitchc wrote:
         | Intent is not required to establish harm. This is called
         | negligence in tort law.
        
           | thomastjeffery wrote:
           | Inverse intent has a way of disestablishing negligence.
           | 
           | If you believe something that is false, and do harm by
           | expressing that belief, then you are still protected by your
           | right to religious liberty.
           | 
           | In this case, the belief is that coherent language is always
           | constructed using symbolic logic and intent. LLMs do not
           | categorize symbols, perform logic, or follow explicit intent;
           | yet they are able to generate coherent language.
           | 
           | GPT's authors have utterly failed to present this
           | distinction. Was that failure intentional, or the result of
           | their own belief? The result is as ambiguous as the
           | "truthiness" of GPT's output: it can't be reasonably used as
           | evidence for intent.
        
         | bonzini wrote:
         | > either creators of LLMs are intrinsically negligent with
         | regard to facts
         | 
         | It honestly seems to me that they are. There does not seem to
         | be a lot of effort on part of the LLM creators to stop them
         | from producing bullshit.
        
         | jyxent wrote:
         | There is a person in Australia who is threatening to sue
         | OpenAI. He's given them 28 days to fix the errors in ChatGPTs
         | description about himself. If the company is notified and
         | doesn't fix it, maybe that can be considered intent.
         | 
         | https://www.reuters.com/technology/australian-mayor-readies-...
        
         | nuodag wrote:
         | > if they don't repeat the claim.
         | 
         | well seems they did, else we wouldn't have heard about it
        
       | WirelessGigabit wrote:
       | I think the problem already existed, but ChatGPT made it worse.
       | 
       | Under most circumstances you cannot sue a newspaper for what they
       | publish.
       | 
       | But what they publish used to be fact-checked to a certain
       | extend, and actually believed to be correct at the time of
       | publishing.
       | 
       | So if I'm arrested because my IP showed up in some hack, they can
       | publish that.
       | 
       | Unfortunately they have no obligation to publish a follow up that
       | all charges have been dropped because someone fat-fingered the
       | request to the ISP and missed it by a digit.
       | 
       | So that would be on the internet forever.
       | 
       | Now with ChatGPT is worse. At least with the first one if someone
       | asks I can objectively show that whatever happened was a mistake,
       | and I have the documents to show it.
       | 
       | ChatGPT printing something makes me unable to come up with even
       | counter proof.
        
         | pjc50 wrote:
         | > Under most circumstances you cannot sue a newspaper for what
         | they publish.
         | 
         | Jurisdiction-dependent, and the US is an outlier here. In the
         | UK newspapers have to be fairly careful about libel, and this
         | has a "chilling effect" on what gets published.
        
       | [deleted]
        
       | raverbashing wrote:
       | So it seems that there is usage of personal data on training
       | ChatGPT then right?
       | 
       | Going by the discussions in other topics it seemed this was
       | dismissed outright
        
       ___________________________________________________________________
       (page generated 2023-04-06 23:00 UTC)