Major Newspaper Gets Object Lesson on Brave New World of ChatGPT [1]

Date: 2023-04-06 (DK library, uploaded by jamess)

I subscribe to The Guardian, not for access (it's free to read) but out of gratitude and a desire to support some of the best journalism going, including excellent international coverage. The paper reported this morning:

"Last month, one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. … The reporter couldn't remember writing the specific piece, but the headline certainly sounded like something they would have written. … Despite the detailed records we keep of all our content, and especially around deletions or legal issues, they could find no trace of its existence. Why? Because it had never been written."

Surprise! The researcher had been using ChatGPT, which had fabricated a plausible-sounding reference from a trusted source. Were it not for the conscientious effort that this particular (human) user made to verify it, and the extra time put in by (human) Guardian employees to try to track it down, the falsehood could have passed as factual. In the event it only wasted a number of people's time--and pointed up the concern about growing reliance on an automated system whose algorithms are known to produce plausible-appearing bullshit with some regularity.

And this was not a one-off for the paper:

"Two days ago our archives team was contacted by a student asking about another missing article from a named journalist. There was again no trace of the article in our systems. The source? ChatGPT."

Guardian writer Chris Moran (job title: head of "editorial innovation") notes that the utility, with all its known failures--more than glitches; I must call them abject failures--is already spreading through the infosphere faster than TikTok did. Faster than anyone expected, like an epidemic of covid (my simile, not Moran's). In one survey, nearly 90% of U.S. high school students had already used it for homework assignments. Google and Microsoft are incorporating it into their platforms.

And The Guardian? Despite this experience and some promises of caution, the bottom line is that they are going there too. The decision was evidently made some time ago, in fact:

"[W]e've created a working group and small engineering team to focus on learning about the technology, considering the public policy and IP questions around it, listening to academics and practitioners, talking to other organisations, consulting and training our staff, and exploring safely and responsibly how the technology performs when applied to journalistic use. … In the next few weeks we'll be publishing a clear and concise explanation of how we plan to employ generative AI."

There's a saying about counterfeiting: bad money drives out good. The same has proven true of information. It was true when Mark Twain, reportedly, first quipped that "a lie can go ten times around the world while the truth is still lacing up its boots." (Twain, or is that one of those pesky fake attributions? Let's ask ChatGPT! 🤪)
We've seen it happening with the algorithms on social media, well before ChatGPT. We've seen QAnon and Russian disinformation race around the world In 30 Days Or Less Or Your Money Back, to become as apparently ineradicable as kudzu. That was nothing. Bad information is now poised to go nuclear. And drill into the DNA, like fallout.

Moran also writes that his newspaper's exploration of "generative AI," and its (apparently irreversible) intent to replace at least some actual researchers and reporters (BTW, what happens when an AI-incorporating research application consults an AI-incorporating database "out there"? Synergistic salad?)... Sorry, lost the thread there for a sec. Moran writes that all this has caused a good deal of rumination

"...more and more on what journalism is for, and what makes it valuable."

Moran doesn't answer that. Perhaps there is internal disagreement at the newspaper about the proper meaning of "valuable." But at least The Guardian had the luck to get a timely object lesson and has paused for thought. And I don't mean to disparage Moran. He's supplied a valuable essay, at least in part, and I'd encourage reading it through.

What it makes me think of, however, is the now plaintive-sounding question once posed by farmer-essayist Wendell Berry: "What Are People For?"

Tools were made, and born were hands,
Every farmer understands.
--William Blake

Will we have "generative AI" writing articles for AI to mine for more AI-generated articles, which AI will mine... and doesn't the whole enterprise lose its point? The only people who might think this a good idea are--forgive me, but IMO--severely lacking in self-awareness and/or appreciation for the unique and extraordinary distinctiveness of human language, entwined in human thought and human experience, a gift inborn whose secrets we have barely begun to explore.

Now of course there is an easy and automatic response from enthusiasts to any alarm over innovation: "'Twas ever thus; new technology always gets catastrophized as the end of the world, and it never turns out that way." I'd just point out a couple of things. First: new technologies of the past have included such very mixed blessings as radium watch dials, asbestos fireproofing, and thalidomide, to name only a few. Second, and more important: this is exactly how climate deniers deny. "There have always been extreme weather events in the past, and sometimes a lot of them in a short time. There have always been temperature variations, some of them extreme. People have always catastrophized about the world coming to an end because of natural disasters, eclipses, comets, other nonsense, and it never does." Watersheds are not always obvious to everybody.

Poe's protagonist in "A Descent into the Maelstrom" found himself helpless in the grip of a superhuman force. That's how things feel to this person right now, the rush to cyber-everything included. All Poe's protagonist had the power to do was watch, learn, jump at the right moment, and hand off to Fate. We might do a bit more. At least a little. For instance--a policy here on DK? At least any "generative AI" contributions here should be so labeled?

---

[1] Url: https://www.dailykos.com/stories/2023/4/6/2162413/-Major-Newspaper-Gets-Object-Lesson-on-Brave-New-World-of-ChatGPT