Fact-checkers urge collaboration, caution in using artificial intelligence tools [1]

By Angela Fu | June 27, 2024

SARAJEVO, Bosnia and Herzegovina — Though the use of artificial intelligence in journalism has attracted skeptics, some journalists say discerning judgment and a collaborative approach can allow fact-checkers to avoid the technology’s pitfalls and become more efficient.

The key, said International Center for Journalists Knight fellow Nikita Roy, is to restrict how generative AI is used in journalism. Fact-checkers should use the tools for “language tasks” like drafting headlines or translating stories, not “knowledge tasks” like answering readers’ questions with a chatbot. She and a panel of fact-checkers from around the world offered examples of such usage Thursday at GlobalFact 11, an annual fact-checking summit hosted by Poynter’s International Fact-Checking Network.

“We really owe it to our audience to be one of the most informed citizens on AI because this is having a profound impact on every single industry, as well as the information ecosystem,” said Roy, one of the conference’s keynote speakers. “If we use it responsibly and ethically, it has the potential to streamline workflows and enhance productivity. … Every single minute misinformation is spread online and we delay in getting our fact checks out, that’s another second that the information landscape is being polluted.”

Fact-checkers are fighting a changing information landscape, one driven both by emerging technologies like generative AI and by the whims of major tech companies. Though the low barrier to entry for generative AI means bad actors can easily disseminate mis- and disinformation, it also means fact-checkers can build tools without acquiring specialized skills. Two years ago, building a claim detection system would have required the help of a data scientist, said Newtral chief technology officer Rubén Míguez Pérez. Today, anyone can build one by providing a prompt to ChatGPT (a minimal sketch of that approach appears below).

“The power of generative AI is democratizing the access that people — regular people, not data scientists, not even programmers — are going to have (to) this kind of technologies,” Míguez Pérez said.

In a 2023 survey of IFCN signatories that drew responses from 137 fact-checking organizations, more than half said they use generative AI to support early research. AI can help outlets identify important, harder-to-find claims to fact-check, said Andrew Dudfield, the head of AI at Full Fact.

But AI can’t fact-check for journalists, Dudfield said. In reviewing past fact checks from his own organization, Dudfield found that the vast majority involved “brand new information.” Fact-checkers had to consult experts and cross-reference sources to produce those stories. AI tools, which draw upon existing knowledge sources, can’t do that.
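To make Míguez Pérez’s point concrete, here is a minimal sketch of prompt-based claim detection: a single instruction to a general-purpose chat model, sent through the OpenAI Python client. The model name, prompt wording, and helper function are illustrative assumptions for this article, not anything Newtral has published.

```python
# Claim-detection sketch: one prompt to a general-purpose LLM, no data
# science required. Assumes the `openai` package is installed and an
# OPENAI_API_KEY environment variable is set; model and prompt are
# illustrative choices, not a published Newtral system.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "You are a claim-detection assistant for fact-checkers. "
    "From the text below, list every factual claim that could be "
    "checked against evidence, one per line. Ignore opinions, "
    "questions, and predictions.\n\nText:\n{text}"
)

def detect_claims(text: str) -> list[str]:
    """Return checkable factual claims found in `text`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,  # keep the extraction as deterministic as possible
    )
    lines = response.choices[0].message.content.splitlines()
    # Strip list bullets the model may add, drop empty lines.
    return [line.strip("-• ").strip() for line in lines if line.strip()]

if __name__ == "__main__":
    transcript = "Unemployment fell to 3.4% last year, the lowest in history."
    for claim in detect_claims(transcript):
        print(claim)
```

A newsroom could run a speech transcript or social media feed through a function like this to surface candidate claims, leaving the actual verification — consulting experts, cross-referencing sources — to human fact-checkers, as Dudfield notes above.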
Other language tasks fact-checkers can use AI for include summarizing PDFs, extracting information from videos and photos, converting articles to videos and repurposing long videos into shorter ones, Roy said. AI can also make content more accessible for audiences through tasks like generating alt text for images.

One of the biggest concerns journalists have with generative AI tools is “hallucinations.” Because these tools are probabilistic and essentially make predictions based on the data they were trained on, they are liable to generate nonsensical or false information.

For fact-checkers interested in using AI for knowledge tasks, like building chatbots, Factly Media & Research founder and CEO Rakesh Dubbudu advised curating the datasets those tools draw on. For example, his team built a large language model application that ran off a database of press releases from the Indian government. Limiting the pool of knowledge the AI tool drew from largely solved the problem of hallucinations, Dubbudu said.

Generative AI tools that don’t use curated datasets can also pose problems if they regurgitate copyrighted materials. To avoid this, Dubbudu suggested that fact-checking organizations use their own materials as the database when creating their first AI tool (a sketch of this curated-corpus approach appears at the end of the story).

“A lot of news agencies today, they have hundreds of thousands of articles,” Dubbudu said. “A lot of us could start by converting those knowledge sources into chatbots because that is your own proprietary data. There is no question of hallucination. There is no question of copyright citations.”

Tech companies and other organizations will continue to develop AI tools, regardless of whether fact-checkers participate themselves. The result can be problematic, Lupa founder Cristina Tardáguila warned. She found, for example, a fact-checking chatbot that is likely run by a programming expert from Russia — “a country that punishes journalists.”

Citing past fact-checking collaborations like the CoronaVirusFacts Alliance, Tardáguila urged fact-checkers to work together to build their own tools.

“When dealing with this new, and yet to grow, devil of AI, we need to be together,” Tardáguila said. “We have to have a real group, a tiny community, a representative group that is leading the conversation with tech companies.”

Angela Fu is a reporter for Poynter. She can be reached at afu@poynter.org or on Twitter.

[1] https://www.poynter.org/ifcn/2024/how-fact-checkers-journalists-use-ai/
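The curated-corpus chatbot Dubbudu describes matches what engineers call retrieval-augmented generation: the model is instructed to answer only from documents the newsroom supplies, which is what keeps hallucination in check. The sketch below is a minimal illustration under that assumption; the two-document corpus, word-overlap retrieval, model name, and prompt are invented for the example and are not Factly’s actual system.

```python
# Curated-corpus chatbot sketch: the model may answer ONLY from documents
# the newsroom provides (e.g., its own published fact checks). Retrieval
# here is naive word overlap for brevity; a production system would use
# vector embeddings. Assumes the `openai` package and an OPENAI_API_KEY
# environment variable; corpus, model, and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

# Stand-in for a newsroom's proprietary archive.
CORPUS = [
    "Fact check (2024-03-02): The claim that city crime rose 40% is false; "
    "police data show a 3% decline.",
    "Fact check (2024-05-11): The viral flood photo is from 2019, not from "
    "this year's storm.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank archive documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    """Answer a reader's question strictly from the retrieved documents."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the fact checks below. If they do not cover "
        "the question, say the archive has no answer.\n\n"
        + context
        + "\n\nQuestion: " + question
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("Did crime in the city rise 40%?"))
```

Because every answer is grounded in the outlet’s own articles, the approach also sidesteps the copyright concerns Dubbudu raises: the chatbot only ever restates material the newsroom already owns.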