(C) Poynter Institute. This story was originally published by Poynter Institute and is unaltered.

Structural problems with tech platforms prevent fact-checkers from focusing on harm and virality

By Baybars Örsek
Date: 2024-06-24 09:00:50+00:00

Baybars Örsek is the managing director of Logically Facts, an Ireland-based global fact-checking organization and a signatory to the IFCN's Code of Principles. From 2019 to 2022, he was director of the International Fact-Checking Network.

In today's digital era, and especially in a year of elections worldwide, misinformation poses a significant threat. Social media platforms, while serving as vital conduits for information, have become breeding grounds for false and harmful content. As fact-checkers, we sift through this chaos, identify falsehoods and deliver high-quality information to the public. However, the sheer volume and complexity of multimodal content (videos, images and text) have made this task increasingly challenging, and our current models of working with platforms are not making any of it easier.

The rise of generative AI has compounded these challenges, enabling the creation of deceptive content at an unprecedented scale. AI-generated videos and images can be indistinguishable from genuine ones, making it even harder for fact-checkers to identify and debunk falsehoods. This technological advancement has outpaced traditional fact-checking methods. All of this is happening as some platforms scale back their integrity and authenticity efforts, and others struggle to get started.
Critics of fact-checking have worked to discredit it, and even some fair-minded observers call our efforts a failure. Some of them have a point. Despite platforms' commitments to fighting false information and supporting fact-checking efforts, many of those efforts fall short in several key areas:

1. Misplaced focus and arbitrary metrics: Unpredictable and unreliable moderation queues, coupled with the lack of technology solutions built for fact-checkers, often mean that the focus of fact-checking and platform collaborations is misplaced: instead of addressing truly harmful misinformation, attention goes to less critical content. Although collaborations between some platforms and fact-checkers have provided a helpful start, they could be significantly improved through more timely interventions that center potential harm and virality. Such prioritization could also lead to partnerships that scale up and down based on current and emerging risks in specific information environments.

2. Lack of transparency: Social media companies frequently withhold critical data about the volume and spread of misinformation on their platforms. They limit access to content needed for research and intervention, and some withhold content and data access entirely. This lack of access and transparency prevents fact-checkers from understanding the full scope of the problem and effectively targeting the most dangerous content. Accompanying this issue is the urgent need for standardized metrics and benchmarks across platforms, which are critical for consistent and reliable assessments of misinformation. This approach should align with regional regulations and encourage platforms to adopt transparent practices, facilitating a more effective response to the global misinformation challenge.

3. Insufficient support for fact-checkers: Platforms and the fact-checking community should work hand in hand to build adequate tools and applicable technology.
This includes adequate resources to keep fact-checkers up to date with the evolving dynamics of online content and the tactics of those spreading false information. Without advanced technologies and access to data, fact-checkers are at a significant disadvantage in detecting and analyzing AI-generated multimodal content at scale.

4. Disappearing monitoring tools: Numerous monitoring tools that once played a crucial role in fact-checkers' discovery processes have been discontinued or limited. These tools were vital for tracking misinformation trends and identifying emerging falsehoods. Their disappearance has left fact-checkers with fewer resources to monitor and combat the spread of misinformation effectively.

Fact-checkers are the vanguard in the battle against misinformation. Our expertise, honed through years of scrutinizing and verifying information, is indispensable in sorting fact from fiction. As generative AI evolves, fact-checkers must maintain their domain expertise and command of local context. AI can assist in speeding up the process, but it cannot replace the nuanced understanding and critical thinking that fact-checkers bring to the table.

Current platform capabilities focus primarily on on-platform analysis and mitigation. As a result, platforms are often taken by surprise when new narratives and tactics, techniques and procedures (known as TTPs in the cybersecurity world) emerge and content goes viral seemingly out of nowhere. A cross-platform perspective and a collaborative infrastructure encompassing major and emerging platforms would allow for earlier detection and a more coordinated response to misinformation trends.

Having previously served as director of the International Fact-Checking Network, I have witnessed firsthand the challenges and triumphs of our community. Fact-checkers are on the front lines of this battle, and we need every tool at our disposal to best contribute to trust and safety efforts.
We must push for more meaningful action from our partners at platforms and from the newly established trust and safety teams at AI companies, advocating for more robust policies and greater transparency about the sheer scale of the problem.

To effectively combat misinformation, we need technologies that enable rapid and precise analysis and verification of content, facilitating near real-time responses to misinformation. This capability is crucial for staying ahead of the constantly evolving strategies used by purveyors of false information. Our goal should be to identify and correct the falsehoods with the greatest potential to cause harm and spread virally, and to foster a more informed and resilient information ecosystem.

This requires a collective effort: sharing insights, strategies and innovations that bolster our capabilities and enhance our impact. The upcoming Global Fact conference is an important opportunity to further this dialogue, providing a platform for us to collaborate and strengthen our resolve.

The progress of the last few years has provided early signs of success and many opportunities to learn and iterate. Make no mistake: We are losing the war against mis- and disinformation, but we have won some important battles. Learning from those battles and scaling collaboration across platforms and fact-checkers is our opportunity to turn the tide. By working together, we can drive positive change with social media platforms, especially those yet to benefit from the value of the fact-checking community, and ensure that our efforts are easy to adopt and directed toward the content with the greatest potential for harm and virality. Together, we can create a more informed and resilient society, capable of withstanding the onslaught of misinformation and emerging stronger in the face of adversity.
[END]

---

[1] Url: https://www.poynter.org/commentary/2024/structural-problems-with-tech-platforms-prevent-fact-checkers-from-focusing-on-harm-and-virality/

Published and (C) by Poynter Institute. Content appears here under this license: Creative Commons.