AI & Deep Fakes: An Important Issue For Congress [1]

Date: 2023-10-25

The Republican struggle to find a new speaker has impacted the House's ability to fight deepfakes. The fight for the speakership forced the House Oversight Committee's Subcommittee on Cybersecurity, Information Technology, and Government Innovation to postpone its Tuesday hearing on how to promote the development of technology to detect and prevent deepfakes. This comes at a time of two major wars, the Russo-Ukrainian War and the Israel-Hamas War, and with the United States entering an election cycle. The incentives for bad actors to develop deepfakes to swing American opinion are great, but the policies to prevent this from happening are absent.

Deepfakes are made with deep learning tools: a model is fed images, whether photos or videos, and produces fake but convincing footage of things that simply never happened. Whether it's a video of Barack Obama calling Donald Trump a "complete dipshit," or Jon Snow apologizing for Game of Thrones' terrible ending, deepfakes are remarkably convincing. You can put words into someone's mouth, insert yourself into a movie, have yourself dancing like you belong in Beyoncé's backup dance troupe, or whatever else you imagine.

Deepfakes have also been used to create porn. In fact, 96% of deepfakes are pornographic, and they have been used to create revenge porn and porn videos featuring celebrities. In short, most of their use so far has been as a weapon against women. This is why the subcommittee's work is so important: it is fighting disinformation as well as the exploitation of women, in an era when most people get their news from the internet. And although advances in deep learning have made it easy for basically anyone to create a deepfake, those very advances have also made it possible to verify images and audio.

We are already seeing how deepfakes can poison discourse. In both Israel and the Gaza Strip, there has been controversy around various news items, with people claiming that they are fake. Without credible verification tools, trust in news will continue to decline, and with it, trust in our institutions. Although evidence suggests that most images from that conflict are not deepfakes, the technology has already made people deeply suspicious, even when what they are seeing is real.

There have been some initiatives to fight this. Adobe's Content Authenticity Initiative is one example: it gives content creators a tool for attaching provenance information and a digital signature to their work, so viewers can trust that the content is real. However, that initiative cannot possibly account for all the content on the internet; it has just 2,000 content creators so far, and it is not clear that enough people will demand content that uses such tools. This, again, is why the subcommittee's work is so important. The solution may not come from the bottom up alone; it may also need to come from regulation or incentives. For example, provenance information is usually stripped from content when it is posted to platforms. Incentives for platforms to preserve that metadata would help assure people of the veracity of the content they see.
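To make the idea concrete, here is a minimal sketch in Python, using the widely available cryptography library, of the general principle behind signed content: a creator signs an image with a private key, and anyone holding the matching public key can check that the bytes have not been altered since signing. This is an illustration only, not Adobe's actual implementation; the Content Authenticity Initiative's credentials embed far richer provenance metadata, which is exactly the kind of metadata platforms tend to strip.

```python
# Sketch of content signing and verification with ECDSA.
# Illustrative only; real provenance systems (e.g. the Content Authenticity
# Initiative) attach structured manifests, not just a bare signature.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def sign_image(image_bytes: bytes, private_key: ec.EllipticCurvePrivateKey) -> bytes:
    """Creator side: sign the image bytes (hashed internally with SHA-256)."""
    return private_key.sign(image_bytes, ec.ECDSA(hashes.SHA256()))


def verify_image(image_bytes: bytes, signature: bytes,
                 public_key: ec.EllipticCurvePublicKey) -> bool:
    """Viewer side: check that the image still matches the creator's signature."""
    try:
        public_key.verify(signature, image_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = ec.generate_private_key(ec.SECP256R1())
    original = b"...raw image bytes..."  # placeholder for real image data

    sig = sign_image(original, key)
    print(verify_image(original, sig, key.public_key()))            # True
    print(verify_image(original + b"edit", sig, key.public_key()))  # False: any alteration breaks the check
```

The point of the sketch is the last line: even a one-byte edit invalidates the signature, which is what makes signed provenance useful against manipulated media, provided the signature and its metadata survive the trip through social platforms.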
Until the subcommittee can complete its work, we are unlikely to get the kinds of incentives and regulations needed to deter deepfakes and verify content.

[1] https://www.dailykos.com/stories/2023/10/25/2201596/-AI-Deep-Fakes-An-Important-Issue-For-Congress?pm_campaign=front_page&pm_source=more_community&pm_medium=web