The third session of Challenges and Safeguards against AI-Generated Disinformation will discuss "The Evolving Landscape of Misinformation & Strategies for Mitigation" with Chris Bregler and Sergey Sanovich on Wednesday, October 30, 2024 at 4:00 pm in Annenberg Conference Room, Shultz Building.

Abstract: In this talk, I will provide an overview of trends we've observed in online misinformation by analyzing years of online fact-checking data from the International Fact Checking Network (IFCN). The focus will be on recent trends in visual misinformation, covering new generative AI-based methods, "cheap fakes," and misleading context attacks. I'll also share some surprising statistics that challenge common beliefs about the most prevalent types of misinformation. Beyond identifying trends, I'll discuss mitigation strategies, including how we can improve information literacy to help people identify misinformation, the role AI can play in detecting manipulated content, and how provenance-based frameworks can help verify information and build trust online.


ABOUT THE SPEAKERS

Chris Bregler is a Director / Principal Scientist at Google. He leads teams in Misinfo Research, Media Integrity, DeepFakes, Cheapfakes, VFX Tech, AR/VR, and Human Pose and Face Analysis & Synthesis at Google AI, with launches in YouTube, DayDream, Photos, JigSaw, and other product areas. He was a Professor at New York University and Stanford University and has worked for several companies including Hewlett Packard, Interval, Disney Feature Animation, LucasFilm's ILM, Facebook's Oculus, and the New York Times. He received his M.S. and Ph.D. in Computer Science from U.C. Berkeley and his Diplom from Karlsruhe University. In 2016 he received an Academy Award in the Oscars Science and Technology category. He has been named Stanford Joyce Faculty Fellow, Terman Fellow, and Sloan Research Fellow. He received the Olympus Prize for achievements in computer vision and pattern recognition and was awarded the IEEE Longuet-Higgins Prize for "Fundamental Contributions in Computer Vision that have withstood the test of time". His work has resulted in numerous awards from the National Science Foundation, Sloan Foundation, Packard Foundation, Electronic Arts, Microsoft, Google, U.S. Navy, U.S. Air Force, and other sources. He was the executive producer of squid-ball.com, which required building the world's largest real-time motion capture volume, and a massive multi-player motion game that holds several world records from The Motion Capture Society. He has been active in the visual effects industry, for example as the lead developer of ILM's Multitrack system, which has been used in many feature film productions, including Avatar, Avengers, Noah, Star Trek, and Star Wars.

Sergey Sanovich is a Hoover Fellow at the Hoover Institution. Before joining the Hoover Institution, Sergey Sanovich was a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. Sanovich received his PhD in political science from New York University and continues his affiliation with its Center for Social Media and Politics. His research is focused on disinformation and social media platform governance; online censorship and propaganda by authoritarian regimes; and elections and partisanship in information autocracies. His work has been published in the American Political Science Review, Comparative Politics, Research & Politics, and Big Data, and as a lead chapter in an edited volume on disinformation from Oxford University Press. Sanovich has also contributed to several policy reports, particularly focusing on protection from disinformation, including "Securing American Elections," issued by the Stanford Cyber Policy Center at its launch.


ABOUT THE SERIES

Distinguishing between human- and AI-generated content is already an important enough problem in multiple domains – from social media moderation to education – that there is a rapidly growing body of empirical research on AI detection and an equally rapidly growing industry of commercial and noncommercial applications. But will current tools survive the next generation of LLMs, including open models and those focused specifically on bypassing detection? What about the generation after that? Cutting-edge research and presentations from leading industry professionals in this series will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.

Upcoming Events

Wednesday, December 4, 2024
Watermarking for AI-Generated Content
The fifth session of Challenges and Safeguards against AI-Generated Disinformation will discuss Watermarking for AI-Generated Content with Xuandong… HHMB 160, Herbert Hoover Memorial Building
Wednesday, December 4, 2024
Health Care Policy In The New Trump Administration And Congress
The Hoover Institution in DC hosts Ideas Uncorked on Wednesday, December 4, 2024 from 4:45–6:00 pm ET. The event will feature Hoover Institution… Hoover Institution in DC
Friday, December 6, 2024
Emerging Technology And The Economy
The Hoover Institution cordially invites you to attend a conversation with President and CEO of the Federal Reserve Bank of San Francisco, Mary C.… Shultz Auditorium, George P. Shultz Building