The seventh session of Challenges and Safeguards against AI-Generated Disinformation will discuss "Progress Towards Robust and Deployable AI Detectors in the Real World" with Liam Dugan and Sergey Sanovich on Thursday, February 20, 2025, at 4:00 pm in Room 160, Herbert Hoover Memorial Building (HHMB 160).


Progress Towards Robust and Deployable AI Detectors in the Real World

ABOUT THE SPEAKERS

Liam Dugan is a PhD candidate in Computer Science at the University of Pennsylvania, advised by Chris Callison-Burch. As a member of the Penn NLP group, he works on human and automated detection of AI-generated content, with a focus on the technical limitations and societal ramifications of detection tools. He is the lead author of the paper that released RAID, the largest and most challenging benchmark for comparing generated-text detectors. He also leads the team that maintains the Real or Fake Text website, where visitors can test how well they can detect generated text. He has interned at Roblox and Nvidia and has received research grants from Google Cloud and Roblox. His work has been featured by CNN, ABC News, and Vice, as well as in testimony to the U.S. Congress.

Sergey Sanovich is a Hoover Fellow at the Hoover Institution. Before joining the Hoover Institution, he was a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. Sanovich received his PhD in political science from New York University and remains affiliated with its Center for Social Media and Politics. His research focuses on online censorship and propaganda by authoritarian regimes, including their use of AI, as well as on Russian foreign policy, domestic politics, and information warfare against Ukraine. His work has been published in the American Political Science Review, Comparative Politics, Research & Politics, and Big Data, and as a lead chapter in an edited volume on disinformation from Oxford University Press. Sanovich has also contributed to several policy reports on protection from disinformation, including “Securing American Elections,” issued by the Stanford Cyber Policy Center at its launch.


ABOUT THE SERIES

Distinguishing between human- and AI-generated content is already an important problem in multiple domains, from social media moderation to education, and it has produced a rapidly growing body of empirical research on AI detection along with an equally fast-growing industry of commercial and noncommercial detection tools. But will current tools survive the next generation of LLMs, including open models and models tuned specifically to bypass detection? What about the generation after that? The cutting-edge research and presentations from leading industry professionals featured in this series will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.

Upcoming Events

Thursday, February 13, 2025
Fragmented Visions: Civil War China in Lost Films by American Jesuit Missionaries, 1947–1948
The Hoover Institution Library & Archives and the Program on the US, China, and the World invite you to a film screening, Fragmented Visions:… Shultz Auditorium, George P. Shultz Building
Wednesday, February 19, 2025 10:00 AM PT
Tested: Why Conservative Students Get the Most Out of Liberal Education | Reimagining American Institutions
The fifth session discusses Tested: Why Conservative Students Get the Most out of Liberal Education with Lauren A. Wright and Brandice Canes-Wrone on…
Thursday, March 6, 2025
The Chinese Economy in the Long Run
The Hoover Institution hosts The Chinese Economy in the Long Run on March 6–7, 2025. Hoover Institution, Stanford University