Challenges and Safeguards against AI-Generated Disinformation

Distinguishing between human- and AI-generated content is already an important enough problem in multiple domains – from social media moderation to education – that there is a quickly growing body of empirical research on AI detection and an equally fast-growing industry of commercial and noncommercial applications. But will current tools survive the next generation of LLMs, including open models and those designed specifically to bypass detection? What about the generation after that? The cutting-edge research and presentations from leading industry professionals in this series will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.

Sessions
Watermarking for AI-Generated Content

The fifth session of Challenges and Safeguards against AI-Generated Disinformation will discuss Watermarking for AI-Generated Content with Xuandong Zhao and Sergey Sanovich on Wednesday, December 4, 2024 at 4:00 pm in HHMB 160, Herbert Hoover Memorial Building.

Language Model Detectors Are Easily Optimized Against

The fourth session of Challenges and Safeguards against AI-Generated Disinformation will discuss Language Model Detectors Are Easily Optimized Against with Charlotte Nicks and Sergey Sanovich on Thursday, November 21, 2024 at 4:00 pm in Annenberg Conference Room, Shultz Building.

The Evolving Landscape of Misinformation & Strategies for Mitigation

The third session of Challenges and Safeguards against AI-Generated Disinformation will discuss The Evolving Landscape of Misinformation & Strategies for Mitigation with Chris Bregler and Sergey Sanovich on Wednesday, October 30, 2024 at 4:00 pm in Annenberg Conference Room, Shultz Building.

Detecting Text Ghostwritten by Large Language Models

The second session of Challenges and Safeguards against AI-Generated Disinformation will discuss Detecting Text Ghostwritten by Large Language Models with Nicholas Tomlin and Sergey Sanovich on Wednesday, October 16, 2024 at 4:00 pm in Annenberg Conference Room, Shultz Building.

DetectGPT: Zero-Shot Machine-Generated Text Detection

The first session of Challenges and Safeguards against AI-Generated Disinformation discussed DetectGPT: Zero-Shot Machine-Generated Text Detection with Eric Anthony Mitchell and Sergey Sanovich on Thursday, October 10, 2024 at 4:00 pm in Annenberg Conference Room, Shultz Building.

CO-SPONSORED BY

Hoover Institution, Center for International Security and Cooperation, Cyber Policy Center