The fifth session of Challenges and Safeguards against AI-Generated Disinformation will discuss Watermarking for AI-Generated Content with Xuandong Zhao and Sergey Sanovich on Wednesday, December 4, 2024, at 4:00 pm in HHMB 160, Herbert Hoover Memorial Building.
Read the paper here.
ABOUT THE SPEAKERS
Xuandong Zhao is a Postdoctoral Researcher at UC Berkeley, working with Prof. Dawn Song as part of the RDI and BAIR. He earned his PhD in Computer Science from UC Santa Barbara, where he was advised by Prof. Yu-Xiang Wang and Prof. Lei Li. His research lies at the intersection of Machine Learning, Natural Language Processing, and AI Safety, with a particular focus on Responsible Generative AI. Xuandong has published papers at top-tier machine learning and natural language processing conferences, including NeurIPS, ICML, ICLR, ACL, EMNLP, and NAACL. He is a recipient of the Chancellor's Fellowship from UCSB and the AdvML Rising Star Award (2024). He has interned at Microsoft and Google and holds a B.S. in Computer Science from Zhejiang University (2019).
Sergey Sanovich is a Hoover Fellow at the Hoover Institution. Before joining the Hoover Institution, he was a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. Sanovich received his PhD in political science from New York University and continues his affiliation with its Center for Social Media and Politics. His research focuses on disinformation and social media platform governance; online censorship and propaganda by authoritarian regimes; and elections and partisanship in information autocracies. His work has been published in the American Political Science Review, Comparative Politics, Research & Politics, and Big Data, and as the lead chapter in an edited volume on disinformation from Oxford University Press. Sanovich has also contributed to several policy reports focused on protection from disinformation, including “Securing American Elections,” issued by the Stanford Cyber Policy Center at its launch.
ABOUT THE SERIES
Distinguishing between human- and AI-generated content is already a pressing problem in multiple domains, from social media moderation to education, and it has given rise to a rapidly growing body of empirical research on AI detection and an equally fast-growing industry of commercial and noncommercial detection tools. But will current tools survive the next generation of LLMs, including open models and models built specifically to bypass detection? What about the generation after that? Cutting-edge research presentations in this series, along with talks from leading industry professionals, will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.