Hoover Institution (Stanford, CA) — Some commentators fear that social media and generative AI drive polarization and misinformation online, while others hope those same technologies might counter polarization and help restore public trust. Scholars weighed both possibilities at a recent gathering at the Hoover Institution.
The Social Media and Democratic Practice conference on March 17, 2025, examined how social media platforms and AI affect public discourse and democracy, for better and for worse. Earlier studies of legacy platforms such as Facebook, Twitter (now X), and YouTube found little evidence that explicitly political content on social media produced harmful effects. Filter bubbles were largely nonexistent, and misinformation and disinformation had little or no discernible effect. The reason was simple: most people pay little attention to politics on social media and are therefore largely insulated from its potential effects.
In recent years, however, new platforms have proliferated. While not explicitly political, many of these may have important indirect political effects, as with Joe Rogan’s 2024 podcast conversation with Donald Trump, which has reportedly drawn more than 50 million downloads. In addition, nonpolitical content on legacy social media (such as vaccine skepticism) can have political implications. So far, scant research has addressed these new dimensions of the social media universe.
Senior Fellow Morris P. Fiorina, who organized the event, said the gathering was aimed at encouraging research about the ways new developments in social media and other online applications help and hinder democratic engagement and practice.
“Today is a first step in measuring impact and reach of political content on social media, something academics have not paid enough attention to in recent years,” Fiorina said.
The conference was made possible by Hoover’s Center for Revitalizing American Institutions (RAI).
Research presented at the event ranged from how political campaigns can use apolitical social media influencers to reach low-engagement voters to how Chinese propaganda finds its way into large language model training data.
Two of the presentations examined the use of AI to counter pervasive conspiracy theories and the failure of fact-checking efforts on Facebook to prevent vaccine skepticism from spreading online.
Tom Costello, an assistant professor at American University in Washington, DC, spearheaded a study that examined the efficacy of AI in debunking conspiracy theories. The research revealed that traditional methods of debunking were at most about 10 percent effective. Costello highlighted the challenge of mapping every potential claim and countering it effectively, suggesting that advanced AI could address this by generating persuasive counterarguments.
The study involved 761 participants who believed in various conspiracy theories, including theories about the 9/11 attacks, the assassination of US President John F. Kennedy, and COVID-19. Participants engaged with an AI agent known as DebunkBot, designed to summarize and counter their beliefs. Results showed significant declines in the strength of participants’ beliefs: an initial 40 percent reduction, with follow-ups revealing a sustained 20 percent decline two months later.
This finding suggests that agents such as DebunkBot could serve as a lower-cost, higher-yield way to lift people out of errant, conspiracy-based worldviews.
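Though the study’s own implementation was not presented at the conference, a minimal sketch can illustrate how a DebunkBot-style dialogue loop might be wired together. Everything below is an assumption for illustration: the model name, the system prompt, and the `debunk_session` helper are hypothetical, and only the API calls, which follow the standard OpenAI Python SDK, are real.

```python
# Illustrative sketch only: NOT the study's actual code. The model choice,
# prompts, and dialogue structure are assumptions; the API calls use the
# standard OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debunk_session(belief_statement: str, turns: int = 3) -> None:
    """Hold a short persuasion dialogue targeting one stated belief."""
    messages = [
        {"role": "system",
         "content": ("You are a careful, factual assistant. Summarize the "
                     "user's stated belief, then counter it politely with "
                     "specific, verifiable evidence, without ridicule.")},
        {"role": "user", "content": belief_statement},
    ]
    for _ in range(turns):
        reply = client.chat.completions.create(
            model="gpt-4o",  # assumption; any capable chat model would do
            messages=messages,
        ).choices[0].message.content
        print(f"\nAI: {reply}\n")
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": input("You: ")})

if __name__ == "__main__":
    debunk_session("The moon landing was staged in a film studio.")
```

A study like Costello’s would additionally elicit a numeric belief rating before and after the exchange, which is how declines such as the reported 40 percent and 20 percent figures could be quantified.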
Separately, Jennifer Allen, an incoming assistant professor at NYU, investigated the impact of misinformation and vaccine skepticism on Facebook, particularly after 2016. Allen’s findings highlighted the significant influence of social media on vaccine uptake in the United States. Despite measures like Meta’s third-party fact-checking program, misinformation persisted, with some viral stories receiving tens or even hundreds of millions of views online.
Allen’s research distinguished between flagged misinformation, a relatively small share of posts on the platform, and unflagged "vaccine skeptical" content. The latter, which escaped Meta’s fact-checking program, typically described an otherwise healthy person suffering an adverse event such as death or injury and temporally linked it to a recent COVID-19 or other vaccination, without mentioning or exploring more likely explanations.
Surprisingly, this content, never marked as misinformation on Facebook, proved 50 times more effective at reducing vaccination intentions than demonstrably false claims. For instance, a Chicago Tribune story about the death of a healthy doctor after vaccination received five times more views than all flagged vaccine misinformation combined. Allen emphasized the need for targeted interventions to address this prevalent, yet unflagged, skeptical content.
Allen said that during the pandemic, many otherwise high-quality news outlets published stories that contributed to vaccine skepticism, largely because scientific understanding of the COVID-19 vaccines’ efficacy, durability, safety, and reach was still rapidly evolving.
Participants in the day’s discussion expressed concern about rising levels of incivility online, driven by the ease with which anonymous participants can lob insults at one another without social consequences.
Others voiced concern about efforts by governments and other entities to censor speech, citing the COVID-19 pandemic as an example. Theories about the origins of the novel coronavirus and the efficacy of vaccines in curbing transmission were censored at the time, although today they are considered valid or are gaining scientific credibility.
To cap off the day, representatives from Meta spoke about the challenges social media presents to both law and democratic practice, joined by election law expert and Distinguished Visiting Fellow Benjamin Ginsberg and free speech scholar and Senior Fellow Eugene Volokh. Nate Persily, a Stanford Law School professor and founding co-director of its Cyber Policy Center, moderated the discussion.