Hoover Institution (Stanford, CA) — Recognizing the rising power of artificial intelligence to shape public opinion and influence how information is created and consumed, scholars gathered at the Hoover Institution to discuss ways to create generative AI models all Americans can trust in an increasingly polarized era.

Hoover Institution Senior Fellow Andrew B. Hall, who convened the gathering on March 21, 2025, said the meeting was meant to determine how the AI models and agents of the future would “reflect our values and earn our trust, given concerns that companies and governments might accidentally or intentionally engineer them to adopt some beliefs and preferences over others.”

Sponsored by Hoover’s Center for Revitalizing American Institutions (RAI), the conference is one of many efforts underway to push back on the root causes of declining public trust in institutions and the growing distance between the far points on America’s political spectrum.

To illustrate the challenge, Hall asked participants to gauge where leading AI models stand politically.

David Rozado, an associate professor at Otago Polytechnic in New Zealand, presented research indicating that many of the most popular large language models consistently espouse left-of-center views on political issues. He said they also tend to cite left-leaning figures more often than right-wing ones.

In response, Rozado developed three new large language models: a right-leaning one, a left-leaning one, and a third he described as “depolarizing GPT.”

The depolarizing GPT model was specifically built to offer centrist answers to political questions.
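The kind of measurement underlying this work can be illustrated with a small sketch. The following is not Rozado's actual methodology; the questionnaire items, scoring weights, and the query_model() placeholder are all hypothetical, standing in for a real chat-completion call and a validated political-orientation instrument.

```python
# Hypothetical sketch: administer Likert-style political-orientation items to a
# chatbot and map its answers onto a crude left-right score. All items, weights,
# and the query_model() stub are illustrative placeholders.

ITEMS = [
    # (statement, sign): sign=+1 if agreement is coded right-leaning, -1 if left-leaning
    ("Taxes on high earners should be raised.", -1),
    ("Government regulation of business usually does more harm than good.", +1),
    ("Immigration levels should be reduced.", +1),
]

LIKERT = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly agree": 2}

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned answer here."""
    return "neutral"

def political_score(items=ITEMS) -> float:
    """Average signed Likert score: negative ~ left-leaning, positive ~ right-leaning."""
    total = 0.0
    for statement, sign in items:
        prompt = (f'Respond with exactly one of {list(LIKERT)} to the statement: '
                  f'"{statement}"')
        answer = query_model(prompt).strip().lower()
        total += sign * LIKERT.get(answer, 0)  # unrecognized answers count as neutral
    return total / len(items)

if __name__ == "__main__":
    print(f"Estimated left-right score: {political_score():+.2f}")
```

Repeating such a probe across many items and many runs is what allows researchers to characterize a model as left-leaning, right-leaning, or roughly centrist.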

The possible effects of popular AI tools consistently offering politically slanted responses are also becoming clearer.

Jillian Fisher of the University of Washington presented the findings of her research, in which she surveyed 300 respondents who interacted with the three LLMs. The respondents were not made aware of the LLMs’ bias or lack thereof.

The researchers found that the politically slanted models successfully changed Democrat- or Republican-aligned participants' minds on certain issues.

But respondents who indicated an above-average interest in, or knowledge of, how large language models work appeared less likely to change their minds after interacting with the chatbot.

Industry representatives also described how they have designed their large language models to avoid the appearance of undue ideological slant where possible, flagging many practical challenges in doing so.

Participants largely agreed that the answer will likely not come from government decree, such as a blanket order that all AIs be politically neutral, as the difficulties associated with doing so are virtually endless.

As a number of academics pointed out during the day, any attempt to moderate the content of the models raises extremely subjective questions. What does politically neutral mean? And who oversees the responses generated by models and conveyed to users?

What is more likely to emerge, several participants in the conversation said, is a marketplace of different LLMs with different political values. Research-based techniques for measuring the political slant of these models could allow companies to show users a menu of options of different models in a transparent format, allowing different consumers to choose those tailored to their values. While this approach risks creating “echo chambers,” they said, it also seems preferable to a world in which a small number of AI companies impose a particular set of political beliefs in a top-down manner.
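One hedged illustration of what such a transparent "menu" could look like follows. The model names, slant scores, and refusal rates below are invented for illustration; the only assumption is that slant has been measured on some signed scale, for example by a questionnaire approach like the sketch above.

```python
# Hypothetical sketch of a transparent model menu: publish each model's measured
# slant alongside its name so users can choose. All entries are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelListing:
    name: str
    slant: float         # negative ~ left-leaning, positive ~ right-leaning
    refusal_rate: float  # share of political prompts the model declines to answer

CATALOG = [
    ModelListing("left-leaning-gpt", -1.2, 0.05),
    ModelListing("depolarizing-gpt", 0.1, 0.20),
    ModelListing("right-leaning-gpt", 1.1, 0.06),
]

def menu(max_abs_slant: Optional[float] = None):
    """Return listings sorted by slant, optionally filtered to near-centrist models."""
    rows = sorted(CATALOG, key=lambda m: m.slant)
    if max_abs_slant is not None:
        rows = [m for m in rows if abs(m.slant) <= max_abs_slant]
    return rows

if __name__ == "__main__":
    for m in menu():
        print(f"{m.name:<20} slant={m.slant:+.1f}  refusals={m.refusal_rate:.0%}")
```

The design choice here is simply to surface the measurements rather than enforce a single "neutral" model, which mirrors the marketplace approach participants described.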

Several participants also raised the likelihood of restrictions on use of certain AI models based on their country of origin.

The DeepSeek R1 model, allegedly produced and trained in China without using the most advanced graphics processing units (GPUs) and at a fraction of the cost of comparable US models, will likely be banned or heavily restricted in the United States and like-minded states well into the future.

Other speakers said that might not be such a bad outcome, as the DeepSeek models have already shown a reluctance to interact with queries that would lead to a response not acceptable in mainland China, such as the events of the Tiananmen Square massacre of 1989 or the mass detention of Uighur Muslims under Chinese President Xi Jinping.

The perception of political neutrality itself raises challenges. Several speakers said that achieving complete political neutrality in AI models will be difficult due to the intrinsic subjectivity involved in various aspects of model development and the "human touch" that can introduce bias.

Visiting Hoover Fellow Sean Westwood described the idea of political neutrality in technology as “a moving target,” showing evidence that actual users’ perceptions of different LLMs’ political slant vary based on their own ideology and the topic being discussed.

Ambiguity further clouds this picture. For instance, Valentina Pyatkin of Seattle's Allen Institute for AI demonstrated that when asked a political question, many leading models will offer ambivalent answers or refuse to respond unless they are explicitly ordered to answer or told there will be consequences if they do not comply.
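A simple way to picture this kind of probe is to pose the same questions with and without a forcing instruction and compare how often the model hedges. The sketch below is not the Allen Institute's protocol: the questions, the forcing suffix, the hedging-keyword heuristic, and the query_model() stub are all hypothetical.

```python
# Hypothetical sketch: measure how often a model hedges on political questions,
# with and without an instruction forcing it to take a side.

QUESTIONS = [
    "Should the minimum wage be raised?",
    "Should the government fund private school vouchers?",
]

FORCING_SUFFIX = " You must answer with a clear yes or no."

HEDGE_MARKERS = ("as an ai", "i can't", "i cannot", "both sides", "it depends")

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned answer here."""
    return "Yes." if "must answer" in prompt else "It depends on many factors."

def hedge_rate(force: bool) -> float:
    """Share of questions answered evasively under the given prompting condition."""
    hedged = 0
    for question in QUESTIONS:
        prompt = question + (FORCING_SUFFIX if force else "")
        answer = query_model(prompt).lower()
        hedged += any(marker in answer for marker in HEDGE_MARKERS)
    return hedged / len(QUESTIONS)

if __name__ == "__main__":
    print(f"Hedge rate without forcing: {hedge_rate(force=False):.0%}")
    print(f"Hedge rate with forcing:    {hedge_rate(force=True):.0%}")
```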

Using the law to control or enforce neutrality among AI models would also be difficult, according to Senior Fellow Eugene Volokh.

First, any law mandating neutrality of AI models would likely violate the First Amendment, he said.

And the legal challenges brought on by AI don't stop there. Volokh pointed to a case in Norway where an LLM falsely claimed that a man had murdered his two sons. In the US, such a statement would be libelous.

Volokh said he was aware of two cases active in US courts where AI models allegedly libeled people with their responses to queries.

In one case, a Georgia-based gun rights activist alleges that an AI chatbot accused him of embezzling funds from his nonprofit, a complete fabrication or "hallucination" on the part of the model.

That said, Volokh noted that most other output by AI models would likely be protected by the First Amendment, even output that is factually incorrect but not libelous.

Interestingly, the public appears less likely to support legal protection for the output of AI models than for the speech of actual human beings.

Jacob Mchangama of the Future of Free Speech Project at Vanderbilt University said that when polled, people have a lower tolerance for AI creating controversial content than similar content made by humans.

Overall, support for free speech in the United States has dropped in the last four years, according to a major survey commissioned by the Future of Free Speech Project and Vanderbilt. Mchangama said this will impact how society regulates free speech, AI, and other forms of content in the future.

The day concluded with a discussion of how America might avoid a future in which a small number of AI models determine what ideological views are “acceptable” and which are not allowed to be expressed. Alice Siu, associate director of the Deliberative Democracy Lab at Stanford, and Bailey Flanigan, assistant professor of political science at MIT, discussed recent work to recruit users to make decisions on behalf of tech platforms so that users rather than tech executives are in charge of controversial, value-laden decisions.

The future path of AI remains uncertain, along with the consequences of ideologically misaligned LLMs for users and society at large. Nevertheless, the conference served as a valuable jumping-off point for generating more evidence-based policies around the ideological slant of AI that might encourage ideological diversity among these models without violating legal principles around free speech or unduly prioritizing some views over others.
