Hoover Institution (Stanford, CA) – A diverse group of Hoover and Stanford faculty, alongside leading industry voices, came together to produce The Digitalist Papers, an essay and public discussion series about how to ensure artificial intelligence best serves democratic societies.

Drawing on the example of Alexander Hamilton, James Madison, and John Jay's writing of the Federalist Papers in 1787–88, the discussions on Tuesday, September 24, 2024, asked how, if written today, the Federalist Papers would have to reckon with the promise and risks posed by artificial intelligence.

The contributors to the book say that just as in 1788, when American voters were considering whether to ratify the US Constitution, the world’s democratic societies are approaching a pivotal moment when they decide how to incorporate artificial intelligence into daily life.

Contributors to the book say they aren’t offering one unique path for adapting democracy to a world of ubiquitous AI but instead “an array of possible futures.”

“The (Digitalist) Papers do not advocate for a particular system of governance, as did Hamilton, Madison and Jay,” the book’s authors write. “Indeed, technological innovation in AI proceeds at such a rapid pace that no single prescription could capture the full set of alternative paths of this transformative technology.”

The project was led by Hoover Institution Director Condoleezza Rice; Nathaniel Persily of Stanford’s Cyber Policy Center; Erik Brynjolfsson, director of Stanford’s Digital Economy Lab; and Alex Pentland of Stanford’s Institute for Human-Centered Artificial Intelligence.

The contributors to the project, including Hoover senior fellows Eugene Volokh and John H. Cochrane, offer differing views about how democratic societies should incorporate AI.

Cochrane’s piece in the book, AI, Society and Democracy: Just Relax, asks when in history regulators or experts have ever correctly predicted the social impact of a new technology. Now that generative AI is in regulators’ crosshairs, he contends, the response should not be to treat this newest technology as exceptional. Regulation of AI, Cochrane argues, cannot proceed by prediction. “Most regulation takes place as we gain experience with a technology and its side effects,” he writes.

Volokh’s entry in the book, Generative AI and Political Power, argues that the public’s use of generative AI tools to answer political questions, compounded by the fact that most popular search engines today are integrated with some form of AI, will “subtly but substantially influence public attitudes, and therefore, elections.”

He writes that, in response, societies should persuade AI firms to return to a user sovereignty model, where generative AI applications write or do whatever a user tells them to do, instead of what Volokh calls today’s “Public Safety and Social Justice Model,” where AI applications are programmed with “guardrails” so that users aren’t able to gather information that the AI’s creator considers harmful.

These guardrails, whose scope AI firms have never disclosed to the public, could contain other, more ideologically shaped restrictions that don’t keep users safe but simply limit what they see.

Asked about her contribution to the book and her feelings about AI in general, Rice told talk attendees that she prefers rules where firms have somewhat wide latitude to develop and experiment with AI, free of government regulation, especially while the technology is in its infancy.

“I’m in the ‘run hard and fast’ category and be a little bit light on regulations while we understand what’s happening here,” she said.

Rice contrasted her view with other jurisdictions, such as the European Union, whose sweeping AI Act was adopted in 2024, and China, where the state rigidly steers how AI is developed.

Meanwhile, Rice reminded attendees that even under a more inclusive, deliberate pace of AI development, there is a great chance that large swaths of America will be left out of influencing, or even having access to, generative AI, and that is a problem.

“We need to make sure that these technologies are affecting parts of the US that might be left out of the conversation,” she said.

One way AI has major potential to improve ordinary people’s lives is by speeding up the pace of government tribunals, agencies, boards, and commissions.

From tax assessments to benefits means testing, AI could significantly improve how quickly state bodies make decisions or complete routine tasks.

“Social Security benefits appeals take a whole year to process,” Lawrence Lessig of Harvard Law School said during a session about leveraging AI technologies for democratic transformation. “That’s a ridiculously inefficient system.”

Moving the discussion to the immediate concerns of ordinary citizens, Saffron Huang, who works on AI firm Anthropic’s social impacts team, noted to attendees that generative AI models will need to be able to work and meet people’s needs in hundreds of languages.

Most large language models are being developed in the United States or Britain, and they will need to be fine-tuned to the unique linguistic and cultural nuances of the many non-English-speaking countries in which they will be used.

For Huang, that meant using a citizen’s assembly in Taiwan to “train a (large language model) about the dialect and unique cultural context of Taiwan.”

Beyond that, panelists asked whether generative AI firms have a responsibility to provide a public space online to foster civic discussion: should they build actual websites or applications to connect and amplify concerns expressed by members of the public?

“Should AI firms have to contribute to a ‘public’ digital space the way land developers are required to contribute to public spaces when they develop a piece of land?” Lily Tsai, founder of Massachusetts Institute of Technology’s Governance Lab, asked.

Speakers also drew an important distinction: when it comes to civic engagement and democracy, generative AI should remain a tool that assists human voters in decision making, not become an autonomous decision maker itself.

But how to properly integrate the use of generative AI into the practice of democracy is still an open question.

Rice said the Federalist Papers’ authors, with their well-reasoned arguments for why America needed to ratify the Constitution, “may have left us a number of institutional answers that will help us even through this set of radical technologies,” hundreds of years later.
