Hoover Institution (Stanford, CA) — The United States and its like-minded partners must begin building a global coalition with the capabilities to respond to the future weaponization of artificial intelligence (AI) by bad actors, argue the authors of Defense Against the AI Dark Arts: Threat Assessment and Coalition Defense, a new report from the Hoover Institution.
Philip Zelikow, Hoover’s Botha-Chan Senior Fellow and leader of the institution’s Workshops on Urgent Security Choices, cowrote the report with Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; Eric Schmidt, former chair and CEO of Google; and Jason Matheny, president and CEO of the RAND Corporation.
The report contains actionable steps US policymakers can take immediately to better prepare the nation to defend against AI weaponization and to ensure democracies maintain the edge in frontier AI capability.
The authors provide recommendations on managing the convergence of three vectors: private sector–led innovation, emerging threats, and international efforts. An essential starting point, the authors note, is to establish a national security agenda for AI.
“Many Americans assume the US is far ahead in AI development, but such complacency is dangerous,” said Schmidt. “The time to act is now, and it will require the involvement of policymakers, tech leaders, and international allies to tackle national security risks, drive global cooperation, build historic public-private partnerships, and ensure governments can independently assess the threats posed by powerful AI models.”
“The AI safety agenda is about far more than regulating private products,” said Zelikow. “We have to think about defense, with a roadmap to prepare for what the worst people in the world could do with frontier AI.”
The agenda also considers the special risks caused by the rise of powerful open-weights models—large language models whose parameters, or “weights,” are publicly available. The authors suggest that at a minimum, governments must have their own independent capacity to evaluate dangers before such models are deployed.
The agenda envisions a public-private partnership forged on a historic scale: because the government will rely on leading private industries for coalition defense, at least ten major issues will need to be addressed.
Finally, the agenda calls for three circles of international cooperation: among core participants in coalition defense; among AI producers; and among the wider community worried about the risks.
Read the report here.
For coverage opportunities, contact Jeffrey Marschner, 202-760-3187, jmarsch@stanford.edu.