Jonathan Movroydis: Defense against the AI Dark Arts: Threat Assessment and Coalition Defense is a report that springs from a new initiative, the Hoover Workshops on Urgent Security Choices. The workshops engage veteran crisis managers and strategic thinkers to offer deep, practical analysis. Professor Zelikow, artificial intelligence is widely discussed these days, but people have been talking about this technology for years. In your view and that of your collaborators, why is this the time to start shaping the national security agenda for AI?

Philip Zelikow: The agenda for AI is broad and diverse. We’re focused on the national security agenda for AI, which we think will start taking shape this year. That agenda can be grouped in three different buckets.

The first of those buckets is the military uses of AI for things like target acquisition. That is not the concern of our paper. A lot of people are working on that—how to adapt AI for military use and what rules to set on autonomous systems. That’s an interesting agenda, but it’s not our principal concern.

The second bucket is people checking the safety of private AI products like the new large language models. That becomes a national security agenda when the safety risks are so large that they threaten catastrophic dangers—such as enabling something that might kill thousands of people. We are interested in that. We do think we are entering an age in which, over the next couple of years, models may emerge that clearly present such catastrophic risks, and it’s part of the national security agenda to evaluate those threats and mitigate them.

A third bucket is different still. The second was about evaluating private products; the third is about evaluating what the worst people in the world could do with frontier AI. What could our most dangerous adversaries, including China, do against us with frontier AI? Now, this overlaps a bit with private products. The private companies aren’t trying to do the worst things in the world; they aren’t trying to hurt the United States. But our adversaries might use AI to hurt the United States, and they might design models and use training data that have nothing to do with what the private companies are doing. They might use satellite telemetry or other things that are beyond the world of the private companies and the private models.

The US government and its most reliable friends have to evaluate those threats and develop countermeasures, and we think that those threats will come more clearly into focus in 2025. Also, we point out that the knowledge about frontier AI is being disseminated very fast. We shouldn’t count on having such an enormous lead that no one can catch up to us and we won’t ever feel threatened.

Movroydis: What are the threats posed by authoritarian actors? And how are the United States and our key allies equipped to deal with them?

Zelikow: As for the great dangers, we don’t know. Anyone who has studied the history of technology realizes that when game-changing technologies appear, all the predictions about what those technologies will do usually turn out to be wrong.

We don’t know what these technologies will be like even a year or two from now, nor do we know what our adversaries will be able to do with them. What we can sense is that the technology is potentially historic on the scale of the discovery of nuclear energy, or even more than that. So we approach this problem with a sense of awe, but also humility and uncertainty. We have to rely on extremely sensitive, secret, carefully guarded research that understands the worst things that could be done, and can react before those things are done to us.

Then we have to be able to evaluate that threat to develop countermeasures. Often, it turns out that scientists need to develop the worst dangers in order to counter those dangers. For example, in biology, scientists have to develop the virus in order to develop a vaccine for it. I think that analogy may work very well in frontier AI too. That means the government has to be able to push the outer margins of what our adversaries could do to us in order to be ready to counter that. That requires an agenda and level of work that right now simply does not exist in the government of the United States.

That’s why I wrote this paper with Eric Schmidt and Jason Matheny and Tino Cuéllar. We think this program of work needs to get on the agenda now. And we wrote it in a way that’s very practical-minded to offer a blueprint for action.

Movroydis: You led an influential report on the COVID-19 pandemic and its policy responses. In this AI report, you wrote that the United States remains poorly prepared against biological dangers and that these risks are likely to escalate as AI models become linked to biotechnology. Could you provide perspective on such a scenario?

Zelikow: Sure. People in the sciences will have noticed that AI scientists just won a Nobel Prize in Chemistry. Why did someone involved with artificial intelligence win a Nobel in a field like biochemistry? It’s because they showed that AI can be used to solve protein folding and design new kinds of molecules—work that can actually invent new biology. That’s remarkable because the genome is extraordinarily complicated and you need very high-level computation to solve some of these problems.

This can create absolutely unique, historic dangers in a situation where the world’s biodefenses are already obviously pretty tattered. There are also potentially game-changing dangers in cryptography and cybersecurity.

Several leading scientists wonder about the possible dangers having to do with misaligned frontier AI—artificial general intelligence that goes rogue, where you create autonomous agents that become too autonomous. There are people who think that’s a very serious danger. Others think it’s not. The answer from our group, which is pretty well-informed, is that we don’t know who’s right. So, the rational response of government is to be on top of this and evaluate the threat so that if it turns out to be very serious, we can anticipate that and get ready.

Movroydis: Let’s talk about coping with technological uncertainty. At a recent event for the Stanford Emerging Technology Review, I heard Marc Andreessen protest the idea of the precautionary principle. And Hoover fellow John H. Cochrane, writing in The Digitalist Papers, said we should avoid pre-emptive regulation of artificial intelligence. How do we stay on guard but avoid stifling innovation?

Zelikow: Those are exactly the dangers our group seeks to counter. We did not call for regulatory mandates to try to ban research. We didn’t go down that road. But if you’re not going to ban the research, which would probably be futile, well, then you’d better build up the ability to evaluate the risks and develop necessary countermeasures if the risks turn out to be extremely serious.

Compare this to the discovery of nuclear fission. The response was not to try to ban work on nuclear fission. But that meant the government had a very grave responsibility to be at the very frontier of the research in fission, including how it could be used in weapons, so that our adversaries could not obtain a world-historical advantage, and so that others—maybe even nonstate actors—could not use these innovations to kill millions before we even realized the danger was there.

Our approach therefore takes the concerns very seriously but tries to address them practically, without relying on preventive mandates.

Movroydis: You and your colleagues write that “serious national security risks may arise not that far in the future, so a massive public effort to build at least a base capability for high-level national security evaluation and countermeasures development must start immediately.” What would that look like?

Zelikow: The report gets into the nuclear energy analogy, and some parts of that analogy are suggestive and some of them are not. There are a lot of differences in this case. Let me call out one absolutely critical difference.

In the nuclear energy case, the big work was done by governments and the big capabilities were in governments. In this case the big work is being done by private companies and the big skills are in the private sector. So, this is a case where the private sector has built the capability that the government needs to tap and evaluate and manage to protect the country. In this case, you need to imagine a historic private-public partnership quite different from anything we did in the nuclear energy case.

But the government needs to be able to tap the computing power, the skills, and the capital to protect our national security without relying mostly on taxpayer appropriations and government hiring. In our paper, we try to envision how such a historic private-public partnership might work. We lay out the issues that would have to be resolved in fashioning it. That whole agenda is barely even coming into focus as we get into 2025. Yet to be on top of this, the government is going to have to make rapid progress with the leading private companies over the course of the next year, or we risk falling behind, perhaps fatally.

Movroydis: The scale of such an effort would be unprecedented.

Zelikow: There are, however, some instructive precedents. For example, we created a historic private-public partnership to find and bring to market oil and gas in America. We created a historic private-public partnership to build a telecommunications industry in America and string wires all over the country. We had a historic private-public partnership to build an aerospace industry that, for about the first generation, relied principally on government contracts and government skill sets not only to get off the ground but to get into space. Indeed, the private-public partnership in American aerospace is a lot of what built Silicon Valley in the first place.

We now need to envision another historic private-public partnership on that scale and even beyond it. And our report lays out what that vision might look like.

Movroydis: You write that an essential starting point for this AI national security agenda is an international coalition in which expertise, research, energy resources, and capital can be pooled. What is the current composition of this effort and where should it go?

Zelikow: I’m really glad you raised that, because I think most Americans, even in the Trump administration, may at first glance think this is an American story. The national security problem of frontier AI is born multinational, far more than most Americans realize, and it has to be addressed multinationally. The efforts are still embryonic. The British were pioneers in developing the first and best-funded AI Safety Institute, whose main purpose is evaluating the safety of private products. But as I pointed out earlier, that mission overlaps at the extreme end with the national security space. The British have understood that point from the start, and have created relationships with the right government agencies to build that up. Our version of the AI Safety Institute has been working with the British, pooling our efforts to learn how to evaluate AI models and to work with some of the frontier companies.

There’s a danger that in the anti-regulatory impulse of the Trump administration, government may want to trim back regulation of the private sector. I understand that. I understand that some of the Biden administration’s push addressed concerns that are not concerns in this administration. But it’s crucial that the new administration brush past some of those irritants to see that there’s an important national security dimension to these skills that we are building together with our partners. We need some of those skills. We shouldn’t throw the baby out with the bathwater.

Movroydis: On that note, moving back to the nuclear weapons example, I’d like to raise an analogy: President Eisenhower’s Atoms for Peace initiative in 1953. It involved the United States and the Soviet Union cooperating for peaceful purposes in nuclear energy. These exchanges were monitored by the International Atomic Energy Agency, which was created a few years later. And as you know, this effort met with some notable successes and failures. Can something analogous be applied for safe AI development with even competitors such as Russia and China? Could we even trust that process?

Zelikow: I think that’s an idea that’s not about the coalition for AI defense. It’s an idea about a different coalition, one that mobilizes suppliers of vital technology to be sure that the technology is at least used safely. I think we can talk to the Chinese about a producers or suppliers group. We can talk to the Chinese about general AI safety because they share a lot of those concerns. But I don’t think we can include the Chinese in an inner group that is concerned with the most dangerous capabilities and how to counter them.
