It is a generally accepted proposition that artificial intelligence will transform human existence. It will touch every facet of human life and will likely surprise us, too, given its capacity to learn and evolve. AI’s impact on national security and the waging of war is an obviously serious matter. Advanced technologies have already diminished the need for traditional human-operated planes, ships, and vehicles. Operators can fly, sail, and drive these systems remotely from thousands of miles away in a comfortable and secure environment. AI can, and likely will, eliminate the need for the remote operators too. One obvious question is: How will AI, robotics, and autonomous weapons systems change the nature of war?
It is important to remind ourselves that war and warfare have distinct meanings. War is fundamentally political, violent in nature, and takes place in and among societies. It has an enduring and unchanging “nature.” One need only watch the nightly news from Ukraine and Gaza to see war’s enduring nature up close. Warfare, however, is merely the way war is made, its “character.” Warfare is shaped by technology, law, ethics, culture, methods of social, political, and military organization, and other factors that change.
If we accept Clausewitz’s distinction between “nature” and “character,” is the world on the cusp of a generational turning point for warfare today? History is not a perfect indicator of the future, but its lessons should not be ignored. History should dampen our inclination to overestimate how technology will affect war’s “character” and, more importantly, its outcome, the only measure that matters.
Trevor Dupuy, the well-known military historian and army officer who saw combat service in Burma during World War II, studied war in all its dimensions. In his book, Understanding War, he documents that the rates of advance of an army in contact with the enemy have, at best, increased only slightly since the early nineteenth century. Napoleon’s advance rate to Moscow in 1812 was about 14 kilometers per day, while Hitler’s armored blitzkrieg to the same objective in 1941 managed 10 kilometers per day. This is in sharp contrast to what improved technology would seem to permit. While Dupuy is hesitant to draw firm conclusions, he does suggest that advanced technologies do not guarantee better outcomes: an enemy can counter them more quickly and cheaply than they can be developed and fielded. Dupuy’s insight should be instructive.
Stephen Biddle, a well-respected contemporary military analyst, argues in his book, Military Power: Explaining Victory and Defeat in Modern War, that military doctrine and tactics matter far more to battle outcomes in modern warfare than technological advantages. Technological and resource advantages count only if they can be harnessed to achieve political outcomes through battlefield success. Biddle reminds us of the critical nature of nonmaterial factors: smart force employment and purpose can negate the potential effects of advanced technology. This has been demonstrated in conflicts from the American Revolution through Ukraine’s defense against Russia. Biddle’s work reinforces Dupuy’s analysis. Dupuy cautions against expecting too much from advanced technologies, and Biddle reminds us that nonmaterial factors like doctrine, tactics, and purpose can compensate for substantial technical and material disadvantages.
The “nature” and “character” of future armed conflict will likely be predictable and recognizable. However, emerging AI and CYBER technologies will generate profound and elusive challenges for deterrence and wartime decision-making. To understand these new challenges, it is necessary to outline a rudimentary baseline. The two World Wars were contests between advanced economies that harnessed the industrial revolution to build highly organized modern militaries. Before each war, countries built military arsenals to counter the arsenals of opposing powers. Mobilization, training, and logistics structures were developed to optimize the employment of these capabilities. Alliances were formed to create an equilibrium against potential foes and deter aggression. Yet great wars were fought anyway, and the search for ever greater security produced technologies that were ever more destructive.
The quest for ultimate security led to the development of the atomic bomb along with long-range delivery systems. Hiroshima and Nagasaki showed the destructive capacity of nuclear weapons, creating a situation in which almost any future use would be out of proportion to any objective other than, perhaps, survival. As a result, during the Cold War the United States and the Soviet Union recognized the destructive capacity of nuclear weapons, worked to limit their numbers, created as much transparency as possible, and very deliberately avoided any strategy that would couple the achievement of a political or military goal to the use of a nuclear weapon. These understandings between the superpowers, official and unofficial, worked to avert nuclear war.
Towards the end of his long and productive life, Henry Kissinger applied his great intellect and experience to investigating the implications of AI and CYBER weapons for security and world order. His observations are predictably insightful, highlighting significant challenges for maintaining some semblance of global power equilibrium in the AI and CYBER era. Political power and influence have generally been associated with military power. States continuously assess their relative position within the international system, mostly to maintain order and deter aggression. The system is dynamic and normally responds when equilibrium is lost. But maintaining an equilibrium, and knowing when it is lost, requires a clear and accepted recognition of the elements that constitute military power. Miscalculation and the risk of conflict increase when assessments of relative power are faulty, resulting in a real or perceived change in the system’s equilibrium.
CYBER weapons confound the way power calculations have traditionally been made. CYBER capabilities, much more so than nuclear capabilities, have civilian applications that make their status as weapons ambiguous. More importantly, a CYBER weapon’s effectiveness lies in its user not disclosing its existence or, at a minimum, not disclosing its full capability. Furthermore, it may not be clear when CYBER warfare has begun or who the belligerent is. A CYBER Pearl Harbor would constitute an unambiguous attack, but what about the more likely cases: a distributed denial-of-service attack, CYBER-enabled industrial sabotage, or espionage that steals state and industrial secrets? While nuclear and conventional weapons exist in physical space, where their aggregate potential to do harm can be calculated, CYBER weapons defy any similar calculation and, perhaps, any appropriate response to their use.
The attributes of CYBER weapons also make controlling their proliferation difficult. Unlike a nuclear weapon, whose disclosure does not negate its capability, disclosing the nature of a CYBER weapon would almost certainly forfeit its utility. A state therefore has an incentive to hide its CYBER capabilities, which destabilizes the international security order. Complicating this problem are the relatively low cost of acquiring CYBER weapons and the deniability associated with their use. Instability, unpredictability, and the growing number of CYBER-capable actors may generate policies that value preemption over patience in order to avoid a CYBER knockout blow. Maintaining world order in the CYBER era will present a formidable challenge.
Artificial intelligence, joined to the destructiveness of nuclear weapons and the capabilities and destabilizing nature of CYBER weapons, will have a profound impact on security policy and military strategy. The current decision-making calculus for civilian and military leaders, friend and foe alike, is generally predictable and beneficial because it rests on a cognitive framework built over decades of experience, much of it gained during crises. Integrating AI into decision-making and weapons systems will introduce a separate logic that is opaque to humans and operates faster than human thought. AI can certainly be designed with human override, but doing so slows the response and potentially opens windows of vulnerability, especially if the enemy’s military systems are functioning at machine speed.
Imagine an adversary that has engineered its AI to make independent decisions about targeting and employing weapons systems, including nuclear weapons. Complicating this scenario is AI’s capacity to quickly generate vast amounts of false but plausible information. This AI variant of psychological warfare will create realistic pictures and videos of public figures making statements they never made and of events that never happened. What types of policy and strategy changes would this produce? Would patience prevail, or would the rapid logic and direction generated by AI be seen as essential grounds to act quickly against a perceived imminent enemy attack?
Reliance on AI will likely grow in proportion to adversaries’ perceived use of it. The concepts of deterrence and arms control necessary for equilibrium in the international system will prove elusive. Even more than CYBER, AI technology has dual civilian and military applications. Equally important, and unlike nuclear capabilities, the proliferation of AI technologies is rapid, hard to detect, and inexpensive, thereby increasing the number of actors who possess destabilizing capabilities.
The United States appears to want humans involved in controlling its AI systems’ actions: in other words, AI-assisted weapons and defense systems. While human involvement may reduce the risk of faulty AI logic, it introduces the risk of human error. It also slows response time, which can give diplomacy room to defuse a crisis but can also lead to disaster. The pressure to act quickly, and to avoid human error by delegating critical decisions to machines, will be great; yet the requirement for human involvement at least ensures moral agency and accountability. Will moral agency and accountability trump responding quickly to AI-deciphered threats?
All nations are obligated to act in ways that maximize their own security. The military value of AI-enhanced autonomous weapons systems is enormous. Equally important, AI-enhanced information processing will shorten decision-making cycles, making it easier to find and strike targets. Adversaries will respond in kind, reacting autonomously at machine speed and vastly increasing the tempo of operations. That tempo will increase the pressure to remove humans from decision cycles and turn control over to machines for both tactical and operational decision-making. The result will be greater automation and less human control, both leading up to and during conflict.
While nations are obligated to act to secure their own interests, they also have an obligation to maintain peace. Attempts to control the development and deployment of autonomous weapons will be challenging, but countries have a choice about how these weapons will be used and how decisions will be made. Without effective restrictions, international stability will weaken. Reduced human control over warfare poses increased danger to everyone, everywhere. Actions must be taken to address the worst dangers of machine-controlled decision-making and autonomous tools of war. International cooperation is needed to establish rules that limit how AI-enhanced military weapons and decision-making technologies are used. Frameworks providing some level of transparency for future, and perhaps more consequential, AI advances will also be required. In other words, establishing an international regime to ensure mutual restraint in employing AI-enabled defense systems will be key to maintaining global order and peace. Policymakers around the world have begun to understand the risks and challenges posed by AI. Let’s hope they follow through and erect safeguards, for everyone’s sake.