This essay is drawn from visiting fellow Matt Turpin’s Substack, China Articles. Subscribe for free at https://chinaarticles.substack.com.

Having just re-read Nelly Lahoud’s August 2021 Foreign Affairs article, “Bin Laden’s Catastrophic Success,” I was reminded of two truisms:

One, misperception is endemic in humans.

Two, despite all our progress, uncertainty reigns supreme.

Everyone makes assumptions about the future based on their own perceptions of how the world works. Sometimes those assumptions and perceptions are correct, but more often they are wrong. The world doesn’t work the way we construct it in our minds or in the models and algorithms we design. Neither more perfect theories nor additional computing power will change that.

During the raid in May 2011 that killed Osama bin Laden, Navy SEALs collected a trove of letters, recordings, and documents from his compound in Pakistan (thank you, Pakistan, for letting bin Laden hide in the retirement community of your senior military and intelligence leaders; we won’t forget that). Years later, the US government began declassifying those documents, which allowed researchers like Lahoud a glimpse into bin Laden’s thinking before the 9/11 attacks and all the way up to his death.

What jumped out at me in her analysis is that bin Laden believed the 9/11 attacks would inspire the American people to rise up against their government and demand the withdrawal of US military forces from the Middle East, and that this would leave the corrupt dictatorships of that region defenseless and enable bin Laden and his followers to mount a political takeover.

We all know what happened next. Rather than precipitate a withdrawal, the attacks activated what one senior US military leader described to me as the “American OODA loop” (observe, overreact, destroy, apologize). (The original OODA loop, developed by US Air Force Colonel John Boyd, is a decision-making model whose steps are “observe, orient, decide, act.”)

So why was bin Laden mistaken in his assumptions and perceptions? As with most questions about cause and effect, the answer is almost certainly multicausal. He was seized by an ideology that blinded him to alternative interpretations. He wasn’t exposed to a wide variety of opinions and perspectives. He cherry-picked the data that confirmed the course of action he wanted to take, falling victim to confirmation bias. He believed he was a man of destiny, endowed by a higher power with the mission of changing the world.    

We often think of folks who make decisions like the one bin Laden made as irrational. But I think that is wrong. Bin Laden’s decision to conduct what would become the September 11 attacks was rational, based on his perception of how the world works and his assumptions about what would happen. Those assumptions and perceptions were mistaken. 

Bin Laden’s other failure was in planning only for success. Lahoud’s research demonstrates that Al-Qaeda was completely unprepared for the American invasion of Afghanistan and the global effort to hunt down the 9/11 plotters, even though one might reasonably assume that an attack significant enough to compel an American withdrawal from the Middle East might also be significant enough to compel an American attack against the organizers.

Being wrong, getting better

So, what should we conclude from this analysis? Well, I hope it helps us consider that the assumptions and perceptions of our rivals might not be the same ones we have—and that our own assumptions and perceptions might well be wrong (mine included). Our rivals likely possess a different mental model for how the world works and make rational decisions based on that model, just as we do. They construct formulas, algorithms, and strategies based on these mental models and interpret data and signaling based upon preconceived notions—again, just as we do. They too are prone to making mistakes and falling victim to psychological biases. These dynamics, along with the countless decisions individuals make, create what Clausewitz described as the “fog of war”: pervasive uncertainty.

Because these activities involve humans and are a contest of wills among humans without a set of hard rules accepted by all the players, uncertainty reigns supreme. This means that to be effective under these conditions, one must internalize the “fog of war” and account for it. Things might turn out as one expects, and one should be ready and able to seize the opportunity to achieve one’s objectives—while simultaneously being prepared for one’s assumptions and perceptions to be wrong. This outlook forces one to conduct sufficient contingency planning and to keep resources on hand to execute against undesirable, but feasible, scenarios.

This kind of thinking flies in the face of management practices focused on achieving efficiency (concepts like “just in time” logistics), in which current conditions are assumed to be permanent, with known and predictable rules. (These management practices induce leaders to design their “business models” and organizations to optimize for those “rules.”) But it also flies in the face of thinking rooted in mathematics and engineering, which assumes that uncertainty can be erased through the accumulation of more data, as well as by possessing ever greater computational power with ever more accurate formulas and algorithms.

This doesn’t mean we should go full Luddite and smash the machinery, throw away our smartphones, and abandon spreadsheets. But we should be circumspect about what these concepts and technologies can achieve and realistic about what they can’t. 

Side-eye here at Leopold Aschenbrenner and his manifesto about artificial general intelligence (AGI) called “Situational Awareness: The Decade Ahead.” A piece of advice: history is rarely kind to folks who “trust the trendlines.”

Overconfidence

Will anyone lift the “fog of war”? Probably not.

William A. Owens’s book Lifting the Fog of War hit bookshelves in December 2001, just after the 9/11 attacks, and offered an enticing vision: rather than depending on costly “obsolete” weapon systems, the United States could embrace the new information age and achieve a long-held dream of erasing uncertainty with technology.

The vision of knowing all things, everywhere and immediately, seemed within grasp, a premise that opened the door to sweeping, confident predictions. (To be fair, proponents of this technological prowess rarely claimed such an extreme position outright, but the argument certainly encouraged folks to imagine an unrealistic future.)

The book fit within a wider debate in defense circles over the “revolution in military affairs” (RMA) idea and a warfighting concept called “effects-based operations” (EBO). It also helped advance arguments about achieving greater efficiencies in the Department of Defense and the military services through better top-down planning. Technology, namely information technology, could reduce or eliminate uncertainty, and therefore defense leaders could “trim the fat” from military planning and programming. Wartime contingencies wouldn’t require those extra resources, which were needed only “just in case.” By erasing uncertainty and lifting the fog of war, the country could do more with less while still shaping the future in a direction favorable to the United States.

This kind of thinking was on display in 2002 and 2003 in the run-up to the Iraq War: if it took 700,000 troops to expel Saddam Hussein from Kuwait in 1991, then a little more than a decade later, with all the new information technologies that had been introduced, it should require only a quarter of that number to topple Saddam and bring about a new democratic regime in Iraq.

Erasing uncertainty is a perennial dream of social engineers everywhere seeking to perfect the human condition. Progress, whether through perfecting social institutions or through leveraging technology, promises to deliver what folks desire. Never mind that folks don’t agree on what they desire. The dream allows people to disregard messy facts and conflicting signals while exuding a confidence that the historical record does not warrant.

The danger we face today is that our rivals in Beijing, Moscow, Tehran, and Pyongyang each have their own models for how the world works and what they want to achieve, their own perspectives on how we might respond to their efforts to achieve what they desire, and their own assumptions about possible scenarios. No matter what we do or how good we are at collecting and analyzing intelligence, there will be significant uncertainty about those factors and the dynamics of how they might play out.

At the same time, we have very little slack in our own systems (whether military, commercial, or financial), should that uncertainty lead to scenarios we hadn’t anticipated or that we chose to ignore because they were undesirable.

We should adopt tools and methods to better understand these dynamics, but at the same time, we should be humble about what we can know. And we need a lot more resources dedicated to hedging against those risks. 

Downplaying the risk

The author of Lifting the Fog of War gives us a postscript. Admiral William Owens, a Rhodes Scholar and nuclear submarine officer, was vice chairman of the Joint Chiefs of Staff during President Clinton’s first term. He left the military to run a series of information-technology companies and to serve as CEO of AEA Holdings Asia, overseeing that firm’s private equity and real estate investments in Asia, mostly in China. While advocating strongly for the adoption of the technology his companies were selling, Owens was also an outspoken promoter of closer US-China ties and sought to downplay the security risks posed by Beijing (by 2009, most of his technology business models depended upon selling Chinese information-technology equipment to the United States). He also served for nearly two years (2004–05) as CEO of the Canadian telecom equipment manufacturer Nortel Networks, just as Huawei was stealing its technology.

Four years after Owens left, Nortel collapsed in Canada’s largest bankruptcy (some called it Canada’s Enron). That same year, Owens began lobbying heavily on behalf of Huawei Technologies through Amerilink Telecom Corp., a company he founded, lending his bona fides as a national security professional and retired US Navy admiral to assuage security concerns over the use of Huawei equipment. Luckily, he failed to convince his former colleagues.

While pursuing these business ties with Beijing, Owens also started and led the American side of the Sanya Initiative at the EastWest Institute; this project collaborated with the People’s Liberation Army and United Front organizations like the China Association for International Friendly Contact (CAIFC) to bring together retired US four-star generals and admirals with their Chinese counterparts to promote greater understanding. His work there culminated in his 2020 book, China-US 2039: The Endgame? Building Trust Over Future Decades, which advocated policies his Chinese counterparts had pushed for years.

Owens spent much of the past twenty-five years going out of his way to condition senior US military leaders (some of whom he had mentored) to view China as a partner, not a rival and potential adversary. I suspect he thought he was doing something noble, but his efforts, which clearly overlapped with his financial interests, complicated US policy making during a critical period. 
