
Artificial Intelligence

In the twentieth edition of the Decision 2020 Report, Hoover fellows assess the economic, national security, and geopolitical implications of innovations in artificial intelligence (AI).

AI: A New Measure of Global Competitiveness

Senior Fellow John Villasenor argued that countries that maintain an overreliance on legacy infrastructure will face increasing challenges in sustaining global competitiveness.

“Why?” Villasenor asked readers. “Because geopolitics is determined in large part by many of the same domains that AI is poised to revolutionize.”

He explained that AI technology presents enormous economic growth opportunities because it can vastly improve efficiency in such sectors as agriculture, manufacturing, transportation, and trade. He added that AI will also force a change in thinking about how nations manage their security challenges and adapt their militaries.

Villasenor contended that while China has shrewdly bet on AI as a key component of its economic growth, the US still remains a global AI leader with an ecosystem of companies pioneering such technology, including Google, Facebook, Amazon, and Apple. The US government, he added, has also invested billions in developing AI infrastructure, and universities have bolstered the research and teaching on the subject in efforts to maintain and enhance America’s advantage in AI human capital.

Villasenor maintained that America’s biggest challenge as a competitor in the AI domain is the threat of burdensome government regulation that reduces individual incentive and slows the engine of innovation.

“Maintaining AI preeminence is a multi-decade endeavor—a far greater time scale than the term lengths of elected officials,” Villasenor explained. “This lowers the incentives to implement AI-focused policy strategies that might take several years or more to bear fruit.”

The Promise and Peril of AI Innovation

In an interview published by Stanford News, James Timbie, a Hoover distinguished visiting fellow and Stanford-trained physicist, explained that though society as a whole will benefit from increased productivity and lower costs as a result of artificial intelligence and advanced robotics, research indicates that half of all US workers will be adversely affected in the near term.

He believes that in some cases machines will replace the jobs of manufacturing workers, drivers of heavy transport, and even educated and well-paid workers including tax preparers, radiologists, software engineers, paralegals, and financial analysts.

In other fields, such as medicine and education, the labor force will be transformed, with skilled humans working in close coordination with machines.

“Machines would continue to do the computational work they do well, while leaving other tasks to humans who see the big picture and have interpersonal skills,” Timbie explained.

Timbie said that the current technological revolution is unfolding at an even faster rate than those of the last two centuries, a pace that will enable the creation of more national wealth but will also lead to greater income inequality.

Timbie said that the best way to adjust to these evolving economic realities is not to provide guaranteed basic income, as some policy makers and social scientists have advocated. Rather, his view is that society should help support people in developing skills for new types of jobs.

“There is no shortage of work that needs to be done. . . . There are potential opportunities for displaced workers with appropriate training,” Timbie said. “The rapid pace of change reinforces the benefit of a habit of life-long education.”

Rules of Engagement for AI

In an article for the Bulletin of the Atomic Scientists, Herbert S. Lin, the Hank J. Holland Fellow in Cyber Policy and Security, wrote that the emergence of AI-integrated weapons systems and their numerous capabilities may cause a dilemma for state actors in determining whether to comply with the laws of war.

Lin asked readers to assume that opposing militaries have the same technological capabilities and engage in combat with equal numbers of fielded systems. Under this assumption, Lin argued, the advantage goes to the combatant that exercises less restraint in the use of force and does not comply with its country's laws and international treaties.

As an example, Lin cited a lesson from World War II, when German U-boat commanders violated Article 22 of the 1930 London Naval Treaty, which forbade the sinking of neutral civilian merchant ships. Because many of these merchant ships carried war materials, German commanders attacked without warning; they believed that issuing warnings reduced their operational effectiveness, especially if the targeted ships were armed.

Furthermore, Lin explained, at the Nuremberg trials following the war, a tribunal ignored these violations because Great Britain and France had also been engaged in unrestricted submarine warfare.

“As the history of unrestricted submarine warfare demonstrates, humanitarian motivations were ignored when observing those restrictions compromised combat effectiveness,” Lin said. “It’s not unimaginable that a similar fate might await the laws of war when AI-enabled weapons become ubiquitous.”

The Emerging AI Challenge from China

The Hoover Institution’s new director, Condoleezza Rice, provided opening remarks on Tuesday, September 29, to a Hoover-partnered virtual conference about the Chinese government’s application of artificial intelligence and other cutting-edge technologies, arguing that China’s technological ambitions pose a unique and far greater challenge than Soviet military ambitions did during the Cold War.

“[China has] pushed its own technological frontiers by a very determined desire to be the leader,” Rice said.

She maintained that for authoritarians like the Chinese Communist Party, an unrestrained application of such technology allows them to “dream big.” Beijing has not only created an Orwellian state apparatus that oppresses its people and leaves them no place to hide but also aims to export this mode of governance and interfere in the political affairs of democracies across the world.

A lack of institutional constraints, Rice held, provides the CCP and other authoritarians strategic advantages over the free world. However, she cautioned that America should avoid imitating China and surrendering its democratic values to meet this challenge.

“We need to have a concerted effort on behalf of free peoples to make sure that the digital authoritarians don’t win,” Rice said. “They can’t win the race for this technology, because whoever wins this race is going to have a leg up on shaping the international system.”

More on the Policy Implications of Artificial Intelligence (AI) and Other Emerging Technologies

Watch the entire virtual conference “The Rise of Digital Authoritarianism: China, AI, and Human Rights,” co-presented by the Hoover Institution, Stanford’s Global Digital Policy Incubator, the Stanford Institute for Human-Centered Artificial Intelligence, and the Human Rights Foundation.

Read Beyond Disruption: Technology’s Challenge to Governance, by James Timbie, Jim Hoagland, and George P. Shultz.

Watch “Ensuring America’s Innovation in Artificial Intelligence,” a conversation between Condoleezza Rice and Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence.

Watch “Governance in an Emerging New World: Emerging Technology and America’s National Security,” a Hoover symposium chaired by Distinguished Fellow George P. Shultz, featuring national security experts on how new military technologies might change the strategic dynamic in Europe and the Pacific and what these weapons may mean for non-state actors.

Watch “Perspectives on Futures of Artificial Intelligence,” a conversation about the challenges and opportunities of AI, featuring LinkedIn co-founder Reid Hoffman, co-director of the Stanford Institute for Human-Centered Artificial Intelligence John Etchemendy, and Peter and Helen Bing Senior Fellow Michael McFaul. This program was part of the August 2019 Cyber and Artificial Intelligence Boot Camp for congressional staffers, co-presented by the Hoover Institution, the Freeman Spogli Institute, and the Stanford Institute for Human-Centered Artificial Intelligence.
