This excerpt from the new issue of the Stanford Emerging Technology Review (SETR) focuses on artificial intelligence, one of ten key technologies studied in this continuing educational initiative. SETR, a project of the Hoover Institution and the Stanford School of Engineering, harnesses the expertise of Stanford University’s leading science and engineering faculty to create an easy-to-use reference tool for policy makers.
Artificial intelligence (AI), a term coined by computer scientist and Stanford professor John McCarthy in 1955, was originally defined as “the science and engineering of making intelligent machines.” In turn, intelligence might be defined as the ability to learn and perform suitable techniques to solve problems and achieve goals, appropriate to the context in an uncertain, ever-varying world. AI could be said to refer to a computer’s ability to display this type of intelligence.
The emphasis in AI today is on machines that can learn as well as humans do, or at least comparably so. However, because machines are not limited by the constraints of human biology, AI systems may be able to run at much higher speeds and digest larger volumes and more varied types of information than humans can.
Today, AI promises to be a fundamental enabler of technological advancement in many fields, arguably of comparable importance to electricity in an earlier era or the Internet in more recent years. The science of computing, the worldwide availability of networks, and civilization-scale data, which collectively underlie the AI of today and tomorrow, are poised to have a similar impact on technological progress in the future. Moreover, the users of AI will not be limited to those with specialized training; instead, the average person on the street will increasingly interact directly with sophisticated AI applications in a multitude of everyday activities.
One estimate forecasts that generative AI—which can create novel text, images, and audio output—could raise global GDP by $7 trillion and raise productivity growth by 1.5 percent over a ten-year period if it is adopted widely. Private funding for generative AI start-ups surged to $25.2 billion in 2023, a nearly ninefold increase from 2022, and accounted for around a quarter of all AI-related private investment that year. Some of the core subfields of AI are the following:
- Computer vision, enabling machines to recognize and understand visual information from the world, convert it into digital data, and make decisions based on these data.
- Machine learning (ML), enabling computers to perform tasks without explicit instructions, often by generalizing from patterns in data. This includes deep learning that relies on multilayered artificial neural networks—which process information in a way inspired by the human brain—to model and understand complex relationships within data.
- Natural language processing, equipping machines with capabilities to understand, interpret, and produce spoken words and written texts.
Most of today’s AI is based on ML, though it draws on other subfields as well. ML requires data and computing power—often called compute—and much of today’s AI research requires access to these on an enormous scale.
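The idea that ML systems generalize from data rather than follow explicit instructions can be illustrated at toy scale. Below is a minimal sketch (in Python; the toy data, rule, and function names are illustrative assumptions, not drawn from this report) of a single artificial neuron, the basic unit of the neural networks described above, learning a hidden rule purely from labeled examples:

```python
# Minimal sketch: a single artificial neuron (perceptron) that learns a rule
# from labeled examples instead of being given explicit instructions.
# The data and the hidden rule below are illustrative assumptions.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # 0 when the guess was correct
            w1 += lr * err * x1         # nudge the weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Labeled examples of a hidden rule (here, label 1 when x1 + x2 > 1)
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
        ((1, 1), 1), ((2, 0), 1), ((0, 2), 1)]
model = train_perceptron(data)
print(predict(model, (2, 2)))  # a point the model never saw → 1
```

After training, the neuron classifies points it was never shown, having inferred the rule from examples alone. Deep learning stacks many such units into the multilayered networks mentioned above, which is what makes data and compute at scale so essential.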
Endless promise
AI can automate a wide range of tasks. But it also has particular promise in augmenting human capabilities and further enabling people to do what they are best at doing. AI systems can work alongside humans, complementing and assisting their work rather than replacing them. Some present-day examples:
- Medical diagnostics: An AI system that can predict and detect the onset of strokes qualified for Medicare reimbursement in 2020.
- Drug discovery: An AI-enabled search identified a compound that inhibits the growth of a bacterium responsible for many drug-resistant infections, such as pneumonia and meningitis, by sifting through a library of seven thousand potential drug compounds for an appropriate chemical structure.
- Patient safety: Smart AI sensors and cameras can improve patient safety in intensive-care units, operating rooms, and even at home by improving health care providers’ and caregivers’ ability to monitor and react to patient health developments, including falls and injuries.
- Robotic assistants: Mobile robots using AI can carry out health care-related tasks such as making specialized deliveries, disinfecting hospital wards, and assisting physical therapists, thus supporting nurses and enabling them to spend more time having face-to-face human interactions.
- Agriculture: AI-enabled computer vision helps some salmon farmers pick out fish that are the right size to keep, thus offloading the labor-intensive task of sorting them. Some farmers are using AI to detect and destroy weeds in a targeted manner, significantly decreasing environmental harm by using herbicides only on undesired vegetation rather than entire fields, in some cases reducing herbicide use by as much as 90 percent.
- Logistics and transportation: AI enables some commercial shipping companies to predict ship arrivals five days into the future with high accuracy, thus allowing real-time allocations of personnel and schedule adjustments.
- Law: AI enables the real-time transcription of legal proceedings and client meetings with reasonably high accuracy. AI-based systems also can reduce the time lawyers spend on contract review by as much as 60 percent.
Uncertainty
Dominating the AI conversation in 2024 were foundation models, which are large-scale systems trained on very large volumes of diverse data. Such training endows them with broad capabilities, and they can apply knowledge learned in one context to a different context, making them more flexible and efficient than traditional task-specific models.
Large language models (LLMs) are the most familiar type of foundation model and are trained on very large amounts of text. LLMs are an example of generative AI, which can produce new material based on its training and the inputs it is given using statistical prediction about what other words are likely to be found immediately after the occurrence of certain words. These models generate linguistic output surprisingly similar to that of humans across a wide range of subjects, including computer code, poetry, legal case summaries, and medical advice. Specialized foundation models have also been developed in other modalities such as audio, video, and images.
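The statistical prediction at the heart of an LLM can be illustrated at toy scale with a bigram model: count which word tends to follow which in a training corpus, then emit the most likely continuation. The tiny corpus and function names below are illustrative assumptions; real LLMs use neural networks conditioned on far longer contexts, but the predict-the-next-word objective is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: a bigram model that, like an LLM
# at vastly smaller scale, emits the word most likely to follow the current one.
# The corpus here is an illustrative assumption.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1   # count how often `nxt` follows `current`

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

def generate(word, n=5):
    """Produce new text by repeatedly predicting the next word."""
    out = [word]
    for _ in range(n):
        word = predict_next(word)
        out.append(word)
    return " ".join(out)

print(predict_next("the"))   # "cat" follows "the" more often than "mat" does
print(generate("sat"))       # new text assembled one predicted word at a time
```

An LLM replaces the simple frequency table with learned statistical weights over an enormous corpus, which is why its output can range across code, poetry, legal summaries, and medical advice.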
LLMs have generated considerable attention because of their apparent sophistication. Indeed, their capabilities have led some to suggest that they are the initial sparks of artificial general intelligence (AGI), meaning AI capable of performing any intellectual task that a human can perform, including learning. According to this argument, because an AGI would run on electronic circuits rather than biological ones, it would likely learn much faster than humans, rapidly outstripping their capabilities.
The belief in some quarters that AGI will soon be achieved has prompted substantial debate about its risks. Scholars have continued to argue over the past year about whether current models display the initial sparks of AGI, though no substantial evidence has been presented that they possess such capabilities.
Potential positive impacts of new AI technologies are most likely to be seen in the applications they enable for societal use. On the other hand, no technology is an unalloyed good. Potential negative impacts from AI will likely emerge from known problems with current state-of-the-art AI and from technical advances in the future.
Some of the known issues with today’s leading AI models include explainability (today’s AI is largely incapable of explaining the basis on which it arrives at any particular conclusion); bias and fairness; vulnerability to spoofing; data poisoning; deepfakes; privacy; overtrust and overreliance; and hallucinations.
The primary challenge in bringing AI innovation into operation is risk management. Because advances in AI have been so rapid, the people who decide whether to deploy AI-based systems may not understand well the risks that could accompany such deployment. Consider, for example, AI as an important approach for improving the effectiveness of military operations. Despite broad agreement among the military services and the US Department of Defense that AI would be of great benefit, the actual integration of AI-enabled capabilities into military forces has proceeded slowly. For new approaches like AI to take root, a greater degree of programmatic risk acceptance may be necessary, especially given the possibility that other nations could adopt the technology faster and achieve military advantages over US forces.