The history of the rise and decline of horse populations provides a framework for understanding how humans could initially benefit from AI, only to become obsolete later on, challenging optimistic forecasts about AI's impact. The paper is divided into three main sections: 1) an introduction, including a brief summary of the premises of the horse analogy; 2) an account of human-horse interaction over approximately 6,000 years, highlighting how technological advancement led to a rise in horse populations, followed by collapse; and 3) a theoretical exploration of AI existential risk, using the eventual collapse in horse populations as a proof of concept.
By drawing parallels between the human domestication of horses and a potential future dominated by Artificial Superintelligence (ASI), the paper shows specifically why neither Ricardian trade nor competition among different ASIs is likely to protect humans from existential calamity. The paper encourages a critical approach to future AI-human dynamics, drawing upon lessons from past human-animal relations. Though the analogy has limitations, it provides insights into any scenario where a more intelligent agent significantly impacts a less intelligent one.
>> Manny Rincon-Cruz: Hello, my name is Manny Rincon-Cruz. I'm the executive director here for the Hoover History Working Group. Today, we had a delightful opportunity to host Matthew Lowenstein, who presented to us a wonderful working paper on the obsolescence of the horse, predicting the future of humanity in a world dominated by artificial intelligence.
It is a special joy for me to introduce Matthew because Matt and I have a long history. We began our PhD programs at the same time when I foolishly turned down the opportunity to go to UChicago and work with Ken Pomeranz and instead, came out here. But it's been quite fortunate that Matt eventually also came out here west and joined us at Hoover.
Matthew is a Hoover Fellow here at the Hoover Institution. He studies modern China from the late Qing to the early People's Republic. He currently has a book on the way, which focuses on China's indigenous financial system from about 1820 to about 1911. He has other interests, of course.
Those include Chinese accounting, the political economy of warlordism, and the history of central economic planning. He used to work as a securities analyst in Beijing and New York, uncovering corporate malfeasance, and he has written before for The Diplomat and Foreign Policy. Lastly, he's the winner of the 2022 Stellar Prize, which is given to the best dissertation in UChicago's social sciences division.
So without further delay, Matt, welcome again. I think this is your second appearance here at the Hoover History Working group. I wanted to ask you very quickly, what motivated you to work on artificial intelligence? This is quite far afield from your usual work. What was the motivation for that?
>> Matthew Lowenstein: Yeah, well, first, great to be here, Manny. We have known each other a long time and it's always a pleasure to make a public appearance together and to be back on the podcast. My motivations are just concern and indeed fear about what extremely powerful AI might hold in store for us all.
Like many others, I read Nick Bostrom's Superintelligence and then some other similar writings on it. And the scary thing is that people who say AI could pose an existential threat seem to have really, really powerful arguments. And then digging deeper, when I looked at what people really close to the technology are saying, looking for some reassurance, I found none.
And I found that everyone who works on this is actually very concerned. And I thought, this is really bad. This is really serious and I thought, maybe there is a way I can write something that will illustrate the nature of the threat and persuade people to take it seriously.
>> Manny Rincon-Cruz: Fantastic, I will say I've had the same experience of all the engineers that I know in the valley here. The ones that do work on this technology are all pessimistic and the people that are optimistic happen to just never work in AI. So walk us a little bit through the argument.
So you traced the relationship of humans and horses since its beginning, I think, with the domestication of the horse. What was that initial relationship like? What did it encompass and how long did it last?
>> Matthew Lowenstein: So that's a great question. There are a couple of different ways you could answer it.
In some ways, that relationship lasted nearly 6,000 years, possibly even longer, depending on when horses were in fact domesticated, which was at least around 3500 BC, but quite possibly significantly earlier. And from then until the mid-20th century, horses were an essential element in human economic life. So that's one way to periodize it.
Another way is to look at how horses were essential. So for the first three or four millennia, they were essential mostly for military purposes and to a lesser extent for transportation. And they had some other uses in agriculture too, but oxen were the main beast of burden for agriculture.
And then that relationship changed over the course of the Middle Ages. Between 1000 AD and the eve of the Industrial Revolution, horses became the essential draft animal for farmers. And this was driven largely by technological changes, also to some extent by social changes and by a growing economy, which made it more cost effective to use horses, which are more powerful and energy efficient for hauling.
And then this changed again over the course of the Industrial Revolution, when horses came to be used in industry. And what's really interesting about the Industrial Revolution is that a lot of the things that industrial technology did were substitutes for the horse. So horses were used in carrying freight and passengers across long distances.
And once you get a steamship or a railroad, you don't need horses to go where the rail line goes. And so you might think that, well, this is going to cost horses a lot of jobs, but the contrary is what happened: the overall volume of people and goods increased so much that the increase in demand for horses for short haul more than compensated for the decrease in demand for horses over long haul.
So the early Industrial Revolution was excellent for horses.
>> Manny Rincon-Cruz: I was going to ask very quickly, so just to characterize this development.
>> Matthew Lowenstein: Yes.
>> Manny Rincon-Cruz: And primarily as instruments of war and instruments of transportation. And I think we see this happen in the dynamic between Central Asia and the Chinese home provinces.
>> Matthew Lowenstein: That's right.
>> Manny Rincon-Cruz: But you're saying that this changes and they become primarily inputs into agricultural production and then short haul transportation.
>> Matthew Lowenstein: That's right, but you can characterize the human-horse relationship during this entire sweep of history as marked by gradually increasing demand for horses as a result of growth in human capabilities.
So human technology grows. Humans are more prosperous, we need more horses.
>> Manny Rincon-Cruz: So this is something you address in the paper which is that there were in fact, what people would say are horse doomsayers.
>> Matthew Lowenstein: Yeah.
>> Manny Rincon-Cruz: Or horse doomers, parallel AI doomers, who were warning about the coming obsolescence of the horse at the beginning of the Industrial Revolution and they worried that horses would be wiped out.
Why did that not happen?
>> Matthew Lowenstein: Yeah, that's right. So some worried that bicycles would replace the horse. It seems a bit funny now, but that was a fear, and others worried that canals and railroads would. And the doomers were right in the long run that technology could and would replace the horse.
They were wrong about how easy that would be. They didn't see that even when you replaced the horse for some uses, as long as you grew the economy enough, it would increase demand for horses for other uses. And the railroad is, as you say, the best example of this.
Railroad companies required horses just for moving railroad cars around anywhere that there wasn't a railway. And so horse demand continued to grow, but you did eventually hit that inflection point. And so the relationship between human and horse hits this inflection point somewhere in the 20th century, but certainly by 1920.
And at that point, what you get is that human technology can now do almost anything a horse can do, better than a horse can. And so no matter how much the economy grows, it does not redound toward greater demand for horses. Even this took a little time to build out.
You needed an infrastructure of roads going to farms so that you could get produce to market without a horse, and you don't really get that until after the war. You need mechanized agriculture, combine threshers, so that you don't need horses to help you plant and harvest food, and then you need trucks to replace horses in freight hauling.
>> Manny Rincon-Cruz: I was looking over the graph that you presented in the paper. You compiled some statistics. The horse, pony, and donkey population in the United States peaks above 25 million in 1920. But by 1970, it looks roughly stable at around 3 million.
So the collapse in population is quite dramatic.
>> Matthew Lowenstein: Yeah.
>> Manny Rincon-Cruz: So zooming back a bit from the history of the horse and humans, though I'm sure we could talk a lot more about industrialization and draft animals, what is the analogy that you're drawing here with the humanity-AI relationship?
>> Matthew Lowenstein: Yeah.
>> Manny Rincon-Cruz: Are horses a stand-in for humans? And what's the argument? And what kinds of arguments in the public sphere are you addressing with this example?
>> Matthew Lowenstein: Absolutely, yeah, great question. So yes, the population collapse is dramatic, and horses are a stand-in for humans in this model.
They're a kind of analogy for humans under AI. And what the analogy is meant to illustrate is that the opportunity cost of preserving something that is no longer economically useful to you, and that you do not value in and of itself, is extremely high. So if you have an AI that isn't specifically seeking to maximize human welfare but has some other set of goals, it will be pushed to consume resources that humans need and not leave enough for us to survive. And we see this in the case of the horse.
And what's interesting about the horse is people actually do value horses today. We still keep quite a few horses around because we like to have them as pets, but we don't value them enough. We didn't value them enough to devote in perpetuity huge amounts of land to growing horse feed.
And so there were these incredibly powerful economic incentives that meant even though people are quite fond of horses, horse populations had to shrink, and those incentives will be there for AIs as well. Even if they have no particular animus toward humans, as long as they have goals of their own, AIs will face intense economic incentives not to leave us with sufficient resources.
>> Manny Rincon-Cruz: And what I found most striking about your paper is that it addresses an argument, popular in Silicon Valley, that technological improvements are always beneficial to humanity because other technologies and technological breakthroughs have been beneficial. What you did was draw a different type of analogy, one of interspecies cooperation, or symbiosis, really.
>> Matthew Lowenstein: Yeah.
>> Manny Rincon-Cruz: And what you show is that a relationship can be symbiotic and mutually beneficial for a long time and then can abruptly turn to one of indifference. And one could argue that humans are mostly indifferent to most animals on the planet, and that indifference hasn't done very much for those animals.
So we'd like to ask one last question, which is what kind of recommendations or advice would you have for people thinking about AI policy or people that might be tuning in how to think about this?
>> Matthew Lowenstein: The big takeaway from this paper is just to take the threat of existential risk seriously and to think about dramatically slowing down or halting advances in sophisticated AI systems.
It may seem like a science fiction concern, but there are pretty good reasons to suspect that sufficiently powerful artificial intelligence would be extremely dangerous and very, very likely deadly.
>> Manny Rincon-Cruz: Well, thank you, Matt, again, for a fantastic paper and for this interview. This is a historian's warning, using the past to tell us about the dangers of the future.
Again, my name is Manny Rincon-Cruz, here at the Hoover History Working Group.
>> Matthew Lowenstein: Manny, a pleasure.
>> Manny Rincon-Cruz: Thank you, Matt.
ABOUT THE SPEAKER
Matthew Lowenstein is a Hoover Fellow at the Hoover Institution, Stanford University. He studies the economic history of modern China from the late imperial period to the early People’s Republic. His dissertation, which he is currently turning into a book, is a study of northern China’s indigenous financial system from the late Qing to the early Republican period (ca. 1820–1911). Other interests include the history of traditional Chinese accounting, the political economy of warlordism, and the history of central economic planning.
Lowenstein received his PhD in history from the University of Chicago and an MBA from Columbia Business School. Lowenstein previously worked as a securities analyst in Beijing and New York covering the Chinese financial sector. His nonacademic works have appeared in the Diplomat and Foreign Policy.