The Hoover Prosperity Program held its conference, "Challenges Facing the US Economy," on January 21, 2025.

The economy was the top issue for Americans voting in the 2024 presidential election. While there is a widespread desire to secure a prosperous future for America, there is broad debate about how to get there. At this conference, world-class scholars and policymakers discussed insights from evidence-based research that can inform public understanding of a set of pressing issues, including the federal deficit, the impact of AI on jobs, the state of inequality in America, and more.

>> Stephen Haber: It's my pleasure to introduce the third panel for the day, "How Should the US Economy Adapt to the AI Boom?" It's going to be chaired by my friend and colleague Amit Seru, who will introduce the panelists as well as moderate the panel. Amit is a Senior Fellow at the Hoover Institution and the Steven and Roberta Denning Professor of Finance at the Stanford Graduate School of Business.

He co-directs Hoover initiatives on corporate governance and long-run prosperity. Amit's research focuses on corporate finance with an emphasis on financial regulation, technological innovation, and the financing of firms. He has presented his research to US and international regulatory agencies, including the European Central Bank, the Federal Reserve, the FDIC, the International Monetary Fund, and the Securities and Exchange Commission.

Amit, the floor is yours. Thank you.
>> Amit Seru: Thanks, Steve, and welcome everyone back from lunch. So, it's a privilege to moderate this session. I don't need to motivate this session too much. It's about one of the most transformative technologies that we can remember: AI, artificial intelligence. It's something that we all see and breathe on a daily basis.

If you open your phone, use the navigation in your vehicle, or get recommendations for products, you're already interacting with AI. There's a reason why it's being called the electricity of our generation. Why is that the case? Because the effects are supposed to be very transformative.

Just to give you some numbers which should get us all excited, it already contributed about $500 billion to GDP last year. Over the next five years, estimates suggest it will contribute about $4.5 trillion. That's the GDP of Japan. Another way of thinking about it is that some economists project that the productivity gains from AI are going to be 1 to 1.5% every year for the next 10 years.
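As a rough back-of-the-envelope check on those projections, compounding a 1 to 1.5 percent annual productivity gain over a decade works out to roughly a 10 to 16 percent cumulative boost; the short sketch below simply illustrates the arithmetic.

```python
# Back-of-the-envelope: cumulative effect of 1% vs. 1.5% annual
# productivity gains compounded over 10 years.
for annual_gain in (0.01, 0.015):
    cumulative = (1 + annual_gain) ** 10 - 1
    print(f"{annual_gain:.1%}/yr for 10 years -> ~{cumulative:.1%} cumulative")
# Prints roughly 10.5% and 16.1% higher productivity after a decade.
```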

If we get that, the theme of the conference, prosperity, will take care of itself. It will also take care of the previous panel, which was about the fiscal situation. No one mentioned AI much in that panel, but I think it would take care of everything if we were to believe these numbers.

But there is also another issue which relates to the potential risks this brings in. And there's a lot of discussion about, hey, this is going to affect 50% of jobs in certain ways in the next five years. And people worry about job displacement, potentially reskilling, and also the potential for increasing income inequality, which relates to the first panel that we have talked about.

And the debate really here is that, on the one hand, it's going to be transformative for productivity. You're talking about everything from getting a loan application approved or landing a job interview to more serious things like accelerating medical breakthroughs in diagnosis and other challenges that we have faced, because we can now get to these things.

So that's one side of the story. The other side of the story, of course, is the risks it brings in, the ethical dilemmas it brings in. There are biases in algorithms that are trained on existing data. There are things that we worry about related to the privacy of individuals.

So where do we draw the line? And like I mentioned, inequality. The worrying thing in all of this is that AI is not something that's going to happen in the future. It's already here. So the question is not if the US Economy is going to be affected by AI.

The question is how it is going to be affected and what the direction is going to be, right? So that's the debate that we are hoping to have with the panelists here. Now, one could argue that when one thinks about a technological breakthrough like AI, there are always winners and losers.

So what's new? So we are going to be discussing some of that. And we are also going to discuss whether the US economy is really ready to address these opportunities and challenges in an effective manner. And I could not be happier to introduce the panelists who are going to be talking about this, because they are well situated to do so.

So let me just introduce the panelists and you'll see what I mean. So first, here is Jonathan Levin. John became Stanford's 13th president in August of 2024. I know John from the Graduate School of Business, where he previously served as the Philip H. Knight Professor and Dean for eight years.

John also has another job, which is being an IO economist who has thought about market design. He has won many prominent awards and most recently served on a White House advisory panel on science and technology. Next to John is Steve Davis. Steve is the Thomas W. and Susan B. Ford Senior Fellow and Director of Research at the Hoover Institution.

Before the Hoover Institution, he was on the faculty at the University of Chicago for almost three decades. He's an expert on matters related to labor, the future of working arrangements, business dynamics, and policy uncertainty, and when he has nothing else to do, he also now hosts a podcast, Economics, Applied, which hopefully many of you listen to.

And on the far left there is my friend Justin Grimmer. He is a Senior Fellow at the Hoover Institution and the Morris M. Doyle Centennial Professor in Public Policy in the Department of Political Science. His research develops new machine learning and natural language processing algorithms for the study of politics, which he then applies to study elections and US political institutions.

Now, why are they here? Because AI really affects all of these themes that you've heard: markets, market design, labor, ethical issues, institutions, and politics. So what I want to do now is structure the panel exactly like we did earlier. Each of the panelists will have about five to seven minutes to present their view on the opportunities and challenges that the US economy faces on these margins.

And then once they have all put their views on the table, I'll moderate a discussion and then we'll open it up for questions from all of you, which I'm sure you will have many. So with that, let's welcome John.
>> Jonathan Levin: Great. So thank you, Amit, and great to see everyone here.

It's always hard to be on a panel about the future because I'm acutely conscious of how hard it is to predict what's coming. And certainly, if one goes back to the beginning of any transformative technology, electricity or modern manufacturing or computers, there's a huge host of terrible predictions that look really bad in retrospect, some utopian and some catastrophic, just as you hear today about AI.

And so we're going to get to engage in all of that today, but hopefully we won't totally regret the predictions that we make. So let me say a few things. I'm going to just start on an optimistic note and say I think the opportunities from AI, from computation, from machine learning are extraordinary, including here at Stanford, when you think about the potential for advances in scientific discovery.

And more broadly in the economy, when you think about the potential for productivity advances, I mean, we have been lamenting for quite a long time in this country the slowdown in productivity growth. And Steve can talk about that because that's something he's expert in. And if you were to point to one thing that might set off a resurgence of productivity growth that would elevate the whole economy and improve people's lives, this is it.

And similarly, when it comes to scientific discovery, we've been worried for some time that ideas are getting harder to find, that it's harder to have scientific breakthroughs. That's been a theme of research here at Stanford, actually. And if you were to point to one thing that would unleash a new revolution in scientific discovery, this is also it.

And so those opportunities are extraordinary. We shouldn't lose sight of them amid all of the concerns that people have about the effects of the technology and how rapidly it might affect the economy. Let me say a few things about the topic; we're trying to talk about challenges facing the US economy.

So let me say a few things about the economic impacts. Steve is going to talk in a minute about the labor market effects, which are important and I'm sure we'll get into that discussion. I want to observe also that as Amit already mentioned, AI is triggering an enormous investment boom.

I mean, just today you saw OpenAI announce they would invest $100 billion in the next year or so in data centers and maybe $500 billion in the coming years. Goldman Sachs estimates there will be about a trillion dollars in investment. That's a big number, actually. I mean, US private investment on an annual basis is only about 4 or 5 trillion dollars.

So this is a huge amount of money going into building infrastructure for this technology. And why are they doing it? Because it's a race to basically get ahead. And some of that comes down to the fact that the market structure is uncertain. So let me just say a few things about that, because there's a very interesting set of economic questions about who will control the technology and who will emerge as winners from it.

We don't have the answers, but this is a very interesting set of questions. So here's just an observation to start with. If you look at the largest 10 companies in the world today, five of those companies are the tech giants: Apple, Microsoft, Amazon, Meta, Google. Three of them are chip companies:

Broadcom; TSMC, which produces the chips; and Nvidia, which makes the AI chips. And then the other two are Tesla, which many people would think of as a big AI company, and Saudi Aramco, which, by the way, stands to benefit from the increase in energy consumption that is going to come with all this data center investment.

These companies are enormous. Nvidia's market cap is about 12% of US GDP as of this morning. That is much bigger than the largest companies have been at any point in US history. Even in the 1920s, after the rise of the giant Standard Oils and the big manufacturers, those companies' market caps were probably less than 1% of US GDP.

In the 1950s, General Motors, the biggest company, was around 1% of US GDP. Even in the 90s, when you had companies like Microsoft that were global, it was about 5% of US GDP. So these are huge companies. So the market does think there are going to be huge, huge winners from this technology.
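As a quick sanity check on the ratio being quoted, the figures below are approximate assumptions for early 2025 rather than numbers from the panel, but they show how the comparison is computed.

```python
# Rough check on the market-cap-to-GDP comparison.
# Figures are approximate assumptions for early 2025, in trillions of dollars.
nvidia_market_cap = 3.5   # assumed, roughly $3.5T
us_annual_gdp = 29.0      # assumed, roughly $29T
print(f"Nvidia market cap / US GDP ~ {nvidia_market_cap / us_annual_gdp:.0%}")
# ~12%, consistent with the figure quoted above.
```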

And that does raise a bunch of interesting questions. Very quickly, I'll point to three and then turn it over to my fellow panelists. One of them is the traditional concern about economic market power, that is, the effects on pricing and innovation and so forth. Lots has been said about that. A second is political power.

That is, the tendency of government to want to coerce the largest firms and the tendency of the largest firms to want to influence government. It's not really a surprise who was sitting behind the president in the Rotunda yesterday in Washington. It was the representatives of the companies I just mentioned.

And then the third, which I think is a little different and probably the least studied in the social sciences, although there's some work here at Stanford on it, is basically market power over people's attention and information flow. And that to me is actually a very interesting issue. One of the reasons I think it's interesting is that even if you take something from recent years like Internet search, which was very concentrated, Google had, and still has, a huge position in search.

They were surfacing navigational links to people. And in the future, and even today, 50% of searches already do this, they're just giving answers. They're just telling people, here's the answer to your question, and maybe it'll be 90% in the future. And that, to me, raises a whole interesting set of questions about who will get to decide what type of information is provided to people, who gives them the answers to things, particularly when those are not factual questions but questions about how one should think about a particular topic and so forth.

And I don't think that's a call for regulation, actually, but I do think that's a call for a lot of thought. Hopefully some of it will happen here at Stanford, because it's a hugely important topic. I'm going to turn it over to Steve.
>> Steven Davis: Okay, thank you.

I want to thank the organizers for putting me on this panel rather than the one before lunch. It's much easier to be upbeat and sketch an optimistic future under this topic. And that's largely what I'm going to do. So you've already heard from John that one of the exciting things about AI is its potential to accelerate the innovation process.

We already have some clear examples of that in the case of protein folding and materials discovery, where it's fairly well documented how that can happen. And then, back to something that Amit said, there are several credible studies that have concluded that, just based on the more mundane applications of AI tools to things like software coding, document editing, and the like, the kinds of things that are already very much a part of everyday work life for some of us, these can add one to one and a half percentage points per year to labor productivity growth over the next decade.

That's an enormous blessing, okay? It's an enormous blessing and an enormous opportunity, and we want to make sure that we take advantage of it. One thing I worry about is that the scope for misguided regulations and a lack of clarity around property rights and liability risks in the legal sphere can both slow these breakthroughs and stymie their commercial applications.

So that is also a growth industry of sorts: people consulting on how to secure property rights, clarify property rights, and protect yourself from the possibility that you develop some AI tool that gives wrong or politically inappropriate answers and you're then subject to lawsuits. So I think there's a real need to clarify property rights and liability risks.

Okay, that's kind of one policy prescription. There are good reasons to foster basic research in AI, as there are in other areas. But beyond that, my primary stance toward policy around AI, I think, should be: first, do no harm. It's easy, and academics are very good at it, to envision potential bad paths that technologies might go down, including AI.

And then to imagine we have to undertake lots of preventative steps to head that off; I'm not in favor of that approach. I have close to zero confidence in the ability of my esteemed colleagues to predict exactly how AI will play out. It may well play out in some ways that create problems, and I think if it does, we should address those ex post.

And here I'm speaking about the economic sphere. I just want to digress for a minute and pick up on something related to what John said. There are huge concerns about the potential to use AI tools for political repression, for propaganda, for control of the information flow, and so on.

I want to acknowledge those concerns. They intersect somewhat with economics but also venture into other territories like national security. I'm going to acknowledge those concerns, but I'm not going to dwell on them today. Back to the narrowly economic sphere, and here I want to talk about the labor market consequences in particular. As I suggested, we don't really know how AI is going to play out in the labor market in aggregate.

We have some well-documented case studies where it looks like the application of AI tools largely serves to raise productivity at the low end of the worker productivity distribution and bring people up to something like the average. Customer support services might be an example: the tools help the least productive customer support staff become more like the person who's in the middle of the distribution.

We have other examples, the materials discovery case is one, where it looks like the application of AI tools primarily boosts the productivity of the most skilled practitioners in the field. In the case of materials discovery, the AI tools can throw up all of these suggestions, and then somebody's got to sort the good from the bad.

And it appears that it's the highly skilled, highly experienced scientists who are most able to do that. So it's too soon to say whether the application of AI technologies is going to compress the productivity and wage distribution or expand it. Now, based on previous experience with so-called general purpose or broad application technologies like AI, it's reasonable to conclude that the rollout of AI technologies in the world of work will happen gradually over many, many years, despite the many examples you can point to of particularly potent applications in certain settings.

Why is that? Well, there are adoption costs, there are the regulatory barriers that I alluded to earlier, there's the need for people to learn how to use these tools effectively, there are adaptation challenges within organizations, and there's a slow reallocation of labor and capital in response to new opportunities.

That's how it played out with steam power, that's how it played out with electric power, that's how it played out with the microprocessor. And I think it's most likely to play out that way in the AI space as well. But suppose I'm wrong. Suppose there really is a tremendous disruption and transformation in the labor market in the next five or ten years.

Even then, I'm not too worried about the potential for AI advances to generate huge disruptions in the labor market. And I want to explain why, because I think the mental model that people often have in mind when they think about the potential for AI to create havoc in the labor market is they think about what's happened in the manufacturing sector in the United States in recent decades.

Okay, that's not a very good model for how the labor market effects of AI are likely to play out, even though it's a natural one to turn to. Let me explain why. First, look at basically every postwar recession in the United States, every recession since 1945; Covid is a bit of an exception for obvious reasons.

But the rest are all characterized by highly concentrated contractions in the manufacturing sector. Manufacturing is a cyclically sensitive industry, and manufacturing production tends to be highly localized. Certain kinds of production, autos in Detroit and so on, would take place in certain areas. So what happened in the manufacturing sector is that when there were big fallouts, big job losses, and there were, they tended to be very concentrated in time and space.

They tended to be very concentrated in time and space. So you have lots of job losing workers looking for the same kinds of jobs, they brought the same kinds of skills at the same time in the same places that made it much harder for the manufacturing job losers to find new good jobs, made it harder for their families.

It made it harder for the communities that were impacted by, say, the shutdown of a steel mill. AI job displacement effects are not likely to have that character. AI technologies, as near as we can tell so far, are going to have effects across many industries, across many occupations.

They're not particularly concentrated in cyclically sensitive sectors like the manufacturing sector. And the economy as a whole has moved away from goods producing, cyclically sensitive sectors towards more service producing sectors. So that's one reason that I don't think the manufacturing experience in the last several decades is a good model for how AI is likely to play out in the labor market.

A second and distinct reason has to do with the big shift towards remote work that we've all lived through, or many of us have lived through, we've all seen since the pandemic struck. So think about what that does to the labor market for jobs and for workers who are suitable for either fully remote work, or more often, some kind of hybrid working arrangement.

The advance of remote work basically partly untethers where people live from where they work. And for people who are in jobs that have some remote capacity, it effectively enlarges the geographic reach of the labor market. Okay? That means for the jobs that are on offer, and for the workers who are seeking those jobs and have those skills, the labor markets effectively become larger in size.

And so there's a large body of research in economics that documents what might seem obvious, but it's good to document it nonetheless: if you lose a job in a large, thick labor market, say, a major urban center where there are many jobs of the sort you might want to have, it's easier to find another job.

Hence unemployment spells tend to be shorter and earnings losses tend to be smaller for those people. Well, we've moved in that direction in the economy as a whole, and especially in many of the AI-exposed jobs; they are also often, think about IT sector jobs, finance sector jobs, and so on.

Business services, scientific activities, they often lend themselves to some kind of hybrid working arrangement, if not a fully remote working arrangement. One last point on this. There's another shift in the way labor markets operate that has been unfolding in the wake of the pandemic. It's still very much unfolding, and it's also driven by the shift to remote work, or associated with it.

But it's a little bit different than what I described before. If you look at the average large employer or the average industry around some large employer, the industry network around some large employer, their workforces are becoming more geographically dispersed. Okay, so you might have a tech firm that's headquarters in San Francisco.

And before the pandemic, the vast majority of its workers might live very close to San Francisco. But now, because some of them are on hybrid or fully remote working arrangements, their workforces are spreading out. So on average, each individual employer in the United States is moving towards a more geographically dispersed workforce.

What does that mean? That means if that employer or some particular narrow industry really tanks, the first-line geographic effects, the fallout from all the job losses, will also be more geographically dispersed than would have been the case before the pandemic. And again, same idea: if you're going to lose your job, I'd rather lose my job in a situation where not very many people in my locality are losing their jobs than in a situation where everybody in my locality, or many of them, is losing a job.

So that's another reason to think that the economy is going to be more able to adjust and adapt to the kinds of transformations that are likely to come with AI than would be suggested by the experience we had with the deindustrialization of parts of the US economy in terms of jobs.


>> Amit Seru: That was great. You're very persuasive. I'll strike off all labor questions now.
>> Steven Davis: All right. Well, I'll be quiet.
>> Amit Seru: No, no, thank you.
>> Justin Grimmer: So I want to pick up on some of the themes that we've already heard about. I was asked, as a political scientist, to think about some of the regulatory barriers to effective policy on AI.

I'm going to say it's a collision of two things. We have a classic regulatory trade-off: how do we facilitate experimentation while preserving safety? Where do we fall on that spectrum? And then there's going to be a particular problem with AI, similar to some other sectors, but I think it's particularly acute here, in that there's going to be a real expertise asymmetry.

There's going to be a lot more expertise in the private sector than there's going to be in the public sector, or even at a university like Stanford. That's going to create some trouble. So when I think about AI, I usually say it's too broad a category.

If I want to tell you what's going to be a barrier to effective regulation, we need to think about types of AI. So I wanted to spend a little time making sure I had some types in mind. Here's one example. A big chunk of what's going on in AI right now is prediction algorithms.

It is what it sounds like. They sort of take some facts about the world and they try to make some forecast. This is being used within government to do things like make bail decisions. Should we allow someone out before they go to their criminal trial? It's also being used in healthcare companies, both inside and outside of government.

Outside of government with companies, and inside government with Medicare. That is, given a patient profile, how long should someone stay in the hospital? What sort of care should we recommend? And within companies, we see these prediction algorithms being used for things like HR decisions: whom should we hire, whom should we promote? Is this person likely to be a productive employee?
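To make concrete what a prediction algorithm of this kind looks like, here is a minimal sketch: a simple classifier fit on historical outcomes that then scores a new case. The feature names and data are hypothetical, purely to illustrate the mechanics, not any system actually used for bail or hiring decisions.

```python
# Minimal sketch of a prediction algorithm: fit a classifier on historical
# cases, then produce a risk score for a new case.
# Feature names and data are hypothetical illustrations only.
from sklearn.linear_model import LogisticRegression

# Each row: [age, prior_arrests, prior_failure_to_appear]; label: 1 = missed court date
X_train = [[22, 3, 1], [45, 0, 0], [31, 1, 0], [19, 5, 1], [52, 0, 0], [28, 2, 1]]
y_train = [1, 0, 0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

new_case = [[30, 1, 0]]
risk = model.predict_proba(new_case)[0][1]  # estimated probability of missing court
print(f"Predicted risk of failure to appear: {risk:.2f}")
```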

Is this person likely to be a productive employee? I think in this instance we can see this collision of experimentation and safety, for example, with a bail decision as just an example. But obviously this appears many other places in the economy. To run the best experiment, we should randomly be letting some people out.

We would have some prediction and we would just see: well, if my prediction says this person is going to go commit a bunch of murders, let's let them out and see how that shakes out. That's a bad idea, right? So I think we can all see pretty clearly where we fall on that experimentation-safety trade-off.

We can just say maybe we're doing okay. In many other areas, it's going to be much less clear. It's going to be substantially more murky, and it's going to be all the more murky because there's going to be technical questions about how we use the data that we obtain in order to make a determination about where we even are on that spectrum.

And it's possible that government can come in and weigh in on that. It's possible there could be other remedies for figuring out how to use that information. A second big area of AI is broadly what we think of as processing. This is about using information in order to pilot autonomous machines and vehicles.

I'm thinking drones, cars, robots. Here again, I think that safety-experimentation trade-off is pretty clear. If you're the CEO of Waymo or you own an autonomous car company, it's like, open up the streets, let's go. For some reason, on my block for many months, every Saturday, an autonomous car comes and simulates doing a drop-off at my house specifically.

So I'm not sure how I ended up on that list. But that is again, I think a place where it's pretty clear where we fall on that spectrum. That's pretty safe. You know, they haven't run over any of my kids or any of my dogs yet, so hopefully they keep that going.

It is much more murky as we think about deploying this technology in a city. And all the more murky as we think about catastrophic things that could happen as we move to autonomous vehicles. Who is responsible if a broad collection of drones or cars or ships or robots are hacked and they perform some very destructive task?

Okay, that's going to be a thing. I think Steve alluded to that. That's going to be something that potentially could be figured out at the governmental level. And the last category is the one that I think has people most excited, what I'll call the generative category.

In this category, folks will use AI or AI-like tools to create text and software. They'll discover potentially new proteins that might be folded, make new discoveries in physics, maybe prove some theorems in mathematics. There's a lot going on here, and the regulatory issues abound. One big issue that maybe doesn't fall on this safety-experimentation trade-off is copyright.

How is it that we can incentivize people to still produce real text if all of that is just going to be consumed by an AI and the rewards are allocated to a company? But there are other interesting issues, which maybe we'll get into in the Q&A, about how we handle competition between international belligerents or allies in the use of AI.

For example, how do we think about competition between the U.S. and its adversaries, like North Korea, in deploying various models to hack the other government? How should we think about counteracting those hacks? This could be the subject of some international protocol, or there could be other ways in which we think about navigating that, including letting models proliferate and allowing there to be competition.

Okay, so to get back to this: how do we think about where we fall on this experimentation-safety spectrum, and how do we get government to get this right? Because if it gets it wrong, it could have real economic consequences.

At the moment, I think we have to solve the asymmetry. The asymmetry is that tech companies like OpenAI have a lot more money to hire the very best people, and they have a lot more resources with which to hire those people. So that's one big problem. We have to figure out how government can hire the very best people.

But there's going to be a second problem that's a bit more of a type problem. And to give you a sense of this problem, I want to relay a conversation I had with one of my colleagues the other day. We just saw a truly excellent presentation about privacy.

It was a negative take, though; the person was very negative about the current state of privacy in the United States. And I remarked, you know, just once I would like to go to a presentation that started off, at least initially, with: hey, over the last 25 years we've gotten some pretty cool stuff, and now there are trade-offs.

But my life is better because of the things we've had for the past 25 years. And this person, who is an academic here at Stanford, looked at me and said: what's better about the last 25 years in technology? My jaw was on the floor. I was thinking of a variety of things.

The first thing I could think of was Google Maps. But this gets to the point: I think there's going to be this experimentation-safety trade-off, and a question of who's attracted to do what. If you want to build, if you want to make AI do things, I think there's an attraction to go into the private sector, build those tools, and see them deployed.

And you may not think about safety enough. And certainly it's going to be hard for you to internalize those collective costs. On the other hand, there are a lot of people who don't know much about how to build, but feel like they know a lot about how to regulate and promote safety, even if they don't have a lot of evidence for that safety.

And they're exactly the kind of person who could be attracted to those government jobs. And we really need to figure out how to solve that sorting problem in order to create effective AI policy.
>> Amit Seru: Did anybody want to respond to anything you heard?
>> Steven Davis: Can I pick up on two things?

Two aspects of one of Justin's examples, the Waymo example, which I found quite helpful. First, I worry that in the Waymo case we will require levels of safety that far exceed what human beings can deliver before we let these vehicles loose on the street in a big way.

And that strikes me as probably the wrong standard. So that would be an example of overregulation, in my judgment. But you also mentioned what I think is a huge issue where there is a vital role for policy, which is the scope for nefarious actors, whether it's other countries or terrorists or whatever, to hack, say, the system that guides Waymo vehicles and instructs them in some way to start behaving dangerously or something like that.

That's a real problem. And it cuts across many different areas. And it's pretty clear from constant news reports that many parts of the government, many private organizations, have insufficiently invested in protecting their internal IT systems. And you can easily see why no particular private organization is likely to put enough weight on the need to protect our IT systems from nefarious external actors.

So there's a public goods problem there. And I would be very much in favor of efforts to encourage, whether through subsidies or other means, investments in how we protect ourselves from these nefarious actors who want to use these technological advances, these new systems, to actually harm us.


>> Amit Seru: So can I ask something that I heard from the three of you in different ways? It seems like who controls the information is going to be very important, whether it's the government, whether it's big tech, whether it's big firms. Leveling the playing field seems like something that somehow has to be brought to the table.

How do you think we do that in the current environment where, you know, there are a few firms which have a lot of resources, like you said, Justin, and at the same time, AI is around us, it's moving at a fast pace. So sure, we don't want to slow this, but will it be too late to reverse anything when all is said and done?

So how do we think about this? And I'm lucky that I get to ask the questions, so I'll start there.
>> Justin Grimmer: There's an interesting model. It's the Meta model. Zuckerberg talks about this a lot, where he's going to, in his view, solve both the international conflict issue and some of this leveling of the playing field through an open source model for many of these technologies.

And the idea with the open source model is that we're going to proliferate code. It's not going to sit on a code base in my company privately; I'll let anyone see it. And that's true of Meta's large language models, which are the things that power their chatbots. You can go out right now and download them.

In fact, I was receiving a text message while here about a research collaborator who had then trained one further for a thing that we were working on. So it's very productive in that way. There are trade-offs there. There's obviously going to be a trade-off on intellectual property, and we want to make sure, if we push this open source model, that people feel invested and that they're going to receive something from training those models.

And there's also what we might call the deviation problem; we've talked about some collective good problems. There's a real issue if a government decides: thank you, United States, we'll take all your open source code, we'll keep our code base private, we'll see everything you're developing, and then we'll do some development on our own. That could make it harder to keep up.

So finding a way to facilitate this open source approach, both internationally and domestically, I do think would level the playing field on access to information. That doesn't work for everything, but it works for several of the areas I discussed.
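To make the open-weight idea concrete, here is a minimal sketch of downloading and running one of these published models locally. It assumes the Hugging Face transformers library; the model identifier is illustrative, and actual releases may require accepting a license and substantial GPU memory.

```python
# Minimal sketch of using an open-weight model locally (assumes the
# Hugging Face `transformers` library; the model name is illustrative and
# may require license acceptance and significant hardware).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # example open-weight release
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open-weight models let researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# From here, the weights can be fine-tuned further on domain-specific data,
# as the collaborator mentioned above did, rather than starting from zero.
```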
>> Amit Seru: John, what would you say?
>> Jonathan Levin: Well, there's a little bit of a question of what is the source of competitive advantage that allows certain firms to win in big technology markets, and will that be different with AI?

What will be the sources? You know, at the risk of being all wrong, I'll make some predictions. I don't think, in the end, sheer scale of computation will be one of them, because, maybe today it is, but eventually the cost of computation will fall, efficiency will rise, and it's hard to differentiate on that.

You know, my Nvidia chips are no different than Steve's and Justin's Nvidia chips at the end of the day. It could, I don't know that it will, be data. Some people think it'll be a huge competitive advantage if you have a source of data that's proprietary, particularly as we exhaust the public sources of data; say, in self-driving cars, that's perceived as a huge source of competitive advantage.

If you have been driving around like Tesla with cameras on your cars, that allows you to train your algorithms better, and that may persist over time. It's not clear. Another is algorithms, but actually those are pretty much out there in the public domain. There are not a lot of proprietary algorithms.

There's proprietary know-how in terms of how to use them and tweak them and so forth, and that may be sustainable, and the tech companies that are investing a lot in having real experts and building teams, that may differentiate them over time. And then the last one, which I think for sure will be a source of competitive advantage, is actually just the traditional things that allowed these companies to become big so far, which is mostly network effects, essentially.

They run big marketplaces, they run big media platforms, they pair advertisers with users, they have an economic model that has a flywheel to it that increases with scale. And I do think that in some sense will be amplified by these technologies. So if you were to ask, is there a risk that the big companies just get bigger and bigger and bigger?

I think that's why their stock market valuations are so high right now. Well, that's one of the reasons their market values are so high right now: because people do think that is an issue.

And that does then raise the questions about information and so forth. And it's not that people will go to Google for answers because Google's answers are so much better than what you would get with an open source model or with something that someone in a Stanford lab could create for one-thousandth or one-millionth or one-billionth of the cost.

It's that you just go there because you're used to going there, you're sort of locked in, and there are other reasons that you're there. And I do think that's something we should be incredibly thoughtful about, because that is a huge amount of influence, in some sense, in the hands of a small set of platforms.


>> Steven Davis: Yeah, I share the views John just articulated, but I'm also struck by the following. There's no doubt these network effects are tremendously powerful on many platforms. But I actually worked on Microsoft's behalf, consulting on the remedies phase of the government's case in the late 90s.

And if I look back at that time, what was the fear? That Microsoft was going to use some combination of the Windows operating system and Internet Explorer to dominate the future path of technological advance. And, well, I don't know, maybe some people in this room don't even know what Internet Explorer is.

So that didn't pan out. And then I think about, well, there was a time when Facebook just seemed to be so dominant, and now, we were talking about this at dinner last night, at least the younger, hip folks know that Facebook is old news and there are newer things. And despite the importance of these network effects on platforms, new platforms are coming out all the time.

So there's something about a vibrant market system which, despite the power of these network effects, still allows for competition. I think we want to facilitate that where possible. And it kind of takes me back to the position that if we did get to a point where some single firm or small collection of firms had such dominant control of something that seemed vital to our security, to our lives, to the economy, then ex post we do something.

I mean, that's sort of what we did with Standard Oil, rightly or wrongly, a long time ago, and I think it's what many people saw themselves doing with respect to Microsoft in the 90s. But looking back, to me it seems like it was a tremendous government effort for something that wasn't really such a worry after all.

So I would again just take a bit of a cautious wait-and-see approach, and ex post, if things really work out in a way that's alarming, then yes, we need to take some policy response.
>> Jonathan Levin: I agree with that as well.
>> Amit Seru: Can I ask another thing, which is again related to some of the things that I heard, both for policymakers and regulators, but also the public?

Because things are happening so fast and coming from so many directions, there is potential to be fearful or misinformed. In the case of regulators, like you said, Justin, they don't even have the resources to keep up with all the changes. So is there a role for education or awareness on all of these issues that gets us there more effectively?

How do you all think about this? I knew I would get you.
>> Steven Davis: I think that was directed.
>> Justin Grimmer: Good, yeah, I'm happy to start. So I think the first-order issue, before we start thinking about whether we can educate the public to correctly consume AI or offer best practices where these things are deployed, if we're thinking about regulation, would be correcting this imbalance like I discussed.

So thinking through how to get regulators in place who understand even the basics of the technology, so they can work with the companies and speak a common language, I think that's first order. As for how the public can use this and how they perceive it, there's obviously a big literature on fairness, thinking about what we'd even define as fair.

I'd say two things about that. First, when describing these algorithms to the public, there's a tendency to look for the cheap click, or the cheap headline, or the cheap academic article, which is to say, bail algorithms are biased. But, as Steve alluded to with Waymo, you don't compare institutions in an absolute sense.

It's not interesting that bail algorithms are biased. The interesting policy question is: are bail algorithms more biased than a human, how much more biased, and in what direction? And is that better or worse according to some welfare calculation? So that's the first thing I would say.
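A stylized sketch of the comparison being described: rather than asking whether an algorithm's decisions show a disparity in absolute terms, compare its disparity against the human baseline it would replace. All of the numbers below are made up purely for illustration.

```python
# Stylized comparison: is the algorithm more or less disparate than the
# human decision-makers it would replace? All numbers are hypothetical.
def release_rate_gap(decisions):
    """Gap in release rates between group A and group B (1 = released)."""
    rate_a = sum(decisions["A"]) / len(decisions["A"])
    rate_b = sum(decisions["B"]) / len(decisions["B"])
    return rate_a - rate_b

human_decisions = {"A": [1, 1, 0, 1, 0, 1, 1, 0], "B": [1, 0, 0, 0, 1, 0, 0, 0]}
algo_decisions = {"A": [1, 1, 0, 1, 1, 1, 0, 1], "B": [1, 1, 0, 1, 0, 1, 0, 1]}

print(f"Human release-rate gap:     {release_rate_gap(human_decisions):+.2f}")
print(f"Algorithm release-rate gap: {release_rate_gap(algo_decisions):+.2f}")
# The policy-relevant question is this relative comparison,
# not whether either gap is exactly zero.
```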

So educating the public just about how to think about this, I think, is really useful. There's a second tendency that goes in the other direction, where people think they can just rely on the generative models to produce content or to write something for them, and then they're shocked to find that it's not as good as advertised.

So I've documented an instance where an election wasn't certified because someone erroneously said they had evidence of voter fraud, because ChatGPT screwed up the analysis. We've all read bad things ChatGPT produced; I receive emails regularly with the GPT sign-off at the bottom that people forgot to edit.

So I think teaching people about what the limits of these are, appropriately conveying what these models can and cannot do, matters. But I think the overarching thing is solving this regulatory asymmetry.
>> Amit Seru: John, does Stanford have a role?
>> Jonathan Levin: Well, I think there are always concerns about whether the federal government can acquire the relevant technical expertise across a huge range of subjects.

AI is just one of many of them, and there are lots of reasons why it's hard to do: the federal government has a complicated HR system, it's not flexible, it can't necessarily offer market wages to people. In certain places, I think the problem is probably even worse than people think.

In certain parts of the military, you really would like to have top cyber people. Can the government really go get top cyber people in national security? It often seems hard to know, because it's all classified, but it does seem potentially hard to get the very best computer science graduates out of Stanford to come work in national security and work on cyber. In many other cases, I think it's much less of a problem, actually, because so much of public regulation is openly debated and there can be outside expertise and so forth. And do you need to be able to code up an LLM to have an opinion on how OpenAI works and so forth?

Not necessarily. You need to be able to ask good questions, know whom to consult, and tell a good answer from a bad answer, but you don't necessarily need deep technical expertise for a lot of problems. Having said that, I do think there are some things that the federal government could do very simply that would enable more access to technical leadership.

One thing the federal government historically has done, but not nearly as much as it could, is to bring people out of academia, out of universities, into government to work there, not forever, but on a short-term basis. And people do that.

We've had lots of people here at Stanford, including the director of the Hoover Institution, who have been in government multiple times. But there are ways we could make that far more flexible. For example, you could do it remotely. One of the barriers to having people go to work in government is that you've got to move your family to DC for a year.

Well, now, as of this morning, you need to do it in D.C., but as of yesterday you didn't. And you could imagine somewhat more flexible arrangements that would allow the government to make use of people from industry and academia. And you see that happening a little bit.

Government uses people from industry and makes use of people from academia when it feels there's a big dearth of knowledge. I think that just takes a little creative thinking. And actually, from a university perspective, it's a fabulous thing. It's a great thing if you're a professor at a university and you've got expertise in something the government's looking at to get the chance to go for a year or six months and help formulate and advise on policy.

So it would actually be a good thing for universities as well as for the federal government to try to have more of a science-policy interface.
>> Amit Seru: So why don't I invite questions now from the audience. As before, you can line up at one of the microphones posted in the aisles, and I'll collect three questions at a time for the panelists.

And so once you ask a question, please take your seat.
>> Speaker 6: It's been suggested that the capital cost of AI research is so staggering that universities will be frozen out. I'd ask President Levin if he agrees with this and if so, what can the universities do?
>> Speaker 7: I have a question about the data center.

So you just mentioned that OpenAI is going to invest $100 billion in training centers and data centers, right? So I'm thinking, from the energy perspective, from the demand side, there should be a lot of energy requirements. So I'm just curious how AI will boost the US energy economy in the future,

especially with such a huge demand side from the artificial intelligence training perspective.
>> Speaker 8: Can you please talk about your concern level with respect to competition on AI vis-a-vis China, for example?
>> Amit Seru: All right, three questions. So some for you John.
>> Jonathan Levin: All right, I'll start with the universities. So I think everyone is pointing to the fact that it costs $100 billion or something to train the latest LLM, and universities don't have, as much as the CS faculty would love it, $100 billion.

It's unlikely to happen, and it's certainly not coming from the federal government right now. And just to give you a sense of the scale of that, we just opened one of the largest academic high-performance GPU computing clusters in the country this fall, Marlowe, and it has 250 of the top-end Nvidia chips.

Meta has 250,000 Nvidia chips. So it's a thousand-to-one differential. Now, of course, a lot of that is supporting their operations and so forth, but that's a very stark differential, and we will not close it. We are not going to have more chips than Google or more chips than Amazon.

They have business models supporting it, and we have philanthropy and federal grants. So that gap is not going to close. The federal government could help by investing in research computing for the country; that would be a good thing, and I hope they'll do that. But I'm actually not worried about universities' research capacity and ability to do great work.

And the reason for that is that what universities do is just different from industry. Industry is going to make lots of advances to create products and services, and universities do a lot of work to create fundamental knowledge. Those are different types of questions. And in the end, we have basically two advantages, because we're going to have a human capital strategy.

We're going to benefit from having great people, and we have two things that will continue to attract great people, I believe. One is we just give them so much freedom to work on the things they want to work on, in a way that, in the long term, companies don't.

And second, we bring people together from across tons of different disciplines in a way that companies have mostly not been able to do, with very few exceptions. And that's exciting, because you're around graduate students from other fields, you're around faculty from other fields. And so my belief is that as much as you could go off to industry and make a lot of money, and some of our faculty will do that and some of our graduate students are doing that, we will still be able to attract terrific people into the university, and they will do great work.

They may have to work with a company if they want to do it on $100 billion of chips, but they will still be able to do great work because of the environment we create that's so conducive to knowledge production.
>> Amit Seru: Anybody want to take the energy question?


>> Jonathan Levin: I'll quickly speak to that. I actually think there's an incredible opportunity created by the growth in demand for energy. The demand is a little bit different, because a lot of it could be electricity that's co-located with data centers. There's a great opportunity for the federal government, if it got a little more creative with licensing of power generation, to use this as an opportunity, funded by private companies, to try out some technologies that would be both great for electricity generation and probably great for the energy transition over time, like nuclear, like geothermal and so forth.

That's actually one of the really exciting areas in energy, and you could leverage the high private demand for dedicated energy in order to do this. And that would be a very creative strategy for the country to take in energy generation going forward. I'm answering from a US perspective;

globally it'll probably be a little different.
>> Amit Seru: Then there was a China question. I'll add a wrinkle to the China question, which is about all the regulation when you have two countries and winner takes all. Is there going to be good policymaking and good regulation when you're competing with China?

How do you all think about this?
>> Steven Davis: This is reaching outside my domain of expertise now, but I already hinted earlier that I'm quite concerned about the potential for nefarious actors to use AI systems to, in China's case, extend and deepen its surveillance state, and to help other autocrats around the world extend and deepen their surveillance states and their political repression.

That's kind of outside the narrow economic sphere, but I think it's a very serious concern, and it's not just about whether we, so to speak, win or they win the AI race; that's not the framing that helps you think about that issue. There are tools now for political repression and propaganda that autocrats are happy to use, and that is quite worrisome.

And I'm not quite sure how to deal with that. And then there's the set of concerns about a different set of concerns about nefarious actors hacking into our AI based and information systems and using them for cyber attacks, or just to undermine and corrode the performance, the trust in our system and so on.

I worry about those a great deal, but they are, you know, kind of beyond the narrow economic sphere and I don't feel like I have great expertise to comment on them beyond that.
>> Amit Seru: Let me take the second set of questions and then we'll wrap up.
>> Speaker 9: A lot of your talk has been about the future of AI, but there's a real question right now about the potential systematic abuse of public data under fair use principles.

Can you talk about this from an economic and more importantly public policy perspective?
>> Speaker 10: One question or one issue that some of you brought up was this idea that so many resources are going to be needed to build these models that there might end up being a small number of players who actually work in this space.

How do you think about how this industry can work in a competitive way rather than having a couple oligarchic monopolistic companies and the impact that has on long term innovation?
>> Speaker 11: I wanted to ask the panel if they have any opinions about closed source versus open source when we have these longer-term goals, as far as maybe a more equitable landscape for other companies to rise, and defending against monopoly and also nefarious actors.

Are there any strong viewpoints on whether the future should be open source or closed source?
>> Justin Grimmer: Yeah, I'll talk about fair use. I think this is a real policy problem that needs to be figured out. Just so you have a sense of what the issue is: when training one of these large language models, they need lots of text, so they're ingesting everything from the New York Times and the Washington Post and novels and everything they can scrape off the Internet.

This is then internalized, trained into the models, and then regurgitated or reported back as the models are queried. The question then is, what do you do with the labor that comes from something like the New York Times, which was behind a paywall? There's a copyright issue.

Just think about the trade-offs. I don't have a good solution to this, but just see the two sides here. Clearly we want to continue developing these generative large language models; that's very important. On the other hand, it seems illogical that we would carve out an exception for these models to say you can just violate copyright however you choose, because we think building these models is so important.

There are a number of ways to think through what would have to happen in order to make that make sense. The biggest risk, I think, is that a copyright case or something like that could result in these models not being allowed to be trained for some period of time.

I think that would be an extreme decision from a court, but it could really stymie progress. Surely there's some economic solution to resolve both sides of that.
>> Steven Davis: Well, this is back to some things I was saying earlier. We know that clarity over property rights is essential.

Okay. Now, there's still the issue of how you assign the property rights, which has distributional consequences and, in a world in which it's costly to transact, can also affect the efficiency with which this underlying data source is used. So we need to think through those. It looks like this is playing out in the courts now, at least with respect to major newspapers.

And you're right, it would be good to settle these things expeditiously and, setting the distributional concerns aside, come to some clear regime about what would be appropriate compensation when Google or whoever is tapping the New York Times archive. We need a solution, we need an answer to these, and in some sense

it's at least as important to get a clear answer as to get a particular answer.
>> Amit Seru: John?
>> Jonathan Levin: I'll add something about the open source question. I think having a supply of open source models, whether it's from Meta or from other companies, is a huge boost to both competition and future innovation.

And here's why. Those open source models, by the way, are not completely open, in the sense that you don't know how Meta estimated its model or the data they used, but it's made available to people for free in a form called open weight, where you can build on it by continuing to refine it and fine-tune it and build on top of it.

And that has two incredibly valuable benefits. One is it gives, say, a small entrepreneurial company a chance to have a starting point that's not at zero, but at a high level, to then go on and build a product and service. So it's a huge enabler of innovation and competition.

It's a constraint on market power. It's terrific for economic competition. And secondly, in an environment like academia, that's just an incredible resource to have available, because if you want to go do research and learn about these models, you have them, and if you want to build in some different direction, just for curiosity or for some other reason, scientific advances, you can do it and you're not beholden;

you don't have to pay a big company every time you do a computation. And so I think we should view that as a tremendous resource, both economically but also just in terms of scientific opportunity.

And I'm delighted that there are a couple of companies pushing in this direction. It has a lot of positive benefits.
>> Amit Seru: If only there were simple answers to these things. But our hope was to give you all the questions, and I feel that I learned a lot from this panel, and I hope you agree.

So let's thank the panel.


The panel discussion, "How Should the US Economy Adapt to the AI Boom?" featured the following following scholars:

Jonathan Levin, President of Stanford University and Bing Presidential Professor and Professor of Economics, Stanford Graduate School of Business

Steven Davis, Thomas W. and Susan B. Ford Senior Fellow and Director of Research, Hoover Institution

Justin Grimmer, Senior Fellow, Hoover Institution and Professor of Political Science, Stanford University

Moderator: Amit Seru, Senior Fellow, Hoover Institution and Steven and Roberta Denning Professor of Finance, Stanford Graduate School of Business
