The Hoover Institution and the School of Engineering at Stanford University held the DC launch of the Stanford Emerging Technology Review at the Hoover DC office on Thursday, January 25th, from 4:00 to 5:30 PM ET.

The discussion highlighted the findings of the Stanford Emerging Technology Review (SETR) Report on Ten Key Technologies and Their Policy Implications.

This panel discussion was in person at the Hoover DC office.

>> Amy Zegart: Welcome, everyone, to the launch of the Stanford Emerging Technology Review. I want to thank you for joining us here today. My name is Amy Zegart. I'm a senior fellow at the Hoover Institution, and I have the honor of co-chairing this inaugural initiative, which is a joint effort between Hoover and the School of Engineering.

The co-chairs of this effort are Condoleezza Rice, the director of the Hoover Institution; myself; the dean of engineering, Jennifer Widom; and my economist colleague and Hoover senior fellow, John Taylor. We have a packed schedule ahead of us for the next hour and a half, so get your drinks and get comfortable.

We are so delighted to kick off this program with a bipartisan effort, something that is rare and precious here in Washington. We are so honored to have Senator Young, and then, after he's offered opening remarks, Senator Warner. Now, Senator Young is, as you know, from Indiana, and Senator Warner is from Virginia.

But they are national treasures, and they are leaders in the field of emerging technology and policy. Senator Young currently serves on the Committees on Finance; Foreign Relations; Commerce, Science, and Transportation; and Small Business and Entrepreneurship. He has been one of the leading lights on all things technology and policy, including national security.

He's a member of the Senate's AI caucus. He is a co-sponsor of the CREATE AI Act, something very important that we talk about in our report. The Act is designed to establish the National AI Research Resource. Think of this as a public resource of compute, data, and tools, so that universities across the country and civil society have access to the critical ingredients of AI that, right now, only a handful of companies have.

He's also led in a number of areas. He's introduced the Global Technology Leadership Act, a bipartisan bill that aims to strengthen our national competitiveness in technology by establishing an Office of Global Competition Analysis to assess how the United States is faring in key emerging technologies such as AI. Please join me in giving a warm welcome to Senator Young.

>> Senator Todd Young: Thank you, Amy. Well, thank you, Amy. Thanks to all of you. I am grateful to play a bit part in this launch event.

And congratulations to the Hoover Institution and to the Stanford School of Engineering on the launch of the Stanford Emerging Tech Review 2023. I'm already looking forward to the next edition. Abraham Lincoln was perhaps the most prominent Hoosier in our state's great history, and he was the only United States president to have ever held a patent.

And I predict that that record won't be defeated in this coming election. Lincoln was fascinated by innovation. He was fascinated by the world. But he was particularly fascinated by new and useful things that were fashioned from what he called the fire of genius. Lincoln once reflected that man is not the only animal who labors, but he's the only one who improves his workmanship.

This improvement he effects by discoveries and inventions. Innovation, improvement, as Lincoln liked to style it, is as much a part of America's character as our faith in self-government itself. More than that, the two are inseparable when you think about it. Waves of innovation across industrial revolutions have improved the human condition.

They've ushered in an era of prosperity previously unimaginable throughout history. But they do much more than create wealth. Innovation provides the stability that makes government by consent possible. It creates opportunities that allow people living amongst it to flourish, and tools that allow them to do so in peace and in security.

It follows naturally, then, that our laws and institutions should keep pace with technological advancements, help unleash the creativity of our citizens, and encourage additional discoveries. And Stanford University has produced an invaluable resource in that effort, which brings us here today. Now, as you know, we often disagree on things here in Washington, important things. But on the point that American innovation is key to the future of this country, our prosperity, and our national security, there is, blessedly, bipartisan agreement.

Several years ago, I struck up a conversation with Senator Chuck Schumer. Now, don't judge the substance of the conversation by the venue; it was in the Senate gym. Senator Schumer is a Democrat from New York; I'm a Republican from Indiana. Of course, we're very different people. Schumer, a blue state Democrat.

Me, a conservative, red state Republican. I hail from the industrial Midwest; Schumer's a northeastern guy. Lots of differences, but we quickly found that we shared a concern that America is falling behind in a technological race with the People's Republic of China. It didn't escape us that the CCP had invested $14 trillion in the frontier technologies that will shape our modern economy and decide the winners of future wars, technologies like quantum computing and synthetic biology.

And crucially, while China accounted for less than 4% of microchip production just a half decade before our conversation, it was on pace to control roughly 20% of the market. So that conversation in the Senate gym ultimately resulted in what we now call the CHIPS and Science Act, now the law of the land.

A generational investment in not just national security and economic development, but really an investment in people. It's an investment in the ingenuity of the American people, one that was, in many ways, a collaboration among the primary elements of our unique innovation ecosystem in this country: a collaboration between government, private industry, and academia. Much of the law's funding goes towards upskilling rank-and-file Americans to create the technologies of the future.

Technologies that will tilt the balance of our competition with China in our favor. This, of course, means reviving domestic semiconductor production to prevent future supply disruptions. The happy coincidence of establishing these fabs stateside, however, is the spread of employment opportunities to every part of the country. Many of these areas have, as you know, been neglected when it comes to tech sector investment over the years.

In fact, construction of a corridor of the semiconductor industry is already happening throughout the American heartland. And the regional tech hubs funded by the law will launch innovative companies. They'll help revive American manufacturing and lay the foundation for new jobs that will lift our communities. Since the law's enactment, not all that long ago, private industry has already begun investing heavily in domestic semiconductor foundries, and universities are establishing new laboratories, new programs, and degree programs that will prepare citizens to work in them.

Beyond this progress, and it's been significant, there's more to come. The CHIPS and Science Act also catalyzed conversations, important conversations, about the role that policymakers can play in advancing technologies of the future, and how government can further encourage innovation. Stanford's emerging tech review is one example. It will be indispensable, I believe, in these dialogues.

Now, in addition to supporting a domestic semiconductor revival, the CHIPS and Science Act also authorized $170 billion in federal spending for R&D over the next five years in technologies, again, like quantum computing, robotics, and artificial intelligence. Many of these key areas are the very focus of the Emerging Technology Review report.

The rapid maturation of these fields is evidence that today, in the early decades of this 21st century, the pace of innovation has quickened, and its possibilities are truly vast. These technologies have potentially society-changing implications for healthcare, education, agriculture, national security, and other areas of our lives. It's crucial that policymakers understand these breakthroughs and stay informed as they progress.

That's how we'll make informed decisions during uncertain times and develop a framework in which our citizens can seize the opportunities that these technologies present, and we as a nation can navigate the potential risks that they also bring. The comprehensive research and guidance provided by this report, and the clear, accessible way in which it's been presented, will be an incredible tool for policymakers in the coming years as we establish American leadership across these fields.

Now, our ability to find the right legislative touch and to foster additional collaborations across industry, government, and higher ed will determine whether these technologies further the values of the Declaration of Independence, the universal Enlightenment values on which this nation was founded, or whether instead they become weapons in the hands of nations who disregard these values, instruments of those who subscribe to the law of the jungle.

 

>> Senator Todd Young: But America, I would argue, has a distinct advantage when it comes to what Lincoln called discoveries and inventions. No nation, no nation governed by a small cadre of individuals, can match the imagination, creativity, and unique perspectives of millions of free people. It just isn't going to happen.

To think otherwise is the fatal conceit.

>> Senator Todd Young: Speaking of that point, Abraham Lincoln's patent I mentioned earlier, it was for a device that lifted stranded boats over sandbars and out of shallow water. Lincoln was convinced he would go down in history for revolutionizing river navigation. Well, he was disappointed.

Years later, his law partner, William Herndon, reflected that Lincoln's threatened revolution in steamboat architecture and navigation never came to pass. But doesn't the very ambition of this creation, of this design, of this conception, and of this patent still speak to us in a way? Human ingenuity overcoming seemingly intractable obstacles, moving us forward out of today's perilous moments.

Well, our friends at Stanford recognize that this moment calls for similarly spirited creations and another golden age of American innovation. And with their contribution, they've helped ensure that this revolution will, in fact, unlike Lincoln's, come to pass. So thank you for inviting me today. Congratulations again to all of you who've played a part in bringing us to this day.

I look forward to doing some good together and working with the distinguished contributors to this report. God bless.

>> Amy Zegart: Thank you so much.

>> Senator Todd Young: Thank you.

>> Amy Zegart: Well, thank you, Senator, for those remarks. Very optimistic to say that we can do better than Lincoln. That's quite a bar you've set for us.

Next, I want to welcome, and it's a pleasure and an honor to welcome, Senator Mark Warner, who is, as you know, serving his third term in the Senate, where he's the chairman of the Senate Select Committee on Intelligence and a member of the Senate Finance, Banking, Budget, and Rules Committees. He is, as you know, a successful technology and business leader, having co-founded a company that became Nextel in his previous life.

Most importantly, what you may not know is that this entire endeavor started with his visit to Stanford two years ago when he asked us a simple and powerful question, how is Stanford thinking about emerging technology? Senator Warner, you gave us some homework, and we're here to present it to you.

So please join me up on the stage. Please join me in welcoming Senator Warner.

>> Senator Mark Warner: Thank you so much.

>> Amy Zegart: Thank you so much. Well, thank you so much for joining us. It's a pleasure to have you here with us. And I want to start, if we can, by talking about the lay of the technology landscape from your perspective.

Give us your sense of what technological areas are top of your mind. We covered ten in this report. What's top of your mind?

>> Senator Mark Warner: First of all, let me give thanks to the Hoover Institution for having this, and thanks for inviting me, and to my good friend Todd Young.

He did the science piece; I did the chips piece with John Cornyn, and together we made legislative magic, or at least sausage-making, in terms of that bill, which is now law. I'm grateful for the list that you guys came up with. It's still a quandary. I think back to when I first became vice chair of the committee, six or seven years ago, and thought, should we come up with a list?

China clearly had a list. And I went around and I said to CIA, what's our list? I said to ODNI, what's our list? I said to OSTP, what's our list? I said to Commerce, what's our list? Every one of them had a different list. And six or seven years later, I'm still unsure whether we should have this priority list or not.

I'm grateful for what you guys have created, but I'm still wrestling with whether the list is focusing or limiting, and I'd love people's help and suggestions on that. What's top of mind for me? My background was in wireless. Huawei and 5G were a huge wake-up call for me, a holy crap kind of moment of, my gosh, not only are we behind, but we don't even have a meaningful player in the game.

I've been enormously focused on overhead and space, and again, competition with China. Obviously, we all have AI on our list, although I cannot think of a subject that, at least for me, is less linear: the more time I spend on AI doesn't mean I'm getting any smarter in terms of trying to fully understand it.

And I saw Drew out in the hall, saw some of the folks from the Biotech Commission. I think synthetic biology, biotech, biomanufacturing, are they one or are they three? But I think that is an enormous, enormous area of focus for me, where, again, I'm not sure even where we place this in the government.

And I'm very grateful for the Biotech Commission the president set up. I met with them today; I think that's really important. Todd actually is serving on that commission. I'm big on energy; I'm really long on small modular nukes. And I am fearful that in our competition, particularly with China and Russia, if we can't get some of this into actual installation as opposed to conversation, it will be a huge loss for us.

I'm still in the wireless world; I think we need to move to O-RAN networks, open radio access networks, so we can move away from the stack. Rafi's over here, who works with me; he's a lot of my brains on a lot of this stuff. I may have forgotten a couple of things on my list, but those are some of the ones that are top of mind.

And this again goes to the list or no list. If we have the list, yet we have some dramatic breakthrough tomorrow, and with how slow government moves, does that lock us into a list that may very quickly become outdated? And that's, I think, still an ongoing conversation.

>> Amy Zegart: You've raised this profound question of the list or no list.

Is it limiting or is it helpful? And that gets us thinking vertically, this, this, this, this. But we also know, and we talk about it in the report, synergies between technologies, that this is a moment where there are many hidden synergies, some we know, some we don't, and they go in multiple directions.

Can you talk about how you're thinking about-

>> Senator Mark Warner: Yeah.

>> Amy Zegart: The synergies?

>> Senator Mark Warner: Like today with the Biotech Commission, I mean, and again, Jason Endy, one of your great professors at Stanford, not Jason but Drew, who I was just visiting with. And I had no background in biotech, but I was so enthralled with the ideas around biomanufacturing, the whole notion of synthetic biology versus biotech, in a field, classic biotech, that I would argue has overpromised and under-delivered for many years.

But suddenly, you take all of this bioactivity and marry it with the rapid advances that should come from AI, whether it's gene sequencing or what Jason Kelly from Ginkgo Bioworks talks about, creating a large language model that's totally based on bio language rather than human language.

It's like, my God, that makes so much sense. Yet then I see the markets haven't picked up on it at all. So that's an area where I think there should be synergy, but I'm not sure it's been fully created. I think when we think about the alternative energy space, there should be more synergy, when in fact it seems to be a wind vertical versus a solar vertical versus a fission vertical versus a fusion vertical, whatever the generation source.

There should be some level of synergy with the distribution network, and that brings in a lot of networking. And, like with John Backus, we've done deals together before in communications technology; there should be more there. I think as well the obvious synergy between AI and networking is something where, no matter how much you scrape, you've still gotta have a networking component below that.

So these are some of the things I'm thinking about. But it's one of the challenges, and why I'm so happy that I'm chairman of the Intelligence Committee rather than another committee: so much of the government thinks about these things in silos. And if I had to name my biggest challenge, or opportunity, as chairman of the Intelligence Committee, it is to redefine what's national security, that national security is no longer who has the most tanks, ships, and guns.

But it really is who's gonna win in all these technology domains. And since the IC touches everything, I feel like I've got the right then to touch everything and try to see if we can mush some of this stuff a little better together.

>> Amy Zegart: So let me drill down on that for a minute.

So we know that the IC is focused on collecting abroad, understanding what red is doing. But now, as you say, intelligence requires understanding what we're doing. Where is the United States ahead? Where are we behind? Where does it matter? How do we fill that gap?

>> Senator Mark Warner: Amen. Maybe from folks like you, because I remember,

>> Senator Mark Warner: Early on with the ODNI, saying, let's talk about China.

And we have lots of good analysts, and they had them by sector, but because of the restrictions, they could only look outward. These very smart analysts didn't have a first-year Goldman Sachs associate's knowledge of the domestic market because they were constrained from touching it. And in many ways, as I look at what China's doing and how we were shaping up on a competitive basis, I've used the folks at In-Q-Tel.

The CIA's venture capital fund is a place that can both look outward and inward. And I do think there's this recognition. One of the things I did early on was bring the ODNI, Bill Burns at CIA, Paul Nakasone, and Gina Raimondo at Commerce together: we've gotta marry these capabilities. And it's a great, great loss that General Nakasone is retiring, but I think these leaders had both the confidence and the technology understanding that you have to marry outward-looking and inward-looking.

And so, well, at NSA, I think Nakasone's last budget had about 35 people seconded over from Commerce. There were ongoing collaborations between CIA and Commerce. We need to sort that through. We need to figure it out in a way that doesn't freak people out: my gosh, are we looking in and out?

We also don't do all that good a job of collection, even on our allies, in the technology domain. And frankly, even within technology per se, what we're used to looking at is tanks and guns and ships and planes, and not, as Drew and I were talking about, the kinds of areas where China is making massive investments, in synthetic biology and biomanufacturing.

We just don't have the expertise inside to do it. And it's why we have to find, I think, a much more collaborative relationship with academia and where the experts are.

>> Amy Zegart: I'm glad you say that. We hope that one of the things this emerging technology review initiative will provide is earlier situational awareness of what's happening in technology. As we've been saying around town, if you have to read about it in a paper or go to a conference, you're too late.

>> Senator Mark Warner: Yeah.

>> Amy Zegart: So let me ask you a little bit about AI. We hear a lot about AI.

AI is all over the news these days. And you hear, on the one hand, AI doom: existential risk, the world is going to end. On the other hand, you hear AI boom: everything is great, the promise is endless. Help us understand what's hype in AI and what's hidden. How do you think about the balance between those two narratives?

 

>> Senator Mark Warner: I'd love to, but that's classified.

>> Senator Mark Warner: No, this goes to kind of the heart of your question. How much of AI is simply what was big data, predictive data analysis, now called AI as a marketing tool? And if we go way, way back in time, say, 13 months ago, I think the premise was that the AI winners would only be, who's got the most data, who's got the most compute, who's got the ability to test that data on an ongoing basis.

And that's why we had to be concerned about China as a nation-state. That's why the startups, a la OpenAI and Sam's operation, or Anthropic, couldn't do it alone and had to partner with Microsoft or Google. And that premise operated for most of 2023. Then, I would say three or four months in, Meta launched Llama, and suddenly there was the question of whether a large language model would even be necessary.

And could you then bolt on AI products with or without an LLM connected to it? I think the debate is still out there. I see huge opportunity. I still do see some downside, and I am concerned, and this will be a bit contradictory. I hope we get to your question six about technology going forward, because I really want to push back on that premise.

But I am more than a little bit humbled by our ability as Congress to put some guardrails on this. The fact that, when Schumer had his kind of who's who of all the AI guys in the room, they all raised their hands when asked, do you think we need regulation on AI?

That doesn't give me a lot of solace that we're gonna get there, having seen what came in the aftermath of social media, in the situation post-2016 with the Russian intervention in our elections, when things that I would have thought would have been total no-brainers, like data portability and interoperability, or that if you buy political ads in rubles on Facebook, you ought to have the same disclosure requirements as if you buy them on television. We've done nothing, let alone taken on the holy of holies, Section 230. So the idea that we're going to come in with some comprehensive approach on AI, I think, is a bridge way too far.

So I'm looking at four areas. Number one, the national security concerns, kind of more on the technical piece. Number two, do we try to fill in the gaps on the EO, which I think the Biden administration did a good job on, pretty comprehensive, but there are a lot of gaps there?

And do you try to do that with a big bill or just a technical-fix bill? But the two areas where I'm trying to make kind of more of a political, but also policy, statement: I've asked, what are the two areas of AI tools right now that could have a hugely negative effect without a next-generation iteration?

And of those two areas, one is talked about a lot; the other is not talked about as much. One is elections, and we've already seen tools around deepfakes. And again, we've had election manipulation for a long, long time, and we've had social media intervention, disinformation taken somewhat to scale with Russia in 2016.

But what's different now with AI is you can do this at a scale and speed that is totally unprecedented. So can you get your arms around something around elections this year, not just America-wide but worldwide, since more than half the world, I think 4 billion people, are voting this year?

And so I'm going to Munich in a few weeks, and I'm going to try to put forward a broad-based effort, and I don't wanna give away the story today, with a number of nation-states, a bipartisan group coming out of Congress, and some of the big platforms, that says, let's at least have some voluntary guardrails around disinformation and misinformation in elections.

And one area where I do think we've got kind of a cool idea: you know, judging what would be taken down or not taken down gets into the free speech debate. But if you have some of these rules on a voluntary basis, and then you have tools that come in and disable the watermarking or some of the other protections, that actually doesn't get to the First Amendment.

Disabling of voluntary tools might be a place where you could find some consensus. I think we gotta get a marker down. The other area that's not gotten nearly as much attention, and I'm not even sure we would be 100% certain it's not happening already, is AI manipulation of our open and public markets.

There are a number of AI tools, again, way beyond a deepfake of a CEO saying something he or she didn't say, in terms of filing false product reviews or... I've got a whole list. And if I were an AI toolmaker, an AI criminal using these tools, I wouldn't go after the Fortune 50.

I'd go after Fortune 200 or 400 companies and nick away in a way that... And there I've got Senator John Kennedy as my co-sponsor, and we are kind of looking at not adding a whole lot of new laws, but saying, if you use AI tools for things that are already illegal, you might face treble damages.

Again, a concept that's already been used at the SEC. We also think that AI, in terms of the markets, ought to be taken on by, gosh, I'm having a brain-melt moment, the association of all the financial regulatory agencies. Yes, FINRA, yeah.

>> Speaker 5: FINRA.

>> Senator Mark Warner: FINRA? Not FINRA, no, no.

Speak up louder, Rafi.

>> Speaker 6: FSOC.

>> Senator Mark Warner: FSOC, which was a great idea and has had a pretty crummy record since Dodd-Frank came around. And, again, that might go to a different liability standard if these AI tools are used for market manipulation. So, you know, I guess I didn't answer: is it going to save the world or destroy the world?

I think the proverbial jury is out, but I think we would be crazy if we default, and this, for everybody who has been nodding with me up to this sentence, is where you'll shake your head differently, if we default to the traditional approach, which is, well, gosh, if you put any guardrails in place, you're gonna slow down innovation and China's gonna beat us. I think there are guardrails that don't necessarily slow innovation. And the notion, and this will come to your technology irreversibility argument, that we'll fix it five years from now: at least so far, using social media as an example, we've been dreadful at that.

 

>> Amy Zegart: Yeah, so the irreversibility problem, let's talk just a little bit about that. One of the big differences between the sources of national power today and the sources of national power of yesteryear: think about old sources of national power, which are still important, tangible things you can hold and touch, oil, steel, land, populations that can be controlled. Today, with the sources of national power being data, technology, know-how, once it's in the wild, you can't bring it back. So you talked a little bit about how to think about that in AI, but that affects lots of levers of power that we're used to.

 

>> Senator Mark Warner: And I'm not sure it is. I'm not sure I agree with the premise, because I would think, and again, maybe I'm looking back at too many 20th-century models, but people devised cars. It's like, my God, we got cars on the road. That didn't mean that at some point, and it took us a while, maybe even to the 1960s, we wouldn't get some level of safety regulations.

RF frequency was starting to be put to use, and it wasn't like, my God, RF's out there forever, we're never gonna put any constraints on it. And I actually think we've done a fairly good job of trying to gain the benefit of technological advances with some reasonable set of guardrails.

Now, I would argue in many ways that we got so enamored with technology, particularly social media, kind of in the Obama era, where we had this strange convergence of the traditional Republican view of we don't wanna put regulations in place with Democrats being infatuated with technology. And the beneficiary of this was the social media companies, in a way where we went so far beyond not putting guardrails in place that we gave them a get-out-of-jail-free card in Section 230 that says, as a matter of fact, you have no responsibility at all.

And I do feel that has led to a position where there is still probably 80% agreement amongst members of Congress that we ought to do something about at least child safety online, but we still haven't been able to act. And it does concern me. We have to race with China, which we do, and the rest of the world on AI, but the idea that, don't worry, we'll fix it later, is concerning.

But I'm not at the point that it's out and it's irreversible. It is a little harder when we think about everybody's personal DNA, Drew, out there being scooped up on a regular basis. But I don't think we should get into the defeatist mode of, my God, once it's out, we can never put guardrails around it.

 

>> Amy Zegart: It's nice to start with an optimistic note. So let me ask you, I know you have another commitment. I wanna-

>> Senator Mark Warner: No, I'd be happy to take some questions.

>> Amy Zegart: But I wanna end with asking you to give us more homework. Where do you think Stanford and this initiative can add value and help you, the SSCI, and other parts of government most?

>> Senator Mark Warner: I am incredibly anxious to learn more about the whole bio space, not just to contribute here, not just with the commission, but I just think about the possibilities around biomanufacturing, the possibilities around the notion we were talking about today. Fifty years ago, the idea of collecting imagery was not a domain, and we created the NRO and NGA to collect all that imagery.

The notion of how we collect bio-information, not just around disease, has to be thought about on a national security basis. I'd love help there. I'd love help on how not just the Stanfords of the world, but an amalgamation of a number of universities, can help policymakers sort this out on an ongoing basis, and on how we think beyond our silos.

I mean, again, I go back to, I offend a lot of people on the Hill because I don't think about my jurisdiction as chair of the intel committee as just being intel per se, cuz I think national security is my jurisdiction, and that touches all of these domains. And how we think cross-jurisdictionally.

As a matter of fact, if we have some ability to collect biomarker and bio-information, maybe it ought not to be at the IC, and probably not even at any of the traditional disease control agencies or BARDA. Maybe it ought to be at a national lab or an entity like that.

I also think we still need, and this is one of the things I think Todd did a good job on, but we still have not proved, we've not proved the thesis that not all the innovation has to be at Stanford or in the Bay Area or in Boston; for that matter, we've got a pretty robust community in this area around DC.

The ability that we've promised for years of, you can build it anywhere. One of the things that will come out of the infrastructure bill is, if we don't get to 98% high-speed broadband and affordable connectivity over the next two to three years in this country, it will be a failure of execution, not a failure of money; we put out more than enough money.

So continuing to prove out that premise, that you can build innovation not just in tech centers, I think would be so, so important, because it goes to this whole question of who feels they're included and who feels they're left behind. So I know that's a little outside the straight technology piece, but Stanford's a pretty smart place, so you might be able to help us figure that out.

 

>> Amy Zegart: Health of the innovation ecosystem across the country, that's a much more articulate way to say it. Thank you for spending time with us. Please join me in thanking Senator Warner for coming tonight.

>> Amy Zegart: So I want to welcome next the dean of engineering at Stanford, Jennifer Widom.

>> Jennifer Widom: Thanks very much, Amy.

As Amy said, I'm the dean of the engineering school. I'm also a computer scientist, and I've been at Stanford for 30 years. And this project has been great. Co-chairing the project with Amy and Condi and John Taylor has really been a pleasure. My role here is to spend just a few minutes telling you about the project itself.

And so I've been an engineer and also an educator for 30 years. And early on, I developed a framework that I've used and taught throughout my career whenever I'm trying to communicate about a project. It's based on five questions, and I feel pretty confident many of you can use these five questions for something you're working on or want to explain.

So the five questions are, what is the problem that we're trying to solve? We engineers solve problems. Why is it important? Number three, why is it hard? Number four, why has it not been solved before? And number five, what is our contribution? So I'm gonna spend just a few minutes answering those five questions with regard to the Stanford emerging technology review.

So what is the problem we're solving with the review? First and foremost, as has already been mentioned a few times, policymakers and other government officials really need an accessible way to understand emerging technologies. We know they're complex. We know they're rapidly changing. How are these folks going to learn about them at the level they're able to understand and in the time they have to devote to learning them?

Secondarily, people in industry who develop and market the technology will also benefit quite a bit from a better understanding of the foundations and the background of the technologies and even where they might be going in the future. And then there's the general public, and I think they need to understand these technologies to some level as well.

I feel like these days, there's a veil of mystery and even fear around some of the technologies that are emerging. So overall, there's a wide swath of people who would benefit and appreciate having a better grip on the technology and the opportunities of the technology and the downsides as well.

But to be clear, we're here in Washington; our primary target audience is government and policymakers. But we do expect the review to have broader appeal, appreciation, and impact. And you all have a copy, and I expect you to go home and read it tonight. So that's the problem that we're solving.

Now, why is this problem important? Well, policymakers and other government officials are making very critical decisions based on and about these technologies. I would say that the technologies of today are having an unprecedented impact on society. We all think about AI and the impact that AI is having or will have on society.

But as you look through your booklets, you'll see that there are nine other technologies that we cover in the review, and I would argue that those nine technologies are also having a significant impact on society. And the gap between what's happening in the technology and the government's understanding of these technologies, that gap can have a bearing on our country's leadership and on national security.

So that's why the problem is important. Number three, why is this problem hard? Well, the tech itself is very complex. Not very many people understand it deeply. How many of you fully understand what a large language model is based on?

Nobody? One person. Fei-Fei? Actually, I kind of understand it too, but okay. So these are complicated technologies, and they're also evolving very quickly, I would argue more quickly than they've evolved in the past. And those who do understand the technology, well, there's one person here, but you know what I'm saying.

I mean, those who do understand it and can keep pace with the changes aren't always great at communicating about it to a non-technical audience. Sometimes they're just not interested in putting that effort forward, sometimes they don't have the patience, and sometimes they actually just don't have the skill to explain those technologies to a more lay audience.

And sort of on the flip side, the government and policy audience doesn't always have the patience to learn something that's so complex, so fast moving, maybe not explained to them at the right level. So making this education work from the experts to the policymakers, government, lay audience, requires expertise.

It requires time, patience, and collaboration. So those were the first three questions. Number four, why has this problem not been solved before? It takes a unique ecosystem to solve the problem, and I would argue that ecosystem is found in not many places, and Stanford is one of those places.

And I'm going to start by talking about Stanford Engineering. I would say that our engineering school is uniquely positioned to help with this problem. Unlike our peer engineering schools, we are embedded in a world-class liberal arts university, great humanities, great social sciences. We're also co-located with all of the professional schools.

Really, nowhere else is comparable. In the engineering school, and at Stanford in general, we've always thrived on collaboration. That's sort of in our DNA. But I would say in the engineering school, if you look historically, those collaborations have been largely with the medical school and with the sciences. We've made great advances with those collaborations.

But when I became dean, one of my primary goals was to increase the scope of collaboration. I felt like the engineers could have further impact by collaborating more with, say, the humanities, social sciences, law, and business. And this technology review presented a really fantastic new opportunity for collaboration, maybe even beyond what I was imagining, between the engineering school and the Hoover Institution, with very far-reaching impact.

So, last question, what is our contribution? Our contribution is bringing together our technology expertise at Stanford, together with Hoover's communication abilities, Hoover's strong connections with policymakers, with government, and launching the review. And one thing I want to say is that it was not difficult to find a lead faculty member in each of these ten areas.

And that was key. And I thought it might be hard, but people were excited. And I just want to say, faculty are busy and they choose what they want to do. And so the fact that they were interested in this project, I think speaks to the importance of the project itself.

And it wasn't just the faculty. Their postdocs got involved. Their graduate students were eager to contribute. We had about 100 faculty, postdocs, and students contributing to this project. Generally, Stanford faculty and students want to better understand the impact of their work on society. And this is a fairly recent phenomenon, but it's a clear one.

And so I've seen quite a bit of increase in this interest, and this provided a vehicle for the faculty and students to work with folks from Hoover, who really understand policy much better than, I would say, we do in the engineering school, but we are learning. This is also, I would say, an opportunity for the experts to learn how to present their work in an accessible fashion.

And that was welcomed by the faculty as well. They don't often make that effort to explain their work to the lay audience, but it's beneficial for them and, of course, beneficial for the audience. So let me just conclude by saying we hope this project opens up a strong line of communication between those who are at the cutting edge of technology and those who are creating the policies around it.

And this dialog, this understanding, will help secure our country's leadership in innovation so critical to our economy and to our national security. So thank you all for coming today. And now I'm going to turn it over to four of my colleagues who are going to come up for a panel discussion.

So first, I think we have Condoleezza Rice in the wings, former US Secretary of State and director of the Hoover Institution, and it's been a real pleasure to work with Condi over the last year. So welcome, Condi.

>> Jennifer Widom: We have Professor Fei-Fei Li. Fei-Fei is the Sequoia Capital Professor in Computer Science at Stanford, my colleague, and she's the co-director of Stanford's Institute for Human-Centered Artificial Intelligence.

Welcome, Fei Fei.

>> Jennifer Widom: We have Professor Drew Endy. Drew is a Stanford Associate Professor of Bioengineering and a Senior Fellow at the Hoover Institution.

>> Jennifer Widom: And finally, we have Herb Lin. Herb is the Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution. He's a senior research scholar for cyber policy and security at the Center for International Security and Cooperation at Stanford and has been the director of this project.

So welcome.

>> Condoleezza Rice: Well, thank you very much, Jennifer. I just want to say that really, the dean has been a fantastic partner for us in this. She's been willing to put up with political scientists and social scientists asking all kinds of questions about engineering that she probably thought she stopped answering when she stopped teaching first year undergraduates.

But it's been a wonderful partnership, and thank you very much for that. And so I am joined on the stage by people with degrees in science and engineering. And that is the reason for this project: so that people like me, political scientists who understand the institutions, and policymakers who have to make big decisions about where we are, can draw on folks like this.

And so I'm going to start with just a kind of very basic question for Drew and for Fei Fei, what's exciting in your field, where do you see your field going? And perhaps what should the policy community, the Washington community, know about your field? I'm going to start with you, Fei Fei.

 

>> Fei-Fei Li: Well, thank you, Condi. Thank you, Jennifer, for this incredible opportunity to work together. Well, my field is AI. So what's exciting in AI? Well, we haven't slept for about 14 months; that's how I feel. Truly, I loved AI during the middle of the AI winter. I literally entered the field about 24 years ago, at the turn of the century, when nobody even talked about AI.

So what has really been truly exciting is I feel we have hit a public awakening inflection moment in the past year or so, mostly thanks to the large language model progress. And a lot of people ask, what is AI really? The simplest way to put it now is AI is a new way of computing.

So if you think about anything in the world that has a chip in it, from computers, cars, and servers all the way to light bulbs, fridges, and washing machines, anything that has a chip in it does have, or will have, AI running on that chip. And that's what's exciting to me as an AI scientist: AI is not only an advanced technology, but it's also having a profound impact.

And it will have an even more profound impact on jobs, everyday lives, people's work and living, and every single vertical industry that humanity touches.

>> Condoleezza Rice: Great, and how about you, Drew? What excites you? And what should we know?

>> Drew Endy: When you look about the room, you can see the hardware, the computers, and the software that we learn about. And where's the wetware? I'm a bioengineer, and so I think of biology as a technology. And perhaps some or many of us might benefit from genetically engineered production of insulin for treating diabetes. And that's a biotechnology long established now, but it's invisible to us. What's exciting to me is, when I moved to Stanford, the reason I moved was to be part of something bigger than that in bioengineering.

Stanford recently, well, 20 years ago, but that's recent for a university like ours, started a new bioengineering department. And so what do we see when we survey our faculty colleagues in the emerging tech review? What are they doing? They're bioengineering microbes on the skin to train the immune system to attack melanoma.

That's Michael Fischbach's work. Could you have a living skin cream that prevents skin cancer? Or Mark Skylar-Scott's work in pediatric cardiology: could you 3D print a heart, and how would you make that practical at scale? So what's exciting, Condi, is this emergence of bioengineering, or wetware, as a new form of engineering contributing to a flourishing society.

And I want to echo, Fei-Fei, your opening thanks, because it's really been extraordinary not only to see engineering and Hoover come together, but, through the emerging tech review, to have a chance across disciplines within engineering to find those additional opportunities.

>> Condoleezza Rice: That's great. Now, before I turn to Herb to ask a little bit about some cross-cutting themes, I want to put on my other hat.

Amy Zegart, who is also one of the founders of this entire project, often says that she studies intelligence, and so she's always looking for what's wrong here. So I'm going to ask you not what you're excited about, but what's concerning to you. I could say scary, but I won't; I'll say, what's concerning to you, and how should policymakers think about the implications of the technologies that you explore?

So we'll start with you again, Fei Fei.

>> Fei-Fei Li: How many hours do we have? Well, humanity's entire civilization is a history of making tools. Tools are a double-edged sword, right? So every time we make a tool to advance our lives and productivity, we make it available to harm ourselves and to harm each other.

I guess since Condi probably just gave me one or two minutes, I'll just say two things that concern me. One is the tool itself: AI is very powerful, and I do not talk too much about the so-called existential crisis. I think that's a respectable academic discussion topic.

I think there are far more tangible, urgent social risks that AI brings us, whether it's its impact on democracy itself in terms of mis- and disinformation, or its impact on the labor market as it shifts the way we do tasks and how it impacts jobs, or the related issues from privacy to fairness to bias.

These are all things that this tool, this powerful tool, will bring from the adversarial impact point of view. That's one thing that does concern me, and I could go on for hours. The second thing to highlight that concerns me is something that, earlier, I don't know, Jennifer or Amy talked about: how healthy is our nation's ecosystem in AI?

Because, after all, AI is a very, very important technology to lead not only this country but humanity into the 21st century. It will help cure rare diseases, map out our biodiversity, and advance our healthcare system. With all these opportunities, we need a healthy ecosystem of talent, of the technology, and of public sector AI participation.

Because everything you hear about in large language models, whether it's the neural network algorithms, the transformer models, the big data, all of this started in academia, mostly America's academia. And if we don't have that healthy public sector to continue making public goods, to continue to explore and make discoveries, to continue to help benchmark, evaluate, and create trustworthy technology, and to continue to train our talent, we're gonna disrupt this healthy ecosystem, and it will hurt our future.

And today, not a single American university can train a ChatGPT model. And we're losing talent to other countries because of visa issues. We're not capable of using enough AI to do this kind of cross-disciplinary research.

So I think this is the second thing that concerns me, is we need to invest in America's public sector AI.

>> Condoleezza Rice: Drew?

>> Drew Endy: The headline for me, what concerns me most, is we don't think of biology as a strategic domain, a domain of power. We think of biology as something that happens to us.

And that bothers me more than a little bit for two reasons.

>> Drew Endy: If we go back in time to the 1920s, we can look at France as a case study. They'd come out of the Great War and the 1918 flu pandemic, and they faced important decisions 100 years ago about how to spend the limited public treasure to secure the future of their nation.

And the debates went back and forth, and they were confronting emerging technologies of the time, things like the internal combustion engine, things like radio. Some could see that that might recombine in interesting ways, allow soldiers in the infantry to move around quickly and be coordinated, but not everybody could see that.

And so the public treasure, in part, goes into concrete, and the Maginot Line gets built, and it's insufficient. And so if we fail to perceive biology as a strategic domain, if we fail to see what's structurally new in these emerging technologies: we have the Internet; what about a Bionet, where you're moving bio-information around and downloading and growing biotechnologies, the opposite of centralized industrialization?

How are we gonna build and secure that? Now, I don't know if I'm right or wrong; that's the game of emerging tech. We're figuring it out as we go. But I'm very concerned that we're in the forget phase of a pandemic, right? And that's not gonna get us to a secure future in emerging biotech.

So that's part A. The next thing that concerns me is, even if we get the strategy wrong, we know how to pump the brakes, like, stop that. But we're less good at, and I'm worried we've almost completely forgotten, how to be strategic around push and go. I think about what Bob Kahn and colleagues did with the ARPANET, running the table around packet-switched networking in the 1970s and '80s, to all of our great benefit.

I'm worried those muscles have atrophied. So, for example, it took us about four administrations, but in September of '22, President Biden signed an executive order on biotechnology and biomanufacturing innovation. Fabulous: for the first time, we can come to this town and have air cover to talk about biotechnology being important.

It's all in support of a safe and secure and sustainable American bio-economy. But here's a question. What makes the bio-economy or broader economy American? Is it that it's located in the 50 states, it's the zip code that makes the difference? Or is it about culture and values? And what's so exciting about the partnership between Hoover and engineering and Stanford more broadly, is Hoover's bringing important values to this enterprise, ideas, advancing freedom.

What does it mean to have a bio-economy in which we are citizens of it, as opposed to only consumers or subjects or objects being acted on? It's like, I'm concerned we're missing the strategy on mitigating the minuses because we're being incrementalist. And let's seize this opportunity of building the thing we really wish for.

And that's gonna require flexing those strategy muscles. But in a way, maybe we've forgotten a little.

>> Condoleezza Rice: So, Herb, one of the things that we wanted to do in bringing these ten technologies forward was to think about the interaction between them. Because we have specialists in AI, we have specialists in synthetic biology, we have colleagues who are specialists on space or materials science.

And one of the things we've learned is that we have been able to create some conversations across those emerging technologies. But we came to understand something you've called the cross-cutting themes. So could you talk a little bit about that?

>> Herbert Lin: Well, sure. Some of the issues that we identified, that my colleagues identified here in their fields, turn out to be common to many fields, across most or all of the technologies.

And let me go through a couple of them. First, one idea that's important is that, once upon a time, you could assume the US was completely dominant in every area of science and technology of any importance: best scientists, most funding, most productive scientific enterprise, whatever you want.

That day is no longer. Maybe that was some time ago, but it's just not true now. And yet there's a lot of American policy that seems to be premised on the idea that we're always the best in everything, and therefore anybody who wants to interact with us, any other nation, can only take from us.

They can't give us anything. And we just think that that's not right, that other nations also have talent and ideas that enrich the American scientific enterprise. And we understand that there are issues with immigration and so on, but US immigration policy seems to be specifically designed to keep out foreign talent, especially in S&T.

We graduate people with PhDs, and then we kick them out and they go to Canada.

>> Condoleezza Rice: It's a very nice country, Canada.

>> Herbert Lin: Right, yes.

>> Herbert Lin: Not a slam on Canada, but... So that's one aspect of it. The other aspect of it is that, as other countries become more advanced in science and technology and develop their own enterprises, by definition, our monopoly control sort of vanishes.

Technology is wonderful, isn't it, with decentralized access? And there are many more players in this, in many more nations. Technology is devolving down to the high school level, where, in biotechnology, what took a PhD to do five years ago, high school students can now do. And with that, there are more players involved, more motivations.

Private sector actors are getting more involved in this. This is a very complicated policy world compared to the bipolar world of the US versus the Soviet Union. So that's one part. We talk about multiple technologies, and there are indeed synergies among many technologies. AI is the technology of the moment, but you couldn't do modern AI without all those chips, specially designed chips, or rather, chips designed originally for another purpose that have been adapted for AI use.

That's really driving the future of AI. You don't get good chips if you don't have good semiconductors. You don't get good semiconductors if you don't have good materials science, and so on and so on. So all the technologies are interconnected. This also partly explains why the pace of change seems to be fast at times and slow at other times.

Fei-Fei talked about the AI winter 25 years ago. There was progress at that time, but not in the public eye. In the public eye, AI came out in the last year or two. Why is that? Because many things had to happen for those breakthroughs to occur now in such a visible way.

And I think the last thing is that every faculty member on our team expressed concerns about preserving and nurturing the ecosystem for innovation, right? And there are many aspects of it. It's not just the scientific breakthrough; that's necessary, but not sufficient, right? You have to achieve engineering feasibility. It has to be economically viable, it has to be socially acceptable.

Many things have to happen before a breakthrough is translated into something tangible and real. We find that no technology is just about technology when it comes to society, right? It's got economics in it, it's got law in it, it's got regulation in it, it's got ethics in it, it's got values in it.

And you can't just wish those away. The technologists can't just wish those away. They matter. And the last thing is, it's not just in AI where the university ecosystem matters, it matters everywhere. Universities don't have a vested interest in the outcome of research. That is, they conduct their research not guided by commercial interests.

It is good, we think it's a good idea, for the private sector to be investing money in AI and in synthetic biology and so on, in doing research. But their motivation for pursuing the research that they do is that they see a return in it. We're not saying stop that, but we need the ability to continue that kind of research in multiple dimensions, not just AI, across a wide number of fields.

To the extent that we have something to say about that, we're trying to advance that point of view here.

>> Condoleezza Rice: Well, thanks. I'm going to turn to the audience for a few questions in just a minute here. But I want to ask Drew and Fei Fei about policy.

We have collected here in Washington, DC, people who for one reason or another care about policy. They either study it, or they're on congressional staffs, or they try to make policy. And we have been talking about what these great technologies can do and how, in fact, they can help in a lot of policy areas that we're all concerned about as Americans, whether it's education or healthcare. We understand that.

But policy also will have an effect on, as you've been calling it, the American ecosystem. I've been concerned, for instance, that a lot of the regulation, particularly around AI, but technology more broadly, is actually coming out of Europe, not out of the United States. Europe, if you set aside the UK, which is an AI powerhouse, has not been at the leading edge, and yet it's become the place where most of the regulation is coming from.

I was recently in Paris and listened to a couple of EU Commissioners talk about making certain that if something could cause harm, it should be regulated, and you would have things that are harmful and things that are not. And I was talking to one of my VC friends about this, and he said, well, of course fire causes harm.

 

>> Fei-Fei Li: How about electricity?

>> Condoleezza Rice: So does electricity, yes. So how do we think about this: if you had a kind of magic wand for policy, what would you say to policymakers? What must we do? What must we do quickly? Because as Herb has said, others are moving on.

So, Drew, can you start there?

>> Drew Endy: Yes. Three points I want to try and get across. Where do we get leverage? When you're thinking about building a building, yeah, you need to get the first and second and third story and the roof up, right? But if your foundation's wrong, you're in trouble, right?

There's a lot of leverage down low. So from a policymaker perspective, where's their leverage potentially, in emerging biotech? One of the things the Constitution tells the Congress to do, if I remember correctly, is establish the weights and measures, and today that's become NIST, the National Institute of Standards and Technology, a little thing under the Department of Commerce.

Well, if we're talking about the bioeconomy, the transactions in the marketplace that allow for coordination of labor and trust around bio goods and services, do we expect to be able to get to that without having a formal foundation in the weights and measures of biotechnology? When Baltimore caught fire over a century ago and the trucks from Baltimore and from Washington and Philadelphia went to help, when they got there, you couldn't hook up the hoses and hydrants, and you got a conflagration.

And that was sufficiently motivating to regularize hose and hydrant hookups. We've got to run that playbook over again for coordination of labor in biotech. If I've got a bioengineered yeast that's going to make a medicine and I show that it works in my fermenter in California, I need to be able to translate that into Indiana without that becoming another research project and experiment.

So, more specifically, could I be less obtuse: I would love to see NIST get resourcing to create a biometrology laboratory to develop high-leverage technical standards and measurements, the weights and measures of the bioeconomy. Not only does it advantage us domestically, but now when we say we're going global with the bioeconomy and with biosecurity, what do we bring to our partners and allies?

As much as we talk about China, what's our Indonesia strategy? How do we want to have a partnership with Indonesia in 2030, 2040? They're sitting on the largest carbon reserve. They've got biodiversity out the wazoo. They've got massive supply chain puzzles. What are we bringing to the table?

Not just hard power, but soft power. So that's one. Number two, it's easy to get money if you're solving a problem that somebody has right now: cure the disease, save the climate, the apps layer. It's harder to get money for fundamental scientific research because it's a harder story, but we know historically it's important.

Caught in between is fundamental engineering research, because the engineers are supposed to solve the problems. Imagine if I were a computer scientist and the only public funding I could get for research projects with my graduate students was to make a mobile phone app to help a patient in a doctor's office.

That's mostly what it's like being a bioengineer, because I can get money from the NIH and the new ARPA agency to cure diseases today. And so where in our portfolio do we sustain fundamental investment in engineering research? The NSF would normally be the place, but if you look at the new directorate, it starts with the letter T, translation.

So I'm not against that, but it's little. We could get some extraordinary leverage if we supported the foundations of engineering. Well, maybe that's enough as an opener.

>> Condoleezza Rice: Great, we'll write that down. Yes, Fei-Fei?

>> Fei-Fei Li: So, I'm a computer scientist. One thing I really, truly loved in the past five years of my Stanford life is working as the co-director of the Stanford Human-Centered AI Institute, which Condi is a part of, on our advisory board, basically.

And one thing I've learned a lot about is the word policy, and you talked about Europe. Policy, I learned from you guys, is not just regulation. Policy is also about good ways of using resources to incentivize good work in society. So I want to say two things to answer Condi's question.

On the regulatory front, like I said earlier, AI is a technology that has certain boundaries where we might need to see guardrails, right: when it's applied to patients, when it's applied to financial activities, when it's put in cars, when it's used in our environmental policies. And there, I actually personally believe we have pragmatic frameworks in the vertical space, whether it's through the FDA or through the EPA or through the SEC.

There are areas where we can update and renew or refresh some of our regulatory details. That's the area I think is actually quite pragmatic and present in terms of possible regulatory updates. But I'm against sweeping calls to stop AI. I really want to ask those people who signed the pause-AI letter about seven months ago:

Did anyone pause? This is why I didn't sign the letter. So on the other side of the policy coin, which is incentivizing a good ecosystem, I want to double-click on public sector investment. Condi and I talk a lot about this at Stanford, and I was very much inspired by Condi when my colleague and I met with President Biden about half a year ago to talk about our nation's AI, and we used the term moonshot mentality.

It is such an important technology, and we all know that the industry is racing ahead. AI is important for productivity, it's important for national security. But we need to use AI to produce more public goods. For example, like Herb said, it's crosscutting. This is a technology that can really advance synthetic biology.

This is the technology that advanced fusion about a year ago, when we saw that breakthrough. It is a technology that can help us discover new materials. And I think for that kind of public goods, we're lacking the resources to produce them. The public sector also helps to create public trust.

Today we have so many large language models produced by different companies. Stanford HAI is the only academic institute that is doing publicly available benchmarking of these language models. And that's really important because the public deserves to know. And also we have an independent, objective source to grade the homework, right, using Amy's language.

We can't have the companies that created these technologies grade their own homework. This is the second reason for having public sector investment, and the third reason for having public sector investment is that the public sector needs to be in this race with the private sector, especially America's public sector.

It needs to be in this race to advance this technology. I remember the Human Genome Project was actually simultaneously a public sector and private sector effort. And right now we lack data, we lack compute, and we're lacking a healthy talent community. I was so heartened that the EO was signed, and now we have a NAIRR, National AI Research Resource, pilot program just released by NSF this week.

And I was told today, while I'm in DC, that as a researcher I could apply to NAIRR now. But we haven't finished the job we need to do. Right now, Senator Young is one of the co-sponsors of the Create AI Act bill that is in Congress and in the Senate.

We really need this bill to be passed, and we need more moonshot-mentality bills and efforts to rejuvenate America's public sector AI ecosystem.

>> Condoleezza Rice: Thank you. I'm going to go to the audience for a couple of questions. I do just want to say one thing, which is that you mentioned Senator Young, and of course, we had Senator Warner here.

We see this very much as something that we have to do across the aisle. This has got to be a bipartisan or nonpartisan effort because you just mentioned the EO. One of the problems in our great democracy is that when we switch over at the White House, there's a sudden desire to get rid of everything that was done by the last White House.

I know that because I was part of that once. And so we have to make sure that we can sustain this in the way that, for instance, we sustained the effort toward getting a man to the moon. So I've got two right here in a row. So yes.

First. Yes. And then right behind you. Yes. Right.

>> Speaker 12: I was wondering about the sort of moonshot mentality and, if you like, intergenerational investment. Because I know about nine months ago, when you were trying to fundraise for the NAIRR, you were talking about, can we get $1 billion? But pension money has over $50 trillion, and it still doesn't recognize, as far as I know from conversations at the UN, any of the SDGs as asset grade.

So is this the same problem? Are we ever going to get biotech, climate, all the really deep, huge data things, which probably need a ten-year horizon, connected, unless somehow you get to some of the pension money?

>> Drew Endy: Yeah, thank you so much for that question. I agree with where you're taking our conversation.

An example I could give, a positive example from the world of biotechnology, is a company called MycoWorks. MycoWorks was founded by a Stanford graduate who studied in the fine arts program, and it's a company that works with mushrooms, of all things, that eat wood. And it turns out you can make a lasagna tray of mushrooms, if you will.

It's like a sponge. And the top of the sponge becomes a mushroom skin that you can peel off and put through a tanning process and get to leather. Well, it turns out they just opened up their first factory, a 100,000-square-foot factory in Union County, South Carolina. And that cost $100 million.

Now, where did that money come from, given the cost of money today? So I went to track down the CEO, a fellow named Matt Scullin. And it turned out, although their R&D is still in California, he had moved their headquarters to Paris to be next to Hermes and Louis Vuitton and so on.

And they're situated next to the forests of South Carolina. The feedstock they get is sawdust from the lumber mills, and they get that as a waste stream. And because of that, it might be possible that you could float a bond at the state level that would be tax free.

And suddenly that becomes a financial offering that, to your point, might allow us to take the limited public treasure and smartly invest it in the public interest, but then get that amplification and scaling to unlock the sideline capital. And as important, if not more important, the decision making about whether or not to push the go-

 

>> Drew Endy: Or not. AI.

>> Condoleezza Rice: Yeah, yeah, somebody just, yes, we'll keep going, yes.

>> Drew Endy: The decision-making about whether or not to underwrite that particular project is done not by the government, not by the academics, but by the people who are expert in making those decisions. That's why I'm so excited about your comment. In this example, when I look at the competition, I see capital flowing into the bioeconomy at the scale of hundreds of billions of dollars in China.

I'm not expecting we're going to get that appropriation through the Congress anytime soon. So we've got to take your advice and figure out how to do this all-hands approach to the underwriting.

>> Condoleezza Rice: There's another project for SETR. We'll work with our friends in the business school and John Taylor, who's not with us, an economist who is one of our co-chairs.

And John keeps saying economics. Economics. So you just made his case? Yes.

>> Tim: Okay, thank you. Yes, thanks very much to the panel for an excellent discussion. Madam Secretary, my name's Tim Persons. I'm the chief AI officer for PwC. In my prior life, I was the chief scientist at GAO, and I wanted to hit upon something that the panel brought up, which is, I think, this dichotomy between academia and all the brainpower that we have,

which the senators mentioned earlier as well, and the public sector. And if I'm honest, when I would fly out of DC, it was like I would know how to speak Martian, but I'd get to the Bay Area and I had to speak Venusian, right? To use that analogy.

How do we bridge that gap and bring such incredible brainpower, like the Fei-Feis, like the Drews, and so on, Herb, everything that you're doing, into the policy space? Because it's so important at this time to build those bridges. So thank you.

>> Condoleezza Rice: Thank you.

>> Fei-Fei Li: I can comment on that.

First of all, Tim, hi. Good to see you again. First of all, it is pretty dire. It turns out less than 1% of US CS advanced degree holders go into government. Earlier, when I used the word public sector, I did include academia. But I know that you're talking about this chasm between academia and advanced research and government specifically.

Some of us actually just came from the White House. It was really interesting to see that someone at the White House was talking about creating a program of interns and fellows that invites technologists, or people with technology degrees, to go intern at the White House. And we at Stanford HAI immediately said, we just created this kind of program as well, using our funds to support CS and engineering students to go to DC, placing them in different agencies, staffer offices, or the White House to do that kind of internship.

So I know this is not enough, but really, I think we're seeing a budding sign of the recognition that we need to create more of what we would call bilingual talent in tech and policy. And as Amy always says, we talk so much about doom, but this is actually the one area where I'm seeing a glimmer of hope. I would just add, I have more students, it seems, right now who are political science and CS majors, and so they are doing the translation themselves.

 

>> Condoleezza Rice: So as we begin to grow that, I think it will be very important. Yes, in back, go ahead.

>> Speaker 14: I'm not an engineering guy or technical guy, although my twin brother went to Stanford, got an MBA, Leland Stanford Junior College, I always kid. The cost of creating a data set is so high that it really is an invitation to the big players only.

And the FTC, I guess, just went after what they consider to be the five big players to get information about their business processes. And the legal arena is completely unformed in terms of how we protect intellectual property. People are saying, we weren't paid; you used our data to create your data set and to train your computers, and they want compensation.

So we're in a capitalist society and big money means something, but intellectual property is supposed to mean something. Do you see any potential strangleholds on progress because of legal issues?

>> Fei-Fei Li: Yeah, so that's an excellent question. In fact, I think your question really has multiple dimensions. There is this whole IP issue: for example, the latest gen AI technology is really rubbing against the creator economy.

Small players, authors, artists, musicians have created content, and their content is being used in these ginormous Internet datasets by big tech. And we're actually seeing interesting lawsuits on the horizon that the courts will have to come down on: there is creators versus Midjourney, creators versus OpenAI.

I don't have the answer, I think, but the IP issue entangled with AI is a very interesting unfolding area that we should all pay attention to. Then embedded in your question is also this cost of creating data, but data is oil to AI. I agree with you, but I actually think, again, there's hope.

You mentioned that because it's so costly, it seems like only the big players, the big tech, can participate in this race. Yes, it's true that big players have the ability to aggregate data. Many of them actually hold data themselves. But as a researcher, my last sabbatical was at Google.

And around 2018 I was Google's chief scientist of AI at Google Cloud. And there was something so interesting: I was already interested in healthcare research, but I had to leave Google and come back to Stanford to be able to do my healthcare research, because the data lives at Stanford, not in big tech, especially for the kind of work I do, with sensors and ambient intelligence.

It's through the collaboration between doctors, clinicians, and scientists that we were able to use unique data. I'm sure Drew has unique data. I'm sure our materials scientists have unique data. So I think there is actually a lot of opportunity out there, in different areas of research, in different parts of our public sector system, that has precious unique data that we can unlock.

And it's not just in big tech. So data is a really interesting player.

>> Condoleezza Rice: Herb, did you wanna say something?

>> Herbert Lin: No.

>> Condoleezza Rice: Okay, yes. Let's see, right here on the corner, yes. Okay. Next time.

>> Dan: Thanks for the panel. My name is Dan Inbad. I'm a veteran fellow at the Hoover Institution, and I also work at Shield AI, doing AI pilots for defense.

A lot has been said this past week, I mean, with the emergence of dual-use and defense tech innovation in the Bay Area, and that kind of resurgence in recent months. This week we saw the GSB deny the establishment of a defense tech club.

And I just wanted to know your thoughts on where do we go from here and what are your thoughts on-

>> Condoleezza Rice: As a faculty member at the GSB, I think I'm gonna ask a question about that when I get back.

>> Fei-Fei Li: I'm sorry, I didn't even know, what happened?

 

>> Condoleezza Rice: I'll explain later.

>> Condoleezza Rice: But I think that everybody has to understand that the national security implications of these technologies are so much more far-reaching and fundamental than any technology that I've ever seen. And it's because this is the first time, really (cyber was a bit this way), where the domains are not owned by the government.

So when you think about nuclear, for instance, there were always these stories about how we were gonna have somebody making a nuclear bomb in their basement. No, they weren't. It's actually very hard to make a nuclear weapon, which is why there are so few of them. It is, on the other hand, imaginable what you could do with synthetic biology or what you can do with AI.

And we, of course, also have an adversary this time who does not share our values, who is a declared adversary. Just listen to Xi Jinping. And who has declared that they are going to marry their civilian and military sectors in technology. So when I hear anyone in the United States say, I don't know if I wanna work on the defense aspects of this, I wanna say to them, you do work on the defense aspects of this.

You just don't know it. And if you aren't interested in helping the United States with the defense aspects of this, I really hope you like living in a world in which Xi Jinping and Vladimir Putin dominate. And so I think we have to have a very straightforward conversation about this.

Not everybody has to work on defense technology, but everybody does have to understand who's in this world with you, and the implications of the work that you do. I have one more question right here on the right.

>> Emily Messner: Hi, I'm Emily Messner. I've been with the Hoover Institution here in DC for a long time, and I'm here with my friend and colleague, who is a technologist and also an MC with the band I manage, Gangsta Grass, and we bring people together.

So I'm curious about bringing together the public sector and the private sector. The moonshot mentality phrase is so great. So I'm curious to know, in an era where technological developments are coming from places we're not necessarily expecting, for example, private companies from across different countries launching things, even people, into space, and NASA developing a supersonic airliner for commercial travel:

How can those of us at Stanford and beyond help those two sides, with their different incentive structures, public good and thoughtful regulation versus profits, and maybe not as interested in regulation, or interested in self-regulation? How can we help them work together in a way that builds on the strengths of each of those sides, the public sector and the private sector?

Thank you.

>> Herbert Lin: So you're asking, how do you sustain it? First of all, I'm gonna comment on the moonshot part. There are many advantages to having a moonshot mentality. But the fundamental aspect of a moonshot is that it's over, right? You get to the moon and you stop.

That's not true anymore. How do you organize for an environment in which it's continuous competition, with no end in sight? Winning isn't winning anymore. You don't win by going to the moon and then stopping. I mean, there's lots of other stuff there, too. You have to reorient yourself to a world of continuous competition, of continuous challenges in the space.

That means you have to organize yourself. The government and the private sector and universities have to organize themselves in a way that's, for lack of a better term, sustainable. And many of the problems that we have now in the academic, sorry, in the research enterprise are because we haven't figured out a way, for example, to get continuous improvement in yeast production or chips or something like that, to fund the next generation of advances in technology.

So Moore's law, as you know, is the law that says that you get more and more value every year. That's ending now for a variety of reasons. The reason it was so powerful is that it's not a law of physics. What it is, is that with each next generation of information technology, there was commercial value to it; they were able to sell it and make money on it, and they could pour that money into doing the next generation.

We need models like that, and those are hard to come by. So-

>> Condoleezza Rice: Last comments, Fei-Fei and Drew.

>> Fei-Fei Li: Okay, I just wanna add, there's one thing we should recognize as an opportunity, 'cause you asked specifically about places like Stanford, and that is education. And I don't just mean classroom courses.

I mean creating a more innovative education environment where we can actually enable the next generation of talent for our society to cross-cut and do this kind of interdisciplinary work. We have never seen technology accelerating this fast, or being this multidisciplinary, in human history. And in the past five years at Stanford, at least, I'm part of this innovative experiment of creating a very multidisciplinary institute, the Human-Centered AI Institute,

where we experiment with innovative educational opportunities, from in-classroom education to public sector education, all the way to cross-cutting, multidisciplinary research and policy collaborations. I think it's so important that we innovate on educational opportunities so that we can answer the demands of this changing time and changing world.

 

>> Condoleezza Rice: And I might just say about HAI, that one of the things is that the humanities are very involved, so musicians and people in literature. So, yes.

>> Drew Endy: Are there any Stanford undergraduates here? Yes, I love your question, and I love that you're here. Because, in my opinion, you're the most important people at Stanford, because there's more of you and you're not messed up yet.

And then you leave and go do things. And to any of the students watching, I mean it. You're the most important people. So I teach a class this quarter called Inventing the Future. It's joint between bioengineering and the design school. And to your question, next week we're going deep on biotechnologies, but our guest on campus will be Ahmed Best, a storyteller and actor.

Ahmed, you wouldn't recognize him at first because he played Jar Jar Binks in Star Wars. But more recently, if you follow The Mandalorian on Disney+, spoiler alert, he's in episode 20. He's a Jedi Master, dual-wielding lightsabers, and he saves baby Grogu. Now, what does he have to do with an engineering course?

He's coming in in a way that's very different than Senator Warner. He's coming in around bioengineering, and he hears what I have to say, and he reads it back and he says, you're making a synthesizer. You're talking about your band. And when he says synthesizer, he means a musical keyboard.

And I'd never seen it that way, even myself, as naively optimistic as I am. And so isn't that a neat complementary view? A biosynth. And in our classroom setting, one of the tools he uses and gives our students is a question, how do you want the future to feel?

And he asked the students to say the words out loud, what do you want the future to feel like? It's very interesting. Nobody says multinational corporation.

>> Drew Endy: Right? But they say really important things. So I love your question, and I love that we have some students here. Thank you for being here.

But that's part of it, right? It's how do we tell the stories about what we wish for? How do we create a collective sense of purpose that allows us to advance our individual agendas?

>> Condoleezza Rice: Well, thank you very much. Thank you to all of you for attending. Thank you to my colleagues.

Herb, Fei-Fei, Drew, I hope that during this time you've seen why we think this kind of collaboration matters: across fields, but perhaps more importantly, across languages, the language of policy, the language of science and technology and engineering, but most importantly, perhaps, the language of humanity. Because these are transformative technologies, and human beings have tended to be pretty good at the knowledge side, not always so great at the wisdom side.

I think these are such powerful technologies that this time we'd better be pretty good at the wisdom side, too. So let me thank my colleagues, let me thank Jennifer Widom and Amy Zegart. And I just have to say I want to thank a few of our staff who are here.

I'm not going to name them all, because I will undoubtedly miss someone, but I especially want to thank the Hoover office here in Washington. Where are our folks? So if you can just raise your hands. Back there, the Hoover Washington staff, without whom this would not have been possible.

 

>> Condoleezza Rice: So stay tuned, we will continue. We think of this as continuous education, not just a report. So we look forward to further interaction.

>> Drew Endy: Thank you, Condi.

>> Condoleezza Rice: Thank you, it's great.

 


FEATURING

Condoleezza Rice 
Tad and Dianne Taube Director | Thomas and Barbara Stephenson Senior Fellow, Hoover Institution

Jennifer Widom 
Frederick Emmons Terman Dean of the School of Engineering, Stanford University

Amy Zegart  
Morris Arnold and Nona Jean Cox Senior Fellow, Hoover Institution

Herb Lin
Hank J. Holland Fellow in Cyber Policy and Security, Hoover Institution

Fei-Fei Li 
Denning Co-Director, Stanford Institute for Human-Centered Artificial Intelligence 
Sequoia Capital Professor, Stanford University

Drew Endy 
Martin Family University Fellow in Undergraduate Education (Bioengineering), Stanford University | Senior Fellow (by courtesy), Hoover Institution
