The Hoover Institution and Stanford School of Engineering held the launch of the Stanford Emerging Technology Review with Condoleezza Rice, Jennifer Widom, Marc Andreessen, and Richard Saller on Tuesday, November 14, 2023, from 4:30 to 6:00 PM PT at Hauck Auditorium, Traitel Building.

The Stanford Emerging Technology Review is a university-wide project bringing together science faculty across campus to outline key developments in ten frontier technology fields and their policy implications.

This event featured a conversation with Condoleezza Rice, Director of the Hoover Institution, Jennifer Widom, Dean of the School of Engineering, and Marc Andreessen, Co-Founder of Andreessen Horowitz, with welcoming remarks from Stanford University President Richard Saller. We also had lightning round presentations from Zhenan Bao, Herbert Lin, Allison Okamura, and Amy Zegart. The event marked the release of the Stanford Emerging Technology Review for the Stanford and Silicon Valley community.


>> Condoleezza Rice: Good afternoon, ladies and gentlemen. Thank you very, very much for joining us here at the Hoover Institution for a very special unveiling, if you will, of the Stanford Emerging Technology Review. This is a campus-wide effort. By the way, I'm Condoleezza Rice. I'm the director of the Hoover Institution; sorry, this wasn't in the witness protection program there.

Just wanted to make sure you know. But this is really a very special moment here at Hoover, where we are concerned with issues of policy, as our founder, Herbert Hoover, said, trying to improve the human condition. And we try to do it with data-based, evidence-based research that addresses some of the hardest policy problems that we face in our country and in the world.

There is no more challenging set of circumstances and set of issues these days than how to think about the transformative technologies that are all around us, that are changing the way we live, that are changing everything about us, that are even challenging, perhaps, what it means to be human.

And we at Hoover want to understand better the policy implications of these transformative technologies. And to be able to communicate to policymakers their obligation both to be concerned about the impact on institutions, but also to be concerned that we continue to innovate, continue to push forward. Stanford has stood through its entire history for that ability to innovate in a circumstance of academic freedom.

Our partner in this, who you'll meet a little bit later, is the School of Engineering and its dean, Jennifer Widom. You see, here at Hoover, we actually don't do the technology. We just talk about it. So what we did was to enlist people who actually do the technology, people who are at the front lines and the front door of artificial intelligence, and nano, and material sciences, and space, and synthetic biology.

And you'll hear from a couple of those people today as well. And so this has been a wonderful partnership between the Hoover Institution and the School of Engineering. I should mention that the dean of medicine, Lloyd Minor, is also a member of our advisory committee. And so this is a university-wide effort.

And just to show you that it's a university-wide effort, we have asked, and he has agreed, the president of the university, President Richard Saller, to come and do a little introduction for us. He is an academic leader and classical scholar, and he serves as Stanford's 12th president.

Dr. Saller is a scholar of Roman history who has previously served in several academic roles, including dean of Humanities and Sciences here at Stanford and provost of the University of Chicago. Prior to becoming president, Dr. Saller served as chair of the Stanford Classics Department. He's a dedicated teacher, and he's published widely.

He's a terrific scholar. And perhaps most importantly, he is totally dedicated to excellence here at Stanford, to excellence in research, excellence in teaching, excellence in clinical care. He is dedicated to the proposition that only through free speech and academic freedom can we truly search for the truth, which is, after all, the trust that we all hold when we become a part of a great academic institution like this.

And so thank you very much, President Saller, for joining us. And if you would join me here now at the podium. Thank you.

>> Richard Saller: Thank you, Condi. It's a pleasure to be here with you this afternoon. Stanford, I really do think, is uniquely qualified for this kind of project.

The Stanford Emerging Technology Review is launching at a critical moment. It's stating the obvious to say that this is a time of very rapid and accelerating technological change. When I arrived at Stanford 16 years ago, a cover of Stanford magazine said, innovate incessantly. And I thought at the time a better statement would have been, innovate thoughtfully.

And so this is the project that the Hoover Institution can contribute to in partnership with the university. It's crucial that we focus both on navigating the opportunities and on understanding the risks of the new technology. So SETR was born from a desire to connect government officials with Stanford expertise in emerging tech.

The development of effective tech policy requires a multidisciplinary approach. SETR draws on cutting-edge research coming out of Stanford labs, our technical expertise, as well as the university's strengths across the social sciences, in ethics, and in public policy to help inform discussions. SETR represents a remarkable collaboration across the university.

In addition to the Hoover Institution, five of the seven schools at Stanford are represented in SETR, and 11 of 15 independent institutes are also represented. This multidisciplinary approach reflects both Stanford's tradition of technological innovation from our home in the heart of Silicon Valley, as well as our long history of public service.

Stanford's tradition of government service dates back to the earliest days of the university. Stanford faculty and alumni have served on the Supreme Court, in Congress, and have led executive branch agencies. They've been ambassadors, cabinet members and world leaders working on issues from financial crises to geopolitics, from climate change to global diseases.

I see this project as another entry in that tradition. And in fact, two former cabinet members are involved in SETR: Condoleezza Rice, former Secretary of State, and Steven Chu, former Secretary of Energy. So this is a project that is uniquely suited to Hoover and Stanford University.

And with that, I will turn it over now to Professor Amy Zegart, the Morris Arnold and Nona Jean Cox Senior Fellow at Hoover and co-chair of SETR, and Dean Jennifer Widom of the School of Engineering.

>> Amy Zegart: Condi made sure that I did not go to Harvard. So I'm glad, Richard, that you made that clear.

Jennifer.

>> Jennifer Widom: Amy.

>> Amy Zegart: It's wonderful to see you. It's wonderful to have all of you here. What I thought we would do is start at the beginning, with the "aha" moment that led to the launch of the Emerging Tech Review. You know this story well, but for the rest of us: Senator Mark Warner, one of the leading tech experts in the Senate, and to be sure, there aren't many, but he is one, came to Stanford.

And he said, I want to know what you're thinking and doing in emerging technology. And we got our forces together and put together a day for him, and it was amazing, and it was nowhere near enough. And we realized we needed to do more. And you gave two things that were crucial in your early championing and leadership of this effort, your time and food.

So share with everyone why you decided to lead this effort and talk a little bit about the box lunch approach that you initiated.

>> Jennifer Widom: Sure. So the time and the food. And one more thing, our amazing faculty, and that was really what it was about. So Amy approached me and said, are engineering faculty at all interested in policy, for example, or impact on society?

And I said, absolutely. And you suggested that maybe they'd like to get together with people who make that their living essentially on the other side of campus. And so I just invited select faculty that I knew had a particular interest in the impact of their work, the policy implications of the work they were doing, work that can be confusing and almost scary sometimes, and they want people to understand it.

So I invited them; we had two lunches, I believe, with about a dozen faculty each. And it was just a great dialogue to hear these engineering faculty, who are usually very heads-down, worrying about their labs and their research, open up and talk about the broader implications of their work.

And how they could help others understand those implications, because they felt that it was important to bring that understanding to policymakers.

>> Amy Zegart: So, in the social science world, we hear a lot about emerging technology. What, in your mind, is different about the Stanford Emerging Technology Review?

>> Jennifer Widom: So the review itself, I think, is fairly unique in its focus on academia, its focus on harnessing the knowledge that we have here at Stanford in these areas.

We have been at the forefront of many of the emerging technologies that are in the review, and these are technologies that are really critical at this moment in time, and some of them really scary at this moment in time. And Stanford has a long history of working in emerging technologies.

So it seemed like a natural fit, if we wanna bring these technologies to policymakers, to tap our faculty who are working in these areas. And I do want to say our faculty have been very receptive, and I think that's really quite important, starting with the food and time.

They were quite receptive to talking with others, to bringing their knowledge to others, and not just themselves: they brought in their graduate students, their postdocs, they brought their whole ecosystem into this project. And I think that's really made a big difference.

>> Amy Zegart: So we talked a little bit before we came out.

75 different faculty have, in one way or another, participated in this effort, 20 postdocs, more than a dozen undergraduates. So let me ask you about academia. So we hear a lot, particularly in the past few weeks, about AI and public private partnerships, but there's also this role of academia.

What is the unique role that academia plays in the innovation ecosystem, and what are the unique challenges that you see from your vantage point as dean of engineering?

>> Jennifer Widom: Well, so the unique role, I think the most important aspect of the academic innovation is that it's not market driven, it's curiosity driven.

So our faculty work on problems that they think are interesting, are challenging, are important, and they're not driven by a need to make a profit, which work in industry is. They are also generally impartial. It's not like a news report; our faculty are bringing to the technology review their impartial explanations of the technology, where it is now and where it's going in the future.

So that's the academic perspective and always has been. Some of the challenges of academia can be when you do want to bridge to that market, sometimes that can be challenging. In AI in particular, there's a need for massive amounts of computation that companies have that we don't have.

And that's something that we're discussing a lot: how we are going to innovate, maybe differently from where the companies are going. But we also, again, are motivated by different things. So our faculty are motivated in the AI area, for example, by solving problems in sustainability, in advancing human health, curing disease, things that companies might not be aiming for the way we are.


>> Amy Zegart: So I want to ask you about building community, because we talked about it from the beginning, that we wanted to have a product and a process. The product is a series of educational deliverables. So you all have a copy of our first report, our mini report in your chairs, and other things that are going to follow on, which we'll talk about in a minute.

But the process was important, too, this community building part at Stanford. And so how do you see this initiative facilitating that research and education mission and that community building, not just between engineering and Hoover, but within the School of Engineering and other parts of campus?

>> Jennifer Widom: Right, so we have a long history of collaboration.

And collaboration is what has led to most of the innovations; I would say it's been woven into the fabric of Stanford for a long, long time. And I feel that this project has catalyzed new kinds of collaborations that we haven't had before. The objective of bringing our technology, our innovation, to policymakers who may not have the chance to, say, learn about it in layman's terms has been different for people, in a good way.

So our faculty, you'll hear from some of them, have learned how to talk to other people around the engineering school and around campus in new ways that I think are very appreciated by them. The ability to bring what they're working on to others who want to hear about it and want to understand it but aren't in the lab, for example.


>> Amy Zegart: As you know, one of the things we heard from former senior government officials on our advisory board is that in Washington, you can't just produce a report. You need to have an educational campaign; one report does not a campaign make. So talk a little bit about what's next, what you see in the future, and what excites you.


>> Jennifer Widom: Well, so I'm on the technology side of this project, and so I'm going to turn a little bit of this back over to you. What excites me on the technology side is that this is a snapshot of a moment in time. So we picked ten technologies; you can see the list.

They're obviously extremely important at this moment in time. They are moving very fast. They are having a big influence in working on world problems. I mentioned sustainability, for example, and human health and technology access and others. So I'm very excited that we picked these ten, and I'm very excited to see what the next ten will be.

So for me, that part of it is going to be a continued story about what is happening with these technologies, as well as what other technologies are gonna roll in over time that we are gonna feel are equally important to educate policymakers about.

Now, in terms of delivering a report and that not being a campaign by itself, I'll turn that over a little bit to you, Amy, to talk about what we do. We were on a Zoom call with our advisory committee, who I think you were quoting there. That was very eye-opening to me. I have to be honest, I don't really know that well how Washington works.

And I learned quite a few things in that call, about the fact that you can't just write what I would think was an amazing paper about a topic, or an amazing summary, and just give it to your congressman, and they're going to sit down at night and read the whole thing and come back with questions.

So I learned that we have to have a multifaceted approach if we're really going to achieve our goal, which is to bring knowledge of these technologies and this understanding to government and to policymakers. So I'll turn it over to you to talk about how we can do that most effectively and what my part is on the sort of technology side of things.


>> Amy Zegart: I think we're taking a lesson from the engineering side of campus, and we're going to rapidly prototype, learn, fail, and iterate from there. The first step is taking the show on the road right to Washington. So rather than expecting Washington always to come here, we're gonna go to Washington and do some briefings.

And then the question is, what's the right way to deliver this fabulous expertise from our faculty to policymakers in a way that they can hear it and use it in a continuous fashion? So we're going to be experimenting with a number of new things, whether it's podcasts or Ask the Professor.

We're thinking about a science and policy seminar series. We're not yet at the policy-by-tweet approach to explaining technology. But we're excited that we're gonna try a bunch of new things in the year to come. So I want to end by asking you: you've had this amazing career in computer science and electrical engineering, chair of the department, dean of engineering.

You see a broad array of emerging technologies. You talk a little bit about what excites you. But as you look over the horizon, what is most exhilarating to you about emerging technology and what most worries you?

>> Jennifer Widom: Sure, okay, let me start with the exhilarating. You'll all think I'm gonna say AI, that is certainly the topic of the day.

For me, one of the things I got most excited about when I became dean of the School of Engineering was learning about the breadth of engineering as a field and how impactful areas I knew nothing about were. I'll take material science. Material science, making new nanomaterials, is key to energy, key to the future of the planet.

I didn't really know about that, and I got pretty excited when I learned about that. Another area I didn't know a lot about, biotechnology, synthetic biology, all kinds of truly amazing discoveries being made in biology and how they're being put to use to improve us, all of us, our human health.

I'm very excited about what's going on in computer science, what's going on in AI, but maybe most excited about the breadth of what's going on across all of the engineering fields and how each of those can have a really substantial impact on the problems that are facing the world today.

Now, in terms of what I'm concerned about, certainly my biggest concern is about misuse of the technology and people using it for purposes that I think most of us, everyone in this room would agree are not productive for society. I'd love to hear what you're most concerned about as well, if we have a moment.


>> Amy Zegart: I'm most concerned about us, about our society and the division in our society. I think technology can do remarkable things, but the risk is that technology is misused by a society that doesn't believe in truth, that doesn't listen, that isn't united in common values. And values underpin everything, both domestically and in our inspirational power in the world.

So it's not a technological worry I have, it's a human worry that I have. Right, well, that makes sense. So just to give you a preview of coming attractions, we're gonna get a little taste of the breadth of emerging technology in three lightning round talks that we're about to hear from key players in the Stanford Emerging Technology Review.

First we're gonna hear from Professor Allison Okamura, who's gonna talk about robotics. Then we'll hear from Professor Zhenan Bao, who's going to talk about material science. And then we'll hear from Dr. Herb Lin, the editor of the Emerging Tech Review, who's going to talk about cross-cutting themes.

But please join me in thanking Dean Jennifer Widom.

>> Allison Okamura: Hello, I'm Allison Okamura. I'm a professor in the Department of Mechanical Engineering. And I'm pleased to have contributed to the section on robotics, which was developed based on lots of interactions with students and colleagues in this very interdisciplinary field across many departments at Stanford.

For robotics, we have to begin by asking the question, what is a robot? For me, a robot is a human-made physical entity that has ways of sensing and acting on the world around it. This technology is grounded in physics, so there's a physicality that goes beyond just artificial intelligence: robots have the ability to create physical effects on the world around them.

This can make them life-saving, and it can also make them dangerous. Robots today are used primarily for tasks that we call the three D's: dirty, dull, and dangerous. This includes things like manufacturing lines, which are dull; things that are dangerous, like disaster assistance; and others like military service, security, and transportation.
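To make that sense-and-act definition concrete, here is a minimal, hypothetical sketch of the loop every robot runs in some form: sense the world, decide, then act on it. All of the names and thresholds below are made up for illustration; they are not from the talk or from any real robot.

```python
# A minimal, illustrative sense-decide-act loop (hypothetical: names and
# thresholds are invented for this sketch, not taken from any real system).

from dataclasses import dataclass

@dataclass
class Observation:
    distance_to_obstacle_m: float  # what the robot's sensors report

def sense() -> Observation:
    # Stand-in for reading real sensors (cameras, lidar, joint encoders).
    return Observation(distance_to_obstacle_m=1.2)

def decide(obs: Observation) -> str:
    # The "policy": here a trivial rule. In teleoperation, this decision
    # would come from a human operator rather than from software.
    return "stop" if obs.distance_to_obstacle_m < 0.5 else "move_forward"

def act(command: str) -> None:
    # Stand-in for driving motors, i.e., creating a physical effect.
    print(f"actuating: {command}")

if __name__ == "__main__":
    act(decide(sense()))  # one pass through the loop -> "actuating: move_forward"
```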

Robots can be autonomous. That is, they can operate on their own, or they can be directly controlled by human operators. Humans really excel at working in unstructured, even chaotic environments. Whereas autonomous robots, at least today, work best in very structured, controlled environments like manufacturing lines. So to give an idea of where we are and where we can go, let's look at the field of medical robotics.

How do robots affect human healthcare? Each year, thousands of surgical procedures are done with surgical robots. Most of them are actually robots developed right here in Silicon Valley. The robots that are used today are not autonomous; rather, they're teleoperated. A surgeon sits at a console and manipulates what amounts to some fancy joysticks.

And this controls a robot that has very small arms and hands that can actually be inside the patient's body to do the procedure in a much less invasive way than if the surgeon had to put their whole hands inside. And surprisingly, this doesn't just help the patient, it also helps the surgeon.

Whereas the patient can have a smaller incision and benefit from a decreased chance of infection and a faster procedure, the surgeon also sits comfortably at a console that's very ergonomic, rather than bending their back over the patient. So these types of technologies can help people on both sides of the equation, the providers and the receivers.
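As a rough illustration of what those fancy joysticks do, a teleoperation console typically filters the surgeon's hand motion to remove tremor and scales it down before sending it to the instrument. The sketch below is a minimal version of that idea; the scaling factor and filter constant are invented for illustration, not any vendor's actual values.

```python
# Minimal, illustrative teleoperation motion mapping (hypothetical parameters).

SCALE = 0.2   # motion scaling: a 10 mm hand motion becomes a 2 mm tip motion
ALPHA = 0.2   # exponential low-pass filter weight, to attenuate hand tremor

def map_hand_to_instrument(hand_deltas_mm):
    """Filter and scale a stream of surgeon hand displacements (mm)."""
    smoothed = 0.0
    commands = []
    for delta in hand_deltas_mm:
        smoothed = ALPHA * delta + (1 - ALPHA) * smoothed  # smooth out tremor
        commands.append(SCALE * smoothed)                  # scale the motion down
    return commands

# A jittery back-and-forth at the console becomes a small, smooth motion
# at the instrument tip inside the patient.
print(map_hand_to_instrument([10.0, -10.0, 10.0, -10.0]))
```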

And over the past decade, with these types of teleoperated robots, robot designers and medical teams have kind of learned how to work together, how to integrate these robots into the operating room and balance the physical capabilities of the robot with the intelligence of humans. And the goal, of course, is to optimize patient care over a wide variety of procedures.

But it's interesting that if you go into even one of these robotically equipped operating rooms, you'll see that the operating room doesn't really look that different from how it might have looked a few decades ago, before robots were introduced. But now, because of breakthroughs in machine learning and artificial intelligence, robots can have a more transformational impact on human health and health care.

And there's really a need for this. We have increasing human life spans and associated diseases that affect older adults. For example, in April of this year, the White House issued an executive order on increasing access to high-quality care and supporting caregivers. And of course, this is in response to the current growing social and economic crisis in meeting the needs of older adults.

The growing number of older adults who need assistance, combined with a severe workforce shortage of people who can assist them, is creating really high costs associated with elder care, not only in the US but worldwide. Some recent analyses show that we expect to have around 85 million people in the US by 2050 who are age 65 and older.

And as people grow even older, we're going to have more and more need for assistance with personal care in the home, and we just don't have the workforce to support this care. So researchers at Stanford and elsewhere are developing assistive robots that can provide help in the home, hopefully delaying the transition to, say, skilled nursing facilities and the like, so that people can age in place, live more independent lives, and overall improve their health.

Now, a one-to-one person-to-robot ratio, the way robots are used in surgery today, just isn't going to be feasible if robots are going to have these types of impacts. And so we're going to need to move toward more autonomy and benefit from recent developments in AI technology, while also considering safety, so that we, or at least Americans, become more willing to accept such technologies.

So as robots increasingly enter our lives, we're going to need to balance the accessibility of robot technology with these needs. And that equation is starting to work out in favor of robots. For example, robots need sensors in order to perceive the world around them, and the costs of things like cameras are coming way down as these component technologies get integrated into cell phones and other common platforms.

Robot bodies are also now lighter and cheaper, as designs are enabled by new materials and structures, some of which I'm sure Zhenan Bao will talk about in a minute. And some of these are even soft, creating physical safety. But it won't always be easy. We have supply chain issues that are some of the most important near-term infrastructure challenges in robotics.

The robotics field relies on the integration of so many different types of foundational technologies, and this means that progress in this field is heavily reliant on global supply chains for parts such as chips and materials. Now, these days, when people think about autonomous robots, they typically think about self-driving cars, because they are so much in the news, whether it's Cruise being banned in San Francisco, or seeing them around our neighborhoods here in Silicon Valley.

It's a robotics application that's going to affect many of our lives directly in the coming decade. But many of the autonomous procedures are going to be even more difficult because there are no rules of the road for helping someone in their home in the same way as robots on the road.

So we have high expectations, higher than what we have for other people. And we should, because robots are going to transform many of our lives through elimination, modification, or creation of jobs and functions. All of the challenges that come with these changes are going to have to be worth it.

And when we see how robots affect our lives, we're going to have to accept them being in our physical space, understanding their safety, understanding how they will change our work. And we'll have to understand how the benefits from these robotic technologies will balance out the challenges that we face.

Thank you very much.

>> Zhenan Bao: Hello, my name is Zhenan Bao. I'm a faculty member in the Department of Chemical Engineering and also, by courtesy, in Materials Science and Engineering and in Chemistry. It's my pleasure to be here and to be part of this very exciting team working on the inaugural review.

Here, I want to give you a highlight of the material science section. The key message is that material science is the platform technology underlying many of the advances in other research fields. Materials are essentially everywhere, from things you can see and touch, to very, very tiny atoms and molecules that are a million times smaller than the diameter of a human hair, or even smaller.

Material science really cuts across many technology areas, contributing to everything from stronger and lighter-weight aircraft, to biocompatible materials used for medical implants, to longer-lasting batteries for our electric vehicles, and also sustainable plastics. The goal of material science is really to understand how structure impacts properties and functions, and how manufacturing and processing impact structure and, as a result, performance.

The ultimate goal for material science is to be able to predict the best material to make, on demand, based on a given specification. Hopefully AI will enable us to do that someday. Therefore, broadly speaking, material science is really about understanding the synthesis and characterization of materials, the manufacturing of materials, and also computational modeling to predict materials.

Here I want to highlight some important applications that are generated by the discovery of new materials and properties. For example, new generations of wearable electronics made of skin-like materials are enabling us to continuously monitor a person's stress level or glucose level.

They also allow wound healing with less scar formation and less inflammation. Advanced manufacturing such as 3D printing is allowing the manufacture of football and bicycle helmet liners that are safer and better protect the user, and it also produced personal protective equipment during the COVID pandemic. Nanotechnology is a very active research field, a subfield of material science.

The reason it's very interesting to a lot of researchers is that when materials are scaled down from the bulk to tiny structures more than 10,000 times smaller than the diameter of a human hair, their properties become dependent on the size of the structure. For example, quantum dots were the subject of this year's Nobel Prize in Chemistry, announced just over a month ago.

They are basically semiconductor spheres that are a few nanometers in size, but depending on the size, the color of emission is very different: if it's one nanometer, it's blue in color; if it's three nanometers, only three times larger, a difference of just two nanometers, it becomes red in color. Quantum dots made TVs brighter and more colorful; you may have seen QD TVs in the appliance store.
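That inverse relationship between dot size and emission energy can be sketched with the standard Brus effective-mass model. The parameters below are textbook CdSe-like values chosen for illustration, since the talk names no specific material, so the resulting numbers are only indicative of the trend.

```python
# Illustrative Brus-model estimate of quantum-dot emission vs. size.
# CdSe-like parameters are an assumption for this sketch; results are rough.

import math

E_BULK = 1.74        # bulk band gap, eV (CdSe-like)
ME, MH = 0.13, 0.45  # electron / hole effective masses, in units of m0
EPS = 9.5            # relative dielectric constant
HBAR2_2M0 = 3.81     # hbar^2 / (2 m0), in eV * angstrom^2
E2_4PIEPS0 = 14.4    # e^2 / (4 pi eps0), in eV * angstrom

def emission_gap_ev(diameter_nm: float) -> float:
    """Bulk gap + quantum confinement term - electron-hole Coulomb term."""
    r = diameter_nm * 10.0 / 2.0  # radius in angstroms
    confinement = (HBAR2_2M0 * math.pi ** 2 / r ** 2) * (1 / ME + 1 / MH)
    coulomb = 1.786 * E2_4PIEPS0 / (EPS * r)
    return E_BULK + confinement - coulomb

for d in (3.0, 5.0, 7.0):
    gap = emission_gap_ev(d)
    print(f"{d:.0f} nm dot: gap ~{gap:.2f} eV, emission ~{1240 / gap:.0f} nm")
# Smaller dot -> larger gap -> bluer light; larger dot -> redder light.
```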

They also make solar cells more efficient and cancer detection more sensitive. Other applications of nanomaterials include, for example, the molecular nanoassemblies that allowed the COVID vaccines to be stabilized and delivered to humans. There are also two-dimensional semiconductors, atomically thin layered sheets of semiconductor.

They are being researched to make next-generation integrated circuits and semiconductor chips even faster than today's. Finally, a lot of advances are being made in nanocatalysts that can be used to convert sunlight into electricity, and to take carbon dioxide from the environment and convert it into valuable chemical fuels or other useful chemicals.

Going forward, it is important to better understand the environmental and health implications of nanomaterials. And AI will be a very important tool to combine with material science to generate even better and more powerful new materials. On the policy side, it is important that there's no ambiguity about what is considered fundamental research versus export-controlled research, to prevent unintended hindering of innovation by creating obstacles for non-US researchers to work in the US, or by deterring the international collaborations that are critically important for advancing the field.

So, in summary, the key takeaways are: material science is foundational; AI will help material science become even more powerful; and material science requires long-term investment in workforce development, fundamental research, as well as infrastructure.

Material science requires interdisciplinary and international collaboration. And finally, more funding support is needed to translate fundamental research to commercialization. Thank you.

>> Herbert Lin: Thanks, I'm Herb Lin. I'm the director of SETR and also the editor. My job is to talk about what we learned after looking at ten different fields, important fields of technology; I think you'd acknowledge that all ten fields are important.

But what was surprising, or could be surprising, is understanding how they're all interconnected, and that, for example, advances in one field often lead to advances in others. We've had some mentions now of how AI is transforming various fields, AI's impact on material science and semiconductor design and space exploration and robotics and so on.

And so you might think that AI is the fundamental technology that underlies everything else. Well, it's certainly one fundamental technology, no question about it, but material science is, too, okay? As we just heard, everything is atoms: you're all made of atoms, and you're sitting on atoms right now.

And so material science gives us better batteries and more advanced robots, better energy storage, different kinds of concrete, and so on. Better energy technologies give us better robots and spacecraft. But we also sometimes see that the fields that are helped return the favor.

So, for example, better chips, better semiconductors, have enabled advances in AI, okay? Materials science will help to develop better semiconductor materials, which will lead to better chips, which we hope will lead to better AI and so on. So, there are these interesting feedback loops between the technologies, so there is no one fundamental technology.
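One way to picture that point is to treat the fields as nodes in a directed graph of "advances in X enable advances in Y" edges and look for cycles. The toy sketch below encodes the examples just mentioned; the edge list is a paraphrase of the talk, not an official taxonomy.

```python
# Toy dependency graph of "advances in X enable advances in Y".
# The edges paraphrase the talk's examples; a cycle is a feedback loop.

enables = {
    "materials":      ["semiconductors", "robotics", "energy"],
    "semiconductors": ["AI"],
    "AI":             ["materials", "robotics", "space"],
    "energy":         ["robotics", "space"],
}

def find_cycle_through(start):
    """Depth-first search for a path that returns to the start node."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in enables.get(node, []):
            if nxt == start:
                return path + [start]
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

print(find_cycle_through("materials"))
# -> ['materials', 'semiconductors', 'AI', 'materials']: no single root field.
```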

The most fundamental technologies often build on each other; that was one of the interesting things that came out of this. The second interesting aspect was what we call the democratization of advanced technologies. It used to be, many years ago, that the United States had a lot of control over and leadership of new technologies, through government funding and expertise.

We were number one. Those technologies are now spreading globally, to other countries including allies and partners, by the way, not just adversaries. In other words, it's not just a China problem; there's lots of scientific talent in other nations. And the expertise to implement many of these technologies is diffusing downwards in many fields, like synthetic biology and robotics; what used to take a PhD to do, you can now do in a high school lab.

And this kind of democratization, horizontal and vertical, creates a complex policy environment. Decentralized technology and decentralized talent mean there's no longer a way to get a handle on it in any one place. If you're going to have any influence at all, it's a whole-of-government, whole-of-society kind of operation, rather than one centralized point of control.

And with all of these actors, the policy space is a whole lot more complicated, okay? Some of the consequences of this: more actors means more policy complexity, right? It's no longer just our policy toward the Soviet Union. We have to worry about how policy affects high school students now, I mean, in science; that's something we never had to worry about before.

And with these other actors, state and non-state actors, they find ways of challenging our interests in ways that they didn't before. Technological advantages based on monopoly are not gonna go away, but they will diminish, right? It used to be we were number one, we could keep it all to ourselves.

We can't do that very much anymore. Other nations will have capabilities, and we can't exercise monopolies. Winning isn't winning anymore. That is, it used to be we could win a race and the race would stay won. Not anymore, okay? Leadership is gonna go back and forth, and there's no such thing as resting on your laurels anymore.

Constant competition is the name of the game now. And more diversity in bureaucracy and in ethics and so on. Different ways of looking at the world, different organizational structures, these have consequences. And in our country, we care about operating ethically. We establish bureaucracies to help us do that.

Other countries, if they don't care about ethics as much, maybe they're gonna work faster in this. That's an interesting policy challenge, how we're gonna deal with that. And the last thing I wanted to talk about, we have 12 of these themes. The third one I wanna talk about is that technology advancement, we find, is more than just scientific advancement.

It requires grappling with economic, policy, and social factors in important ways. So, for example, we've all heard about the breakthroughs in nuclear fusion, and they have been genuine breakthroughs. We have reached more than breakeven for the first time in history. We've done that twice now, where we got more energy out of a controlled fusion explosion than was delivered by the lasers used to compress it.

That's a big deal, okay? But scientific feasibility, showing that breakeven is possible, is not sufficient. It's necessary, but it's not sufficient, right? You need to demonstrate engineering feasibility. What that means in this context is that it's not enough to achieve scientific breakeven; the fusion reaction has to put out more than all the energy used to power the lasers that drive that little miniature fusion explosion.
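To put rough numbers on the difference between the two kinds of breakeven, here is a back-of-the-envelope sketch using the widely reported figures from the December 2022 National Ignition Facility shot. The figures come from public reporting, not from the talk, and are approximate.

```python
# Scientific vs. engineering gain for the December 2022 NIF shot
# (approximate public figures; illustrative only).

laser_energy_on_target_mj = 2.05  # MJ the lasers delivered to the fuel capsule
fusion_yield_mj = 3.15            # MJ the fusion reaction released
wall_plug_energy_mj = 300.0       # rough MJ drawn from the grid to fire the lasers

q_scientific = fusion_yield_mj / laser_energy_on_target_mj  # ~1.5: breakeven passed
q_engineering = fusion_yield_mj / wall_plug_energy_mj       # ~0.01: nowhere close

print(f"scientific gain  ~ {q_scientific:.2f}")   # > 1, the 2022 milestone
print(f"engineering gain ~ {q_engineering:.3f}")  # << 1, the gap described here
```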

And that kind of breakeven, engineering breakeven, we are not anywhere near. So real energy production requires engineering feasibility as well as scientific feasibility. And even the most optimistic forecasts say fusion is 15 years away. And even beyond engineering feasibility, there's social feasibility; there are economic and social challenges.

For example, where is the fuel gonna come from? It turns out that the fuel you need for nuclear fusion has to be produced in nuclear reactors or accelerators. That's gonna be hard. Are we gonna build lots of nuclear reactors to fuel fusion reactors? It's going to be interesting. So the summary there is that tech advancement requires more than just scientific breakthroughs; we can't look at just the science.

We have to look at the entire engineering and social and economic and political complex surrounding a technology for it to succeed. And with that, I will turn it over to Condi Rice and Marc Andreessen, who will come out and join me now. Thank you.

>> Condoleezza Rice: Before I introduce our esteemed guest, Marc Andreessen, I'd like us to thank again, particularly, our faculty, who have put an awful lot of time into this; Jennifer Widom; Amy Zegart; and her co-pilot, John Taylor.

I'd just like us to thank everybody who did so much to make this possible, so thank you.

>> Condoleezza Rice: Hello, Marc.

>> Marc Andreessen: Hello, good afternoon.

>> Condoleezza Rice: How are you?

>> Marc Andreessen: Good, great.

>> Condoleezza Rice: So first of all, let me just introduce you briefly. I think people know Marc Andreessen, co-founder and general partner at the venture capital firm, Andreessen Horowitz.

To say that he's an innovator and creator is to understate the case. He's a real pioneer in software, software now used by a billion people, and one of the few to establish multiple billion-dollar companies. Marc co-created the highly influential Mosaic Internet browser and co-founded Netscape, which later sold to AOL.

He also co-founded Loudcloud, which became Opsware and sold to Hewlett-Packard. And he served on the board of Hewlett-Packard. Marc holds a BS in computer science from the University of Illinois Urbana-Champaign, and serves on the boards of Andreessen Horowitz portfolio companies Applied Intuition, Carta, Coinbase, Dialpad, Flow, Golden, Honor, OpenGov, and Samsara.

You're busy. And he's on the board of Meta. So, Marc, you kind of shocked the world a little bit ago with a manifesto. And I just wanna ask you, did you intend to be provocative?

>> Marc Andreessen: So it turns out being pro-technology is a very radical position these days.

For those of you who have read it, one way to read it is just as Clinton/Gore liberalism from 1995 written down. And the fact that people have been shocked by it, or shocked that it's there, I think is as much a sign of just how much the times have changed.

I mean, the attitudes have really, really seriously changed over the last 25 years. And so, in a sense, it's very radical; in a sense, it's very much not. The reason I wrote it, quite frankly, is because we work with young tech founders and engineers all the time. And in my view, they and the broader society are just on the receiving end of what I describe in the manifesto.

It's just this constant demoralization campaign that basically takes on the most pessimistic possible interpretation of anything new that happens in tech. And it's been building for a long time, but I think it's just gotten to the point where we're sort of in the theater of the absurd.

So I at least thought it would make sense to kinda write down the counterargument.

>> Condoleezza Rice: I'm going to quote your unconditional defense of technology and ask you to say a bit more about it. We are told that technology takes our jobs, reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is on the verge of ruining everything.

Well, now, that's quite a statement. So yes-

>> Marc Andreessen: Now, the good news is you don't have to read the New York Times tomorrow.

>> Marc Andreessen: Cuz I covered it all.

>> Condoleezza Rice: Yeah, you covered it.

>> Marc Andreessen: Okay.

>> Condoleezza Rice: So essentially, what you are trying to do in this manifesto is to say that we've become overly cautious about technology.

And is your concern about our overly cautious approach to technology that we will not allow innovation, that we will frighten off innovators? We'll come to this in a moment, but that regulators, who will regulate even if they don't understand what they're regulating, might in fact cut off pathways to innovation?

So what is it about what you're calling techno-pessimism here that really concerns you at this particular moment? You mentioned the young tech entrepreneurs, innovators. Is it that they will not feel appreciated in the way they were maybe even ten years ago, when Silicon Valley was sort of the heart, with people coming here from all over the world to understand Silicon Valley?

So what is it that concerns you?

>> Marc Andreessen: Yeah, so take a little bit of a broader perspective on this. So there's a website called wtfhappenedin1971.com. And WTF, for those of you who don't recognize that, ask your kids; they'll tell you the first two words are "what the."

And so, WTF happened in 1971? It's this site that basically has chart after chart after chart of social and economic trends in the US that changed. And I should also say, I take this very personally: I was born in 1971, so the timing is exquisite.

I like to think it's not all my fault, but just a tremendous number of things basically started to change in the 1970s. And one of the things that changed was the sort of national attitude, which turned starkly negative on tech. There's actually this very interesting juxtaposition Peter Thiel talks about: Woodstock and the Apollo moon landing happened basically in the same week.

And basically the culture decided to go away from the moon landing and basically towards the values of the Woodstock generation. And then basically the applied version of this that happened was Richard Nixon in 1971, proposed something he called Project Independence. And Project Independence was actually a Kennedy-esque kinda call for national greatness.

But his form of it was, he said we should achieve energy independence by 1980 with clean energy. And the way that we should do that is we should build 1,000 new civilian nuclear power plants in the United States over the course of the next nine years. We should cut the US energy grid completely over to nuclear power and electric energy.

We should completely stop fossil fuels. We should cut over to electric cars. Electric cars are actually a very old technology; they were invented before internal combustion cars. We could have cut over to electric cars at any point. And consequently, we would go completely emissions-free across the entire US energy sector.

It's a very exciting call to action. He also, in that same period, created the Nuclear Regulatory Commission, which then prevented that from happening. It's one of these situations where we just have developed this pattern in American society where, basically, when it comes to tech topics, we shove the accelerator down as hard as we can.

And then we shove the brake down as hard as we can at the same time, and we kind of expect something to happen. Sitting here today, there has been, I forget the exact number, either zero or one new nuclear power plants approved, basically, since the Nuclear Regulatory Commission was formed.

And look, this is playing with live ammunition. This is a very kinda big topic. If you're on the right, you're like, my God. The government basically just strangled one of the great new industries that America could have dominated. If you're on the left, you're like, my God, we could have solved carbon emissions 40 years ago, and we didn't.

So basically, that set the pattern. And that's now playing out in sector after sector after sector. Another thing to look at today is that you have these industries that are sort of chewing up larger and larger shares of GDP every year, specifically healthcare, education, and housing.

And then you could just say, generally, law, government, administration is a sector. And these are sectors that are just exploding in sort of size, are characterized by either zero or negative productivity growth. They're basically impossible to introduce technology into. We live with the sort of real world consequences of that every day.

And so yeah, this topic, I think, carries real implications.

>> Condoleezza Rice: Yeah, I mean, it sounds as if, in part, it's a mismatch. You get the technologies, but the institutions somehow don't quite accept them or push them forward. The nuclear case is a very interesting one. I was just in France.

France gets 80% of its generating power from nuclear. And just next door, Germany has shut down nuclear. And so one of the things that we're trying to do through SETR is to be concerned about how policy has an effect. So the Nuclear Regulatory Commission is a really interesting example.

We will return to that. It's one of the hot topics.

>> Marc Andreessen: One more thing, Germany just shut down the nukes, of course, like, a way to interpret basically everything happening around Ukraine is basically, it's a European energy war, in a lot of ways. In that basically, Europe has been subsidizing the Russian military machine through the purchase of Russian oil and gas for all these years.

Some of you remember Trump actually went to the UN and gave this speech basically excoriating the Germans for becoming energy reliant, right, on Russia. And there was this famous viral video at the time where the German representatives at the UN were making faces about what an idiot Trump was.


>> Condoleezza Rice: Actually, people have been telling them ever since Ronald Reagan not to become dependent on Russian natural gas.

>> Marc Andreessen: And they did, right? And to your point, they shut down their nuclear plants. We, by the way, of course do the same. In California, we're completely committed to European policy on everything.

So we shut down our nuclear plants.

>> Marc Andreessen: We just think everything Europe does is great. So that's the modern version. There's an origin to that story, which is actually this idea of the precautionary principle. The precautionary principle, those of you who are experts in science and technology policy know this.

This is like a formal process. It's sort of more widely known in Europe than it is here. But it's a real thing. But even if the term is not used, it's sort of the theory that's applied to technology policy today. And basically what the precautionary principle says is that for any new technology that the people building the technology have to prove that it's harmless before they're allowed to roll it out.

It's actually very similar to the FDA's drug approval process: you have to prove that it doesn't cause injury before you can roll it out. The principle was actually created by the German Green Party in the 1970s to stop nuclear power. All right, so that was actually its purpose, and now it's basically being applied across the entire economy.

Of course, the problem with it, the problem with the precautionary principle, it sounds great. Of course you want to prove that things are not going to harm people. The problem is if you back test that theory and if you basically apply that same principle to any basically important technology of the last basically 4000 years, starting with fire, the shovel, the wheel, the automobile, electricity, you just go right down the list.

They all would have been stopped in their tracks by the precautionary principle. And so one of the ways to think about the last 4000 years of human civilization is that we basically did not apply the precautionary principle up until 1971, and we've applied it since.

>> Condoleezza Rice: So if you don't apply a precautionary principle, however, is there any responsibility of the technology entrepreneur?

Of the technologist, I mean, not to use the word entrepreneur. Is there any responsibility to think about the implications of your technology? I'll give you, we're gonna talk about artificial intelligence in a minute. I was at a dinner recently on artificial intelligence, and somebody asked me, will it become a weapon of war?

And I had to say, look, every technology has become a weapon of war. When you think about nuclear: when we learned to split the atom, we were able to turn on the lights with civil nuclear power. We were able to do medical isotopes. We also built the bomb.

So inherent in any technology, there is the potential for it to be used for these purposes. So do we just wait and see what happens? Or how would you think about that as a technologist?

>> Marc Andreessen: Yeah, so this is a really fundamental question that's gotten sort of deeply embedded in human culture for a very long time.

I always cite the sort of myth of Prometheus that the Greeks kinda encoded into our culture, which is basically this idea that Prometheus was the god who brought fire down to man. And for doing that, he was chained to a rock by Zeus and has his liver pecked out by an angry bird every day; it regenerates overnight, and then it goes on again.


>> Condoleezza Rice: I'm sure Richard, our classicist, knows that story, our president.

>> Marc Andreessen: And so this idea, basically, of technology, again, starting with fire, this idea of technology as a double edged sword is obviously very fundamental. There's a more recent kinda version of this, the Frankenstein myth, and then in our time, it's the Terminator Skynet kind of thing.

So, look, this is a very fundamental thing. I think it's obviously, it's certainly true. The technologies are double edged swords. I think they, basically, any sort of effective technology ends up having sort of negative uses. You get into these very interesting questions about gunpowder, for example, which is like, how do you score gunpowder after all this time?

Was it basically all the death that it caused? Or was it basically the establishment of the nation state and the ability to provide for defense and law enforcement, social order? We've had this conversation before, actually, about nuclear weapons. Did nuclear weapons actually, were they actually destructive or on net did they actually prevent World War III?

So you get into these very fundamental questions. These are real questions. But then the very next question you have to ask is, are the technologists who invented the technology actually the ones who should be answering these questions? The fact that you invent the computer or the radio or whatever, or the AI or the nuclear bomb, does that give you basically special privileges in our society to be able to answer those questions?

Probably everybody here saw the movie Oppenheimer. One of the things that the movie did a really good job of, one, is it wrestled with this question a lot, but it also did a really good job of basically showing how crazy, how politically crazy, a lot of the atomic scientists were at that time.

And a lot of them were actual communists. And a fair number of them, it turns out, were actually Soviet spies, right? The first Russian atomic bomb was, as they say, wire-for-wire compatible with the US Nagasaki bomb. And it's cuz you had these kinda crazed, basically Antifa nuclear scientists handing everything over to the Russians.

And then, by the way, you also had a similar level of extremeness on the other side. You had John von Neumann, who was on the right, who was not in the movie, but he was critical to developing the bomb. He was one of the great geniuses of the century, and he actually advocated, in 1945, a nuclear first strike against the Soviet Union.

And his line was, if you tell me we should bomb tomorrow, I say, why not today? If you tell me we should bomb at five o'clock, I say, why not two o'clock? Okay, so crazy communist Oppenheimer, crazy right-winger von Neumann, right? Like, how about we not listen to either of those guys, right?


>> Marc Andreessen: And I'll put it this way: I don't think it'll shock anybody here to say, look, when people cloister themselves away in an ivory tower for their lives, working on some technical formula, they might not emerge with totally sensible politics. I know that might be, you know, a crazy concept.

And so, I guess maybe I'll wrap it up by saying, these questions are too important to be left to the technologists. These are very, very fundamental questions that have to be dealt with at a societal level. They have to be dealt with very deeply, at a philosophical level. They have to be dealt with at a political level. And one of the things you see happening right now is we have these quote-unquote experts. Well, we saw it during COVID: every virologist all of a sudden was sort of a public health expert getting involved in all kinds of societal engineering.

And I just think that the fundamental assumption there is just deeply wrong.

>> Condoleezza Rice: Yeah, I want to come back to that, because, again, pointing to what we're trying to do here, which is to marry these two worlds in some ways. But let me talk for a minute about artificial intelligence.

You had a quote, we believe artificial intelligence is our alchemy, our philosopher's stone. We are literally making sand think. When I read that, I thought, is that a good thing that sand thinks? So why don't you talk a little bit about why artificial intelligence is special in this way?


>> Marc Andreessen: Yeah, so, well, okay. Let me start by saying why it's good; then we can see if it's special. So why is it good? Look, basically, all of human civilization, everything we're surrounded by, is the application of intelligence, right? Our ability to build a building like this, our ability to turn the lights on, and our ability to have the discussions we're having, it's all based on human intelligence.

Look, we have used prosthetics for as long as we've been able to develop technology to try to augment our intelligence: spoken language, then written language, then mathematics, then computers, and many other technologies. We're all constantly trying to find ways to use tools to make us smarter, or at least to leverage the intelligence that we have in more interesting ways.

And then, look, the holy grail for computer science for the last 80 years, actually all the way back to literally 1941, 1942, has always been: okay, computers are hyper-literal, right, and they're really good at math calculations at super high speed. But they famously fall down when you expect them to interact with the real world, when you expect them to interact with people, when you expect them to do natural language anything, or to understand concepts.

And so there's been this 80-year research project, including, by the way, many decades here at Stanford, to actually try to get computers to think at least a little bit more like people do. And basically, it turns out that research program was correct. And by the way, that comes as a huge surprise, because it was an 80-year research project that had many, many false starts.

There were many crashes and wipeouts along the way. When I came to Silicon Valley in '93, it was a deep AI winter, where nobody believed in these ideas, and it was basically a dead field. And basically, it turns out it was right. And what it presents is the opportunity to leverage human intelligence in all the ways in which we think we have problems that we need to solve.

And that could be everything from biomedical research to a very broad cross-section of what you would view as obviously good applications. Education: AI is, I think, already transformative for education, but I think it's going to be monumentally transformative in the years ahead, in a very positive way.

But look, at the same time, it is also absolutely correct that AI is a weapon of war. AI is actually already a weapon of war. Both the US Department of Defense and the Chinese military declared, even before all this recent stuff, as far back as the mid-2010s, that AI and autonomy and perception, basically automated weapons, were the future of both of those militaries.

The US DoD describes this as what they call the third offset. Offsets are basically ways that you're essentially guaranteed to win a war. The first offset was nuclear weapons. The second offset, I think, was precision-guided munitions and maneuver warfare. And the third offset is autonomy.

So China's absolutely racing ahead to apply AI to weapons, and there are very active DoD programs and defense contractors and new startups pursuing that. There is the opportunity to apply those new AI warfare technologies in ways that I think are very beneficial, and we could talk about that. But at the same time, exactly to your point, we are entering a world in which there will need to be a new set of military doctrines, and those will need to be thought about very carefully.


>> Condoleezza Rice: Well, let me have you be the optimist for a moment on AI. What excites you over the horizon about what it might be able to do?

>> Marc Andreessen: Yeah, I'll just give you a micro example. So we have a company called Shield AI, and it's co-founded by twin brothers.

One of them is a chip engineer from Qualcomm, and the other is a Navy SEAL. And so it turns out they're the perfect founding team for this kinda thing. And they have developed a drone for small-unit operations that will clear buildings. Think about what happened when we were active in Mosul or Ramadi or Fallujah, all these places, or literally what's happening right now in Israel, in real time, as they go into the hospital.

What the Army, the Marines, and the special forces have to deal with in any sort of urban situation these days, any counterinsurgency, is they have to go kick in doors. They kick in doors, they go inside, they clear the room. American Sniper had a lot of this, if you wanna see the fictional version of it.

And look, they go in with guns drawn. They don't know who the bad guys are. They don't know who has a weapon. They don't know who has a bomb or a grenade. They're going room to room. There's tremendous risk to the soldiers who are involved, there's also tremendous risk to the civilians, the innocents who are in there, and there are lots of accidents that take place.

And so now there's this drone from Shield AI, a little backpack drone. You still need the person to kick the door, but once you kick the door, you sail the drone inside, and the drone autonomously goes room to room, maps the house or the building, and, with a military sensor package on it, including infrared, relays all of this to the operators outside on their phones in real time.

So it's able to map the whole thing in 3D, it's able to spot everybody in it, it's able to give you video, and it's able to classify friend and foe, right? And so by the time the soldier actually goes in, if there's a bad guy with an AK-47 in the closet, you actually already know that.

And you know it's not this person, it's that person. And so there's the opportunity here, right? At least what I've always heard from people who've been in conflict situations is that the good news about human judgment is that humans presumably care about life, or at least some do.

The bad news is lack of information, lapses in judgment, adrenaline, people getting rattled under pressure in high-tension situations. The opportunity to apply technology to situations like that is to really drain the risk out and make it a more logical, fact-based process to do things like this.

I think there's the possibility that we might actually see a very big step function down in battle deaths-

>> Condoleezza Rice: Interesting.

>> Marc Andreessen: And civilian deaths.

>> Condoleezza Rice: Yeah, let me move to a couple of other hard questions. So you are a big defender of liberal democracy, and you're a techno-optimist.

I'm just assuming that you think it's really important that the United States and its allies win this race rather than the authoritarians. And you've mentioned the nuclear age. I've often asked people to do the thought experiment: suppose the Nazis or the Soviet Union had won the nuclear arms race rather than the United States.

What might that have meant? So I assume that, am I right that you are concerned that this race that we're in be won by liberal democracies?

>> Marc Andreessen: That's right, yeah, and I would classify it a few different ways. So first of all, I think it's important to win, and I'll come back to that in a second.

And then I think it's also important to not then just hand over what you've invented, right? Which is, like we discussed, how Russia got the bomb. So there's a part two in there, which is that if you win, you have to actually maintain your victory, and that's a thorny thing these days.

Look, the importance of winning in World War II with atomic weapons was obvious. I think the importance of winning in AI is very straightforward. And in what follows, I'm going to try to basically say what they have said. So this is not me casting aspersions on China, but just repeating what they've said.

One of the great things about the Chinese Communist Party is they tend to just say it. They put out these very amazing kinda statements and documents, these communiques, and it's all very 1970s dictator language, but with AI in it. And they have basically outlined a full national agenda around AI.

Part of it is on the defense side, which we talked about, but they've also laid out a full strategy for an AI-driven surveillance-state society. They have laid out an entire vision for this. And then they've laid out a vision and a goal that says this isn't just for domestic China, and there are certainly enough issues just in domestic China to worry about.

But they also have an expansive vision; they wanna take this to other places. And so they have these programs, like the digital Belt and Road and their smart-city program and their 5G program, where they have been using every instrument of state power that they have to proliferate their approach to network technology, to surveillance-camera technology, to e-commerce technology, logistics technology, drone technology.

They've spent the last 15 years deploying that into as many countries as they can. There have been big fights; the US has had big ongoing fights over it, and even major European countries have been adopting a lot of this stuff. And so the next shoe, which is dropping right now, is they're out with their AI strategy.

And they're gonna go to all these same countries and say, look, do you want the American, quote, free and democratic kinda model of AI for your country, or do you want the Chinese state model? And boy, I'm sure you tell your people you want the open and free one, but wouldn't it be nice to be able to track all your citizens in real time?

And wouldn't it be nice to know that political trouble is brewing before it actually hits the streets? And aren't you actually kinda jealous that we have the Great Firewall? And don't you kinda wish you had a Great Firewall of your own? And so they're basically gonna replay in AI what they've been doing in these other sectors.

And so there's basically, I think, the future of geopolitics, democracy, and human rights hanging in the balance.

>> Condoleezza Rice: So we've touched on a lot of kinda big policy themes indirectly here, but you made a statement earlier on. You said these are big societal, political issues that cannot just be left to the technologists.

So that's why we're doing this, because we really do believe in the ability of technologists who really are at the frontiers of the technology to collaborate with those who have spent a lot of time thinking about societal issues, economic issues, education, political issues, not to mention those who have to make decisions.

You mentioned the European regulations, but it's coming in the United States too: what do you regulate, how do you regulate, et cetera. So for the educational and collaborative process between technologists and those who worry about these institutions, what would you want them to know? How would you want them to think about it?

Because that's essentially what we're going to try to do with this effort in the Stanford Emerging Technology Review. So how would you advise us on making this work for the techno-optimist who nonetheless understands that there are big implications for this technology?

>> Marc Andreessen: Yeah, so I think the big thing I'd say is just, look, these are really complicated, multidimensional questions, right?

And even just in the conversation we just had, we've touched on science, technology, sociology. You very rapidly get into psychology, finance-

>> Condoleezza Rice: Geopolitics, right.

>> Marc Andreessen: Security, geopolitics, right, political science. And so these are very complicated, multidimensional things. I think the big thing is that this is a time, and these are the topics, where ivory towers are going to be very dangerous.

And let me start by saying one ivory tower is the tech industry. And this is a criticism leveled against the tech industry from the outside very routinely, which is: you guys are in this bubble, you're in this universe of your own in Silicon Valley, you don't kind of understand what's happening in the rest of the world.

I think there's a lot of truth to that. And so you asked what responsibility technologists have. I think one responsibility is that even if technologists should not be allowed to make these decisions, we should at least try to learn as much as we can about how this all actually works as it plays out over time.

A second dangerous ivory tower is to basically just sit in a room and think. There are a few people in history who have been good at sitting in a room and thinking; most people have done much better when they've gone out, talked to a lot of people, and really tried to learn things.

One of the great virtues of Stanford and Silicon Valley has always been that the world's best technology companies and technologists are within 20 miles of where we're sitting. So I think there's a great opportunity here to really have the minds of Stanford intersect maybe even more with the people actually building this stuff going forward.

And then look, there's a third ivory tower, and it's Washington, DC, the policy world. It is 3,000 miles away, and sometimes it feels like it's on another planet. And, you know, I go out to DC a lot, and I'm the alien, the invasive species.


>> Marc Andreessen: And they're kinda staring at me like, what is this guy smoking? And then they come out here and it's sort of vice versa. We've been in various forums where folks come out here and they're a little bit-

>> Condoleezza Rice: Yeah.

>> Marc Andreessen: There's a lot of long pauses in the conversation.

And look, I'll just tell you, there are not that many people in DC who deeply understand these issues. There's not a lot of technical expertise out there. I view it, again, as a real responsibility we have as technologists to go try to explain things out there, and we also have a responsibility to listen.

And I think to the extent that you guys can continue doing what you've been doing, or do more of it, to kind of bring the two coasts together, the Silicon Valley and DC worlds together, that's very valuable.

>> Condoleezza Rice: I've been saying they've learned how to spell AI in Washington.


>> Condoleezza Rice: I'm not really sure they still understand what it is. And so if we can do our part in bringing those two worlds together, it will be something that I think Stanford is, if not uniquely qualified to do, then pretty close to uniquely capable of doing. And so thank you very much for being a part of our inaugural launch of the Stanford Emerging Technology Review.

Thank you for everything that you've done to bring technology to the fore and to make this place what it is. You've been a part of this place, this Silicon Valley community, for a long time, and your impact on what we've been able to learn, know, create, and innovate is really quite remarkable. And so thank you for that very much, and come back anytime.

Please join me in thanking Marc Andreessen.


FEATURED SPEAKERS

Condoleezza Rice
Tad and Dianne Taube Director, Hoover Institution
Thomas and Barbara Stephenson Senior Fellow

Jennifer Widom 
Frederick Emmons Terman Dean of the School of Engineering 
Fletcher Jones Professor of Computer Science and Electrical Engineering

Marc Andreessen
Co-Founder and General Partner, Andreessen Horowitz

Richard Saller
President and Academic Leader, Stanford University 
