Simon Johnson (MIT Sloan Economics Professor and Former IMF Chief Economist) joins the podcast to discuss his new book "Power and Progress", co-authored with his MIT colleague Daron Acemoglu, on the interplay between technology, political economy, and economic development.


>> Speaker 1: It's that time of the year. Your vacation is coming up. You can already hear the beach waves, feel the warm breeze. Relax and think about work. You really, really want it all to work out while you're away. Monday.com gives you and the team that peace of mind.

When all work is on one platform and everyone's in sync, things just flow wherever you are. Tap the banner to go to Monday.com.

>> Jon Hartley: This is the Capitalism and Freedom in the 21st Century podcast where we talk about economics, markets, and public policy. I'm Jon Hartley, your host.

Today, I'm joined by Simon Johnson, who is the Ronald Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management and was previously chief economist at the IMF. He just co-authored a fascinating newly published book with his MIT colleague Daron Acemoglu titled Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

Welcome, Simon.

>> Simon Johnson: Thanks for having me.

>> Jon Hartley: So you're British? You were born in the UK, how did you first get interested in economics?

>> Simon Johnson: Well, I was looking for engineering, but on a grander scale, I think. And economics captivated me early on as something that has the potential to address really, really big issues.

So I was lucky to fall into it when I did and lucky to get the education that I received. And I think it's been all fascinating questions since then.

>> Jon Hartley: You were an MIT economics PhD student, certainly the best place to get an economics PhD. Were you influenced by anything, say, in your childhood?

I know for some who grew up in 1970s UK, it was sort of an important experience for them, or they grew up sort of under Margaret Thatcher when she was Prime Minister. Was any of that formative for you in terms of your outlook on global growth and a lot of the topics that you've written on, politics as well?

 

>> Simon Johnson: Yes, I think in retrospect, I'm from Sheffield in the north of England, a city that was an economic powerhouse. My family was involved in making screws on one side and making steel on the other side. And by the time I was aware of the economy, things had turned downhill.

And all of the industry there was troubled. There were a lot of job losses, people were moving away. So I think I'm still trying to understand exactly what went wrong there and why we lost so many jobs. It does give me a bit of a lens for thinking about economic decline, but also recoveries and more sustained growth and innovation and entrepreneurship elsewhere.

 

>> Jon Hartley: So I wanna get into your research agenda. You've written a lot of highly influential articles in political economy. How would you describe your research agenda? You've got over 80,000 citations, that is quite an accomplishment. You co-authored The Colonial Origins of Comparative Development, which is both your own most cited paper, as well as Daron Acemoglu's most cited paper.

You've been at the center of all this groundbreaking research on institutions and growth, which has been highly influential for several decades now. Certainly had a lot of influence on the World Bank and the IMF and their approach to promoting economic growth. How would you describe your research agenda in your own words?

 

>> Simon Johnson: Well, I think we're looking for the causes of poverty and wealth around the world, and many people, of course, have done that before, and we're standing on their shoulders. But there's a lot of really interesting questions, including, after the collapse of colonial empires, when technology in principle, could have flowed freely to many places.

And many places around the world could, in principle and in theory, have become much richer, they didn't, so why not? What are the actual impediments that these countries face? And what could we do in terms of academia or in terms of public policy or in terms of the IMF?

What could we do that would actually help the billions of people around the world who wanna live better?

>> Jon Hartley: And you were chief economist at the IMF, right after Anne Krueger? Or remind me again, in sort of your time at the IMF, what was the big thing happening then?

 

>> Simon Johnson: So I became chief economist at the IMF in early 2007, I held that job until the end of August 2008. So this was the run up to the global financial crisis. I'd worked there previously for two years under Raghu Rajan, who of course, is a brilliant financial thinker.

And he pulled the IMF, and the part of the IMF that I was in, the research department, towards thinking about the interaction between finance and macroeconomics. And when I got that job, I was Raghu's successor. We were immediately, the IMF, my department, in the hot seat in terms of understanding what was going on with mortgage-backed securities, with credit default swap spreads.

How will this spread around the world? What are the sensible interventions that could be made? So it was a ringside seat for the events that became, after I left, somewhat devastating to millions of people. And from that, I took away a lot of questions and a lot of interest in trying to resolve those questions with regard to making the financial system safer.

 

>> Jon Hartley: You wrote a book, 13 Bankers, really about that whole episode, and you've written quite a bit on financial cycles and business cycles as well. Now your latest book, Power and Progress, is primarily on economic growth, political economy and technology. Could you tell us a little bit about the book and the point that you're trying to make with Daron?

 

>> Simon Johnson: Yes, I think the book both for Daron and me, is a bringing together of work we've done on political economy together and work we've done on technology, which tend to be somewhat separate. And so we've merged these things. And I think it's the political economy of technological choice.

And it asks the question first and foremost, is technology predestined, just something that happens to you? And we think the answer is no, there's a lot of choices in there. And secondly, is it possible to redirect technological change? Can you push it one way or another? We see influential individuals doing that, we see companies doing that.

Can public policy do it? Can civil society do it? And if so, which direction? In which direction would you like to push it?

>> Jon Hartley: Got it, so these topics, directed technological change, distorted technological change, to me, sound a little bit like the words industrial policy. To what degree can policymakers really shape industries and direct technology, and what are their limitations?

To what degree should they really be involved in directing technology? Obviously, industrial policy has been a controversial set of words in economics for quite some time, certainly less controversial in some circles versus others. But I'm curious, what is your thesis on these topics, directed technological change and industrial policy?

 

>> Simon Johnson: Well, policymakers definitely can shape technology and innovation. They do it with the tax code, they do it through advantages for certain industries. They do it through the Department of Defense. I think a really interesting and still to be totally resolved question is, can they tilt it in a way that would be more broadly in the public interest?

So it's not about capture, it's not about lobbying, but it's about more good jobs, for example. And we think the answer is yes, we think it's not easy. We think it's quite a struggle, requires a fair amount of debate and argument. That's one reason we wrote the book, to try and push that debate forward.

For example, if you think about the way that artificial intelligence is being developed now. A lot of emphasis on machine intelligence, which is a euphemism for replacing workers, and replacing them in a way that doesn't raise the marginal productivity of the workers who remain employed in the same enterprise.

We would rather tilt it towards machine usefulness, which is a term that we've invented, but we're standing on the shoulders of a lot of computer science people who came before us. Who say, the point of machines should be to augment human capabilities, not to replace them, but to augment them.

I think it's easier to see in history which way we've gone at various moments. It's easy to talk about the creation of new tasks, stimulating the demand for labor, and potentially raising productivity and wages. But can that be done in a more deliberate, policy-driven fashion? I think it's a fair question, and that's what now lies before us, trying to sort out that question.

 

>> Jon Hartley: So there's a popular set of words in the labor economics and technological change literature, which is skill-biased technological change. And in this introduction of new AI tools, things like ChatGPT and generative AI, there's some who argue that it's actually gonna be skilled jobs that are gonna be severely impacted.

For example, coders, those in the service economy, those who are writing things, I guess, in their service jobs. We don't need as many coders anymore; we'll have fewer software engineers, who will use things like Copilot and ChatGPT to code faster. I'm curious, what is your view on that?

Do you think we're sort of in for this reversal where skilled workers have done so well for so long over their unskilled, less educated counterparts? Do you think AI is gonna trigger a big reversal there? Or do you think just broadly, both skilled and unskilled work is sort of in jeopardy here?

 

>> Simon Johnson: Well, robots are replacing unskilled workers as well pretty quickly. If you read the novels of Isaac Asimov, which I really do recommend to everybody, because he had a brilliant imagination, in the robot novels in particular, of what would happen, writing in the 1940s and 1950s.

The thinking was that robots would take over manual jobs first, and then later, as the robots became more evolved, they'd move on to cognitive tasks. I think what we've discovered, including over the last two years, is the opposite. That artificial intelligence is as good as or better than humans at some cognitive tasks, and really not as good as humans, and won't be for a long time, at much more manual tasks.

And that's probably because we've been walking around and walking across rough terrain for millions of years, us and our ancestors. But abstract thinking is tens of thousands of years old, and we've only been going to meetings and writing memos to each other for a few hundred years in the modern sense.

So I think if you split work into, I would call it, manual work and cognitive work, the pressure is on the cognitive jobs. Now within that, there's a lot of variation already developing. In some occupations, it seems like it's the most highly skilled, highly paid people who will benefit. But in other places, they're the ones who are gonna get fired, and then the people who own the capital are gonna replace them with much cheaper labor.

So I think it's absolutely all in flux and our main point is it doesn't have to be about replacing workers. It can be about augmenting the capabilities, including augmenting the capabilities of lower-skilled workers with some sort of cognitive function. And that's really interesting because then there could be much more by way of productivity gains, individual productivity gains, economy wide productivity gains, and wage gains that would filter down.

So this doesn't have to be a zero-sum game where some people win, some people lose. Many more people could win than lose in this instance, but that may not be the default course we're on currently.

>> Jon Hartley: That's fascinating. I'm curious, what do you think about this whole argument that we've seen over the past decade gain, I think, some traction in public intellectual circles, that this new era of AI is due to unleash massive unemployment?

And hence we need something like universal basic income to sort of step in and help those that are significantly displaced from this technological innovation.

>> Simon Johnson: Yeah, we're not big fans of Universal Basic Income, UBI, primarily because we think people like to work. Now it doesn't mean that they have to work 60 hours a week in backbreaking jobs.

But work seems to be important as a source of income, a source of identity, a source of political voice. And UBI, I think, lets the technology industry off the hook with regard to its impact. And I would rather they be on the hook and be responsible and not point fingers at other people and say, right, it's your job to take care of all these people who've gotten fired.

I think it's also the case, by the way, that in an economy like the US, you don't get that much unemployment. What you get is people pushed down to very low wage jobs, and they fall out of the labor force, so labor force participation declines. We've certainly seen plenty of that from the digital transformation of the past four decades, so I would not put too much weight on a UBI type approach.

 

>> Jon Hartley: Interesting. And you think about people like Milton Friedman writing in the early 1960s in Capitalism and Freedom; he famously endorsed the negative income tax, sort of a universal basic income. I think he famously had some sort of analogy that it's better to pay people something like a UBI-type payment rather than to have them shovel holes and fill them up again.

I think he had some story about the Soviet economy, where there were people doing something like this, I guess. Do you think that it makes sense to, I guess, direct people toward more unproductive types of work in that sense? Or do you think that that sort of view has any validity?

I mean, obviously, it's much more complicated than that, but do you think to any degree, these sorts of industrial sort of policy type ideas are sort of pushing people toward more unproductive forms of work? Where maybe it makes more sense to have something like, I guess, a more generous kind of redistribution system.

Maybe not like UBI, but maybe a more enhanced earned income tax credit or something like that to respond to this?

>> Simon Johnson: Yeah, I haven't seen anybody really proposing truly unproductive work. I mean, yeah, there's a question of how much you subsidize the construction of new chip fabs, for example, in the US.

And there's a very interesting debate about how much more expensive it is to build such fabs in the US versus in Taiwan, for example. The Taiwanese people say it's four times as expensive, my expert friends at MIT say it's 20% more expensive. So we're gonna find out.

>> Jon Hartley: There's sort of a worker shortage thing with that as well.

 

>> Simon Johnson: Well, so-.

>> Jon Hartley: That's one of the complaints that I think TSMC and some of these chip companies have with it.

>> Simon Johnson: Yes, they do say that, when you drill down into it, it's so interesting, so they use PhD qualified engineers on the shop floor in Taiwan, and it's not clear to everyone that that's a super productive use of PhDs.

And there may be a malleability of labor and a willingness of highly skilled people to do mundane or even routine tasks in Taiwan; you also hear that about South Korea, compared to the United States. So I think we are attempting to find out what kind of training is needed, where those people will come from.

These are good jobs in the fabs, who's gonna get the job? How much education do you need to be effective in the clean rooms and so on? Very, very interesting and important questions, but I'm pretty optimistic in this country, which is a big country with a lot of people who wanna work hard, I think we will find plenty of talented people.

Just back to Friedman for a second, I think that the question we grapple with is, first of all, are we taxing labor too much? Payroll taxes, raising the cost of labor at a time when people are thinking, machines versus labor, which one do I wanna go for?

I think that that's an important issue. And I think also that we have a lot of issues around the care economy, around how much are you willing to pay out of the public purse for home health aides, who take care of people who have really terrible health problems?

You don't wanna send them to hospital, that's way more expensive. Somebody has to take care of them, and their families can't afford to do it by themselves. How much are you willing to pay? Can you pay a living wage? And if we don't, who really goes into that work?

So I wouldn't call that at all unproductive work, but I think it is a massive question of how we handle additional longevity, how we handle many extra years of health, but also some ill health at the end of life. Those are tough questions.

>> Jon Hartley: On the whole tax side of things, it seems like in recent years there's obviously been a lot of interest in taxing wealth. There's been a big push from folks like Emmanuel Saez, Thomas Piketty, Gabriel Zucman on things like a wealth tax. And I guess that's more in the inequality frame, but it's certainly related to the AI and gains-to-capital discussion that we're talking about now. But I'm curious, people like Bill Gates have talked about having a robot tax. What do you think about that idea?

>> Simon Johnson: So I do think the tax code is a bit too tilted towards encouraging machines versus hiring labor. So I think that can be redressed in various ways. But in contrast to those people who are arguing for redistribution, who say, let the productive process do its thing, look at the distributional outcomes, and then do some tax and subsidy and welfare payments if you don't like the outcomes, we are much more about tilting or redirecting technological progress to change the outcomes that way. And that's primarily because I don't think a place like the US is ever gonna do a lot of redistribution.

And I think the nature of work and the kinds of jobs you get, that's important in and of itself. And redistribution, of course, doesn't address that; it just gives you a bit more money for the work that you've done. So thinking about how to redirect technological progress, that's our main agenda in this book.

 

>> Jon Hartley: Got it, but sort of on this question of massive unemployment, I guess the unemployment rate is between three and a half percent and 4% currently. You don't really see there being like a massive surge in unemployment, say, anytime soon, or?

>> Simon Johnson: Well, I think my former colleague and good friend Mike Mussa, who has sadly left us, liked to say his main advice for anybody who worked at the IMF was never forecast a number and a date in the same forecast, right?

So look, I think ChatGPT is a wake-up call, and there's a speed of change in some of these cognitive tasks, in the capabilities of AI, that is disconcerting. Because we know, particularly in the context of the US, we can handle a lot of different shocks thrown at us, but there is a speed of adjustment issue.

So I don't think we're gonna face mass unemployment. I do think there's gonna be pressure on some jobs that were previously good jobs; that's automation, and that's a natural part of economic development. And I do worry that we may not be creating enough new tasks. The offset, and what we were good at in the early to mid 20th century, was creating a lot of new tasks.

So that people had jobs, were employed and we could absorb all the people who moved out of agriculture, all the jobs that were eliminated when Henry Ford automated car production. We didn't lose jobs in the car business, we gained jobs in the peak phase of that transition, but it takes some time.

Electricity was adopted over 30 or 40 years, electrification of factories and production. AI seems to be coming at us a lot faster than that, so I think we need to step up our game in terms of response time.

>> Jon Hartley: And I guess in this adjustment, at some point some people will leave the workforce, but maybe some of those people will be retrained to manage the robots or manage the AI, is, I guess, the hope there.

I guess, to pivot a little bit to the political economy discussion around AI. I'm curious, what do you think about when we talk about AI in the context of totalitarian countries like China, where they're using AI for, I guess, suppression of their people? There's a sort of idea that was promoted by Milton Friedman that's partially related to the work that Daron and yourself have done.

Milton Friedman's argument, this is like the first chapter of Capitalism and Freedom, and the idea is that growth would cause democracy, or economic freedom would cause political freedom. Daron and his co-authors, in the sort of institutional framework, I think, have also made some arguments that democracy also causes economic growth.

This thing with China, I think, hasn't quite played out. It's still not a politically free society, but it has experienced quite a bit of growth in recent decades that may be fading now. But I think something that Friedman acknowledged later in life was that what he had argued in Capitalism and Freedom was not playing out with China.

That becoming a much more prosperous society did not ultimately cause it to become democratic, and to this day it's not democratic. How do you think things like AI are disrupting these traditional processes of growth and democracy as something that we would get together in some sort of endogenous relationship?

How do you see that impacting that symbiotic relationship?

>> Simon Johnson: Well, I think there's been a problem there for a while. I remember, for some reason, there's a vivid image of Milton Friedman in Hong Kong, I think it was his PBS special, talking about capitalism and freedom exactly this way.

I remember thinking, well, that's good, all we need to do is get growth, then we become more like Hong Kong. I first went to Hong Kong probably in the early 1990s and was quite impressed, but I have not been recently. And all the stories from Hong Kong, including right now, are quite discouraging in terms of the way in which freedom, in any Friedman sense or any sense, has been suppressed, and it's become an oppressive place.

So I think that's a wake-up call. Now, is AI a technology for liberation, a technology for self expression, or is it a technology for surveillance and suppression? And the answer is yes, it's both, right? And it depends on how you use it. So I think that the new split in the world is going to be between countries that are more like us, and are gonna put, I think we'll have to put, a lot of guardrails around surveillance, for example, and that's a good thing.

And then there'll be other countries that put no guardrails around surveillance with regard to state surveillance, certainly. And they'll be following China, and they will be following a certain line of technology. And if you think about, being authoritarian is obviously a set of policies, and it's implemented by a technology.

So how much does it cost you to be an authoritarian or run an authoritarian regime? Well, I think the AI, as developed by the Chinese, is gonna make it a lot cheaper to be authoritarian. So if you think in terms of an incentive framework, we're gonna have more authoritarians, and they're gonna last for longer.

Facilitated by the same technology that I hope, I believe we will use to strengthen freedom and true liberty in the United States.

>> Jon Hartley: One of the interesting things, I think, about this book is that it really talks about technology in a sort of more political sense. And it's not usually a frame in which we, as economists or casual readers, typically think about technology.

It seems to me like it's only been recently where we think about the politics of big tech and censorship, and obviously, that's a very big topic right now. But traditionally, it's usually been, great, we have iPhones now or we have computers now, this is great, we can do things faster.

Traditionally, at least in my own lifetime, technology had a relatively positive sort of connotation to it. I'm just curious, what are your favorite examples from your book about this sort of struggle, where elites have used technology in some way that may not have been to promote broad based prosperity as much as it could have otherwise?

I'm curious, or what are some of the examples in the book that you sort of outlined?

>> Simon Johnson: Well, I think the most awful example is the cotton gin, right? Which was developed right at the end of the 18th century, right after American independence. It made it easier to process upland cotton, it meant you could run cotton plantations away from the east coast, across the Deep South.

And what happened, of course, was enslaved people were moved from a very harsh life on the east coast into much worse conditions across the Deep South. And that became the mainstay of the slave economy, and that lasted and remained with extremely harsh conditions until the Civil War.

And of course, that was also facilitated and supported or driven by industrialization in Britain, which was about cotton and cotton textiles. And they were buying a lot of the raw cotton from the American South. So I think that it's actually rather pervasive throughout history that some people gain and other people lose from the deployment of technology.

Much more unusual, and the sort of holy grail we're looking for here, is the sort of Henry Ford experience, where Henry Ford automates car production, brings electricity to the factory. He replaces a lot of workers, but he also generates a vast number of new tasks. And there's a class of people, call them, we've adopted the term, manager engineers.

White collar workers who emerged to plan this, to organize it, to run all the infrastructure around car production. And that's a lot of people, and that is a lot of money that's made, and very high wages that are paid through sharing. I don't think Henry Ford was that altruistic, I think he was quite paternalistic.

But he faced some countervailing power, including trade unions, and he wanted to preempt that with high wages. And he also had to, at some point, negotiate with the unions and have collective bargaining. But that was in a context where the company did well, his family did well, the whole auto industry did well.

So finding that kind of win-win approach, win-win solution, we did a lot of that before, during, and after World War II. After 1980, it's become much more unusual.

>> Jon Hartley: I mean, it's fascinating, I guess, in the past few decades that we've had these productivity gains, but we haven't quite had real wage gains to sort of follow that.

Do you have a favorite explanation of why that's not following as much as has historically been the case, where real wage gains generally follow productivity gains? There's a lot of, I feel like, different narratives that are told about why this is the case.

>> Simon Johnson: Right, so real wages have obviously risen for some people, more skilled people.

It's for the less skilled, less educated people that they haven't risen. I think the main contenders are automation and globalization. And Daron's work with Pascual Restrepo says that automation is 60, 70% of the explanation. I think even if you lowered it a little bit and said, well, I think globalization, the China shock, is a bigger part of it.

You have to remember that there's a technology dimension to globalization, including communication, including computers being applied to trade. And so if you look at this, I would say the direct effect of technology change and automation. And the indirect effect through globalization and the shifts in where you put certain kinds of jobs, including manufacturing jobs.

I think you're looking at 70 to 80% being attributable one way or another to the way in which technology has changed. I do think that attitudes among management have also changed. I've looked at a lot of the literature on what management was writing, what executives were saying, what people who led industrial organizations were saying in the 1920s.

They were nowhere near as confrontational and antagonistic to workers as more recently. They were much more patting themselves on the back and saying, hey, capitalism is a win-win. In fact, Herbert Hoover is a perfect example of exactly this. And that's why the Republicans were so strong in control of the presidency in the 1920s in the US, because they were seen as the men.

They were men of business, right, and that everybody could gain from that. So there was a shared prosperity moment. Okay, a particular rock was hit in terms of the Great Depression, but there was also recovery from that. And Eisenhower is another Republican who was regarded as a practical man, okay?

He was a military general, but he was also pro-business, and the business of America was doing business.

>> Jon Hartley: It's also interesting to think about the 1920s, even though we think of it as a time of prosperity. I mean, it's also the time when zoning laws became legal in the US, and the whole immigration system was sort of created as we know it today.

Sort of two of the largest barriers to entry were created during that time. It's a very interesting time period in general, things that probably didn't have great effects till much, much later on. But it's interesting, too, on points of disruption. It's also, I think, a regional thing as well, where automation certainly impacts the automobile industry, which is very much rooted in the Midwest.

Whereas the China shock, it impacts textiles, which very much impacts the Appalachians. So two very different geographical areas with two different regional labor markets being disrupted in different ways. Do you have any thoughts on the Luddites? This whole term, it's sort of a famous term, is used in a derogatory sense toward folks that maybe spout left leaning tendencies.

But it's very relevant, and probably one of the historical examples I would think of first. It's this famous story, and it may be apocryphal, I'm not totally sure, but it's this idea that people were destroying these textile machines, I think, in the 1800s. Do you think that's kind of a relevant example here? And I'm not sure if it's an apocryphal story or not.

I think there's a famous person associated with that as well, I forget the name of the person.

>> Simon Johnson: Yeah, well, Lord Byron gave some great speeches about it, and that's in our book. Yeah, the Luddites were not apocryphal. Ned Ludd, whether there was somebody called Ned Ludd, that is questionable-.

 

>> Jon Hartley: Definitely.

>> Simon Johnson: Perhaps not, but look at that. So the Luddites were, depending on how you read the history, either skilled workers who were threatened by the arrival of textile factories and automated looms, so they were losing the weaving jobs that they had before. Or, I would actually say, they were independent entrepreneurs, because by any standard, like the IRS's definition today, they had their own equipment, they worked at home, they controlled their own hours, right, they had a lot of autonomy.

And actually, either they weren't being offered jobs in the factory, or if they had to come into the factory, they had to take on relatively routine tasks, relatively unskilled tasks. Which were supervising the machinery that took care of what they'd previously done with skill in an artisanal fashion.

So I think that if you see them as independent entrepreneurs who were threatened by the growth of big business, people could be a little more sympathetic. But I think, honestly, also, trying to prevent automation and prevent the elimination of jobs by machines is tilting at windmills. However, it doesn't mean that's the only thing that can happen or should happen.

And in some phases of industrial development, including when the railways came to Britain, about the same time, or slightly after the big Luddite moment, that created a lot of new tasks, a lot of new jobs. And a lot of jobs in which the railways wanted to pay premium wages to their workers, because they wanted them to take care of safety on the railways.

Actually, that was an absolutely key issue, to be very responsible in a relatively independent way. So I think technology can be and has been developed in ways that generate new tasks, a lot of new tasks, at the same time as it automates existing work. After 1980, we had less task creation than we really needed.

The demand for relatively unskilled labor has been weak, and we haven't been able to turn that around. And there is a concern that AI will worsen that problem, or maybe layer on some other versions of that problem now, and I don't think it has to be that way. I think we could push, and the industry could certainly push, to develop algorithms and approaches that would complement human ability.

So don't replace humans, make them more productive, augment their capabilities. And we would say, to the extent that raises marginal worker productivity, pay them a higher wage.

>> Jon Hartley: I wanna get back to this sort of broad topic of growth and institutions. Obviously, it's a highly related political economy topic here.

Can you explain which institutions matter? For me, I've always been told that government policy and politics matter for economic growth. However we wanna describe that, you call it institutions. But I think the key question has always been, which institutions or policies matter? And I think this really summarizes a lot of your work, as well as a lot of Daron's work.

Can you explain what your taxonomy of institutions is and which ones are most important?

>> Simon Johnson: Well, I think a central institution on the economic side is property rights, and that has to be supported by political rights. And if you go back to the situation in Hong Kong, for example, which had very strong property rights, limited political rights under the British, but they were definitely there.

Those political rights have been stripped away, and I think the property rights are under great pressure as a result. But I would also say that, if you think about big technological transformations and think about what happens in those transformations, there's often a phase in which things that were regarded as common property and shared, or worked in a common way, and maybe worked quite productively, like common land in medieval Britain.

There is an episode or part of the modernization of Britain in which that land is turned into private property. The communal rights are stripped out and they're not well protected. The private property is then accumulated, becomes a big source of wealth, and that wealth is used politically to take more common land into private property.

I think the same thing is happening now with data. There was a lot of data we put on the Internet, a lot of data we allowed other people to look at and to use through social media, for example. Those data are now being used to train algorithms in ways for which we are not being compensated.

They didn't ask for permission. You may not like, and I would predict that you won't like, some of the surveillance outcomes that are gonna come from pictures that you and others posted on Facebook. And that, I think, is exactly analogous to that taking of common property without permission, or forcing it through, in the enclosure movement in Britain before the industrial revolution.

And I think that recognizing and respecting property rights on the Internet, including the rights to the data that you've created and the images that you've put there, that's actually really important. And if we could get some recognition of that, and we're already behind the curve. There isn't sufficient legislation, for example, on that, and there won't be any time soon.

But if we could address that, I think that would also help us a lot with regards to what we can ask from tech companies, what we can expect from them, and maybe some of the political consequences also.

>> Jon Hartley: So this is like the Coase theorem for data. I guess the idea is that we should be compensated at some level for the data that these companies are selling to some degree?

 

>> Simon Johnson: Yes, I mean, I don't think the amount of compensation, I don't think it's worth that much, I don't think it's gonna enable you to retire. But I do think recognizing that it's your property and you should have a say in how it's used, and you should be able to say, no, this is my data.

And you're not allowed to use it for workplace surveillance that I don't believe is good for me or for other people around me. And you can only do that if those property rights are properly defined and protected. We won't be able to protect them individually cuz we're too small relative to big companies.

But if we could form data unions on whatever basis, that then becomes an interesting conversation.

>> Jon Hartley: So you think that's really scope for, like, a regulatory authority, I guess something like the FTC or a body like that in the US, at least, that could, I guess, impose some sort of changes on big tech companies?

 

>> Simon Johnson: Well, you would certainly need legislation to assert that there are such things as ownership over data and to put those restrictions on it. Would it then need to run through a regulatory body? Would you need to grant that regulatory power to someone? Yes, probably, although I think I would prefer a body that was very focused on protecting consumers and didn't have multiple tasks.

I think when you give these regulators multiple mandates, you say, worry about market structure, worry about conduct and other things, and also worry about data protection, that is not necessarily gonna be their top priority.

>> Jon Hartley: That's fascinating, it's so amazing, I think about just all these huge questions around technology, political economy, growth, institutions.

This has been an amazing discussion. Thank you so much for joining us today, Simon.

>> Simon Johnson: Thank you.

>> Jon Hartley: Today, our guest was Simon Johnson, who is the Ronald Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management and was previously chief economist at the IMF. He just co-authored a newly published book with his MIT colleague Daron Acemoglu, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

I highly recommend you check it out at your local bookstore. This is the Capitalism and Freedom in the 21st century podcast, where we talk about economics, markets and public policy. I'm Jon Hartley, your host, thanks so much for joining us.

 


The views and opinions expressed on this podcast are those of the authors and were produced prior to joining the Hoover Institution. They do not necessarily reflect the opinions of the Hoover Institution or Stanford University.
