
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.


The following is a conversation with Erik Brynjolfsson.
He's an economics professor at Stanford and the director of Stanford's Digital Economy Lab.
Previously, he was a long, long-time professor at MIT where he did groundbreaking work on the
economics of information. He's the author of many books, including The Second Machine Age
and Machine, Platform, Crowd, co-authored with Andrew McAfee.
Quick mention of each sponsor, followed by some thoughts related to the episode.
Vincero Watches, the maker of classy, well-performing watches.
FourSigmatic, the maker of delicious mushroom coffee.
ExpressVPN, the VPN I've used for many years to protect my privacy on the internet,
and Cash App, the app I use to send money to friends.
Please check out these sponsors in the description to get a discount and to support this podcast.
As a side note, let me say that the impact of artificial intelligence and automation
on our economy and our world is something worth thinking deeply about.
Like with many topics that are linked to predicting the future evolution of technology,
it is often too easy to fall into one of two camps, the fear-mongering camp
or the technological utopianism camp.
As always, the future will land us somewhere in between.
I prefer to wear two hats in these discussions and alternate between them often.
The hat of a pragmatic engineer and the hat of a futurist.
This is probably a good time to mention Andrew Yang,
the presidential candidate who has been one of the high-profile thinkers on this topic,
and I'm sure I will speak with him on this podcast eventually.
A conversation with Andrew has been on the table many times.
Our schedules just haven't aligned, especially because I have a strongly held preference
for long form, two, three, four hours or more, and in person.
I work hard to not compromise on this.
Trust me, it's not easy.
Even more so in the times of COVID, which requires getting tested non-stop,
staying isolated and doing a lot of costly and uncomfortable things that minimize risk for the
guest. The reason I do this is because, to me, something is lost in remote conversation.
That something, that magic, I think is worth the effort, even if it ultimately leads to a
failed conversation. This is how I approach life, treasuring the possibility of a rare
moment of magic. I'm willing to go to the ends of the world for just such a moment.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts,
follow on Spotify, support on Patreon, connect with me on Twitter at Lex Fridman.
And now, here's my conversation with Erik Brynjolfsson.
You posted a quote on Twitter by Albert Bartlett saying that the greatest
shortcoming of the human race is our inability to understand the exponential function.
Why would you say the exponential growth is important to understand?
Yeah, that quote, I remember posting that. It's actually a reprise of something Andy McAfee
and I said in The Second Machine Age, but I posted it in early March when COVID was really just
beginning to take off and I was really scared. There were actually only a couple dozen cases,
maybe less at that time, but they were doubling every like two or three days and I could see,
oh my God, this is going to be a catastrophe and it's going to happen soon. But nobody was taking
it very seriously or not a lot of people were taking it very seriously. In fact, I remember,
I did my last in-person conference that week. I was flying back from Las Vegas and I was the
only person on the plane wearing a mask and the flight attendant came over to me. She was very
concerned and she kind of put her hands on my shoulder. She was touching me all over,
which I wasn't thrilled about. And she goes, do you have some kind of anxiety disorder? Are you
okay? And I was like, no, it's because of COVID. This is early March. Early March. But I was worried
because I knew I could see, or I suspected, I guess, that that doubling would continue and it did
and pretty soon we had thousands of times more cases. Most of the time when I use that quote,
it's motivated by more optimistic things like Moore's law and the wonders of having more computer
power. But in either case, it can be very counterintuitive. I mean, if you walk for 10 minutes,
you get about 10 times as far away as if you walk for one minute. That's the way our physical
world works. That's the way our brains are wired. But if something doubles for 10 times as long,
you don't get 10 times as much, you get 1,000 times as much. After 20 doublings, it's a million.
After 30, it's a billion. And pretty soon after that, it just gets to numbers that you can barely grasp.
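To make the arithmetic concrete, here's a minimal Python sketch of that contrast between linear and exponential growth (the numbers just restate the example above):

```python
# Walking: 10x the time gets you 10x the distance (linear).
# Doubling: 10x the doublings gets you ~1,000x as much (exponential).
for doublings in (10, 20, 30):
    print(f"{doublings} doublings -> {2 ** doublings:,}x")
# 10 doublings -> 1,024x (about a thousand)
# 20 doublings -> 1,048,576x (about a million)
# 30 doublings -> 1,073,741,824x (about a billion)
```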
Our world is becoming more and more exponential, mainly because
of digital technologies. So more and more often, our intuitions are out of whack. And that can be
good in the case of things creating wonders, but it can be dangerous in the case of viruses and
other things. Do you think it generally applies? Are there spaces where it does apply and where it
doesn't? How are we supposed to build an intuition about which aspects of our society exponential
growth applies to? Well, you can learn the math, but the truth is our brains, I think, tend to
learn more from experiences. So we just start seeing it more and more often. So hanging around
Silicon Valley, hanging around AI and computer researchers, I see this kind of exponential
growth a lot more frequently. And I'm getting used to it, but I still make mistakes. I still
underestimate some of the progress in just talking to someone about GPT-3 and how rapidly
natural language has improved. But I think that as the world becomes more exponential, we'll all
start experiencing it more frequently. The danger is that we may make some mistakes in the meantime
using our old caveman intuitions about how the world works. Well, the weird thing is it always
kind of looks linear in the moment. It's hard to feel, it's hard to introspect and really acknowledge
how much has changed in just a couple of years or five years or 10 years with the internet. If we
just look at investments of AI or even just social media, all the various technologies that go into
the digital umbrella, it feels pretty calm and normal and gradual. I think there are parts of
the world, in fact most of the world, that are not exponential. The way humans learn, the way organizations
change, the way our whole institutions adapt and evolve, those don't improve at exponential
paces. And that leads to a mismatch oftentimes between these exponentially improving technologies
or let's say changing technologies because some of them are exponentially more dangerous and our
intuitions and our human skills and our institutions that just don't change very fast at all.
And that mismatch I think is at the root of a lot of the problems in our society, the growing
inequality and other dysfunctions in our political and economic systems.
So one guy that talks about exponential functions a lot is Elon Musk. He seems to internalize this
kind of way of exponential thinking. He calls it first principles thinking, sort of the kind of
going to the basics, asking the question like what were the assumptions of the past? How can
we throw them out the window? How can we do this 10x more efficiently and constantly
practice that process? And also using that kind of thinking to create deadlines
and estimate when you'll be able to deliver on some of these technologies. Now,
it often gets him in trouble because he overestimates: he doesn't meet the initial estimates of
the deadlines, but he seems to deliver late but deliver, which is kind of interesting. Like,
what are your thoughts about this whole thing? I think we can all learn from Elon. I think going
to first principles, I talked about two ways of getting more of a grip on the exponential function
and one of them just comes from first principles. If you understand the math of it, you can see
what's going to happen, and even if it seems counterintuitive that a couple dozen COVID
cases can become thousands or tens or hundreds of thousands of them in a month, it makes sense
once you just do the math.
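As a rough sketch of that math (the starting count and doubling time here are illustrative, in the ballpark of what Erik describes):

```python
# ~24 cases doubling every 2.5 days: a month later, roughly 100,000.
cases0, doubling_time_days = 24, 2.5
for day in (0, 10, 20, 30):
    print(day, round(cases0 * 2 ** (day / doubling_time_days)))
# day 0: 24 | day 10: 384 | day 20: 6,144 | day 30: 98,304
```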
And I think Elon tries to do that a lot. In fairness, I think he also benefits from hanging out in Silicon Valley, and he's experienced it in a lot of different applications,
so it's not as much of a shock to him anymore but that's something we can all learn from.
In my own life, I remember one of my first experiences really seeing it was when I was a grad
student and my advisor asked me to plot the growth of computer power in the U.S. economy
in different industries, and there were all these exponentially growing curves, and I was like,
holy shit, look at this. In each industry it was just taking off, and you don't have to be a
rocket scientist to extend that and say, wow (this was in the late 80s and early 90s),
if it goes anything like that, we're going to have orders of magnitude more computer power
than we did at that time, and of course we do. So when people look at Moore's law,
they often talk about it as if the exponential function is actually a stack of S curves.
So basically you milk, or take the most advantage of, a particular little revolution,
and then you search for another revolution, and it's basically revolutions stacked on top of revolutions.
Do you have any intuition about how humans keep finding ways to revolutionize things?
Well, first let me just unpack that first point that I talked about exponential curves but no
exponential curve continues forever. It's been said that anything that can't go on forever
eventually will stop. That's very profound. It's very profound, but it seems that a lot of
people don't appreciate that half of it as well either and that's why all exponential functions
eventually turn into some kind of S curve or stop in some other way maybe catastrophically
and that's happened with COVID as well. I mean it went up and then at some point
it starts saturating the pool of people to be infected. There's a standard epidemiological
model that's based on that and it's beginning to happen with Moore's law or different generations
of computer power. It happens with all exponential curves. The remarkable thing, as you alluded to in
the second part of your question is that we've been able to come up with a new S curve on top of
the previous one and do that generation after generation with new materials, new processes
and just extend it further and further. I don't think anyone has a really good theory about why
we've been so successful in doing that. It's great that we have been and I hope it continues for
some time but one beginning of a theory is that there's huge incentives when other parts of the
system are going on that clock speed of doubling every two to three years. If there's one component
of it that's not keeping up then the economic incentives become really large to improve that
one part. It becomes a bottleneck and anyone who can do improvements in that part can reap huge
returns so that the resources automatically get focused on whatever part of the system isn't keeping up.
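A minimal sketch of that saturation (all parameters invented for illustration): logistic growth looks exponential at first, then flattens into an S curve as it approaches its ceiling.

```python
import math

def logistic(t, ceiling=1_000_000, rate=0.5, start=100):
    # S curve: exponential early on, then saturating toward the ceiling.
    return ceiling / (1 + ((ceiling - start) / start) * math.exp(-rate * t))

for t in (0, 10, 20, 30, 40):
    print(t, round(logistic(t)))
# t=0: 100, t=10: ~15k (still looks exponential),
# t=20: ~690k, t=30: ~997k, t=40: ~1M (saturated)
```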
Do you think some version of Moore's law will continue? Some version, yes. I mean,
one version that has become more important is something I call Koomey's law, which is named after
Jonathan Koomey, who I should mention was also my college roommate, but he identified the fact that energy
consumption per computation has been declining by a factor of two, and for most of us that's more important.
The new iPhones came out today as we're recording this. I'm not sure when you're going to release this.
Very soon after this. For most of us, having the iPhone be twice as fast is nice, but having
the battery life longer, that would be much more valuable. And the fact that a lot of the progress
in chips now is reducing energy consumption is probably more important for many applications
than just the raw speed. Other dimensions of Moore's law are in AI and machine learning.
Those tend to be very parallelizable functions, especially deep neural nets.
Instead of having one chip, you can have multiple chips or you can have a GPU, a graphic processing
unit that goes faster and now special chips designed for machine learning like tensor
processing units. Each time you switch, there's another 10x or 100x improvement
above and beyond Moore's law. I think that the raw silicon isn't improving as much as it used to
but these other dimensions are becoming more important and we're seeing progress in them.
I don't know if you've seen the work by OpenAI where they show the
exponential improvement of the training of neural networks just literally in the techniques used
so that's almost like the algorithm itself. It's fascinating to think, like, can we actually
continue figuring out more and more tricks on how to train networks faster and faster?
Well, the progress has been staggering. If you look at image recognition, as you mentioned,
I think it's a function of at least three things that are coming together. One, we just talked about
faster chips, not just Moore's law but GPUs, TPUs and other technologies.
The second is just a lot more data. I mean, we are awash in digital data today in a way we
weren't 20 years ago. Photography, I'm old enough to remember, used to be chemical and now everything
is digital. I took probably 50 digital photos yesterday. I wouldn't have done that if it was
chemical and we have the internet of things and all sorts of other types of data. When we walk
around with our phone, it's just broadcasting a huge amount of digital data that can be used as
training sets. Then last but not least, as you mentioned, at OpenAI there have been
significant improvements in the techniques. The core idea of deep neural nets has been around
for a few decades but the advances in making it work more efficiently have also improved
a couple of orders of magnitude or more. You multiply together 100-fold improvement in
computer power, 100-fold improvement in data, 100-fold improvement in techniques of software
and algorithms, and soon you're getting into a million-fold improvements.
Somebody brought up this idea with GPT-3 that it's trained in a self-supervised way on
basically internet data. I've seen arguments made that seem to be pretty convincing that
the bottleneck there is going to be how much data there is on the internet, which is a fascinating
idea that it literally will just run out of human-generated data to train on.
Like, we may get to the point where it's consumed basically all of human knowledge, or all digitized
human knowledge. That would be the bottleneck. The interesting thing with bottlenecks is people
often use bottlenecks as a way to argue against exponential growth, saying, well, there's no way you
can overcome this bottleneck, but we seem to somehow keep coming up with new ways to overcome
whatever bottlenecks the critics come up with. It's just fascinating. I don't know how you
overcome the data bottleneck but probably more efficient training algorithms.
Yeah. Well, you already mentioned that, that these training algorithms are getting much better at
using smaller amounts of data. We also are just capturing a lot more data than we used to,
especially in China, but all around us. Those are both important. In some applications,
you can simulate the data, video games, some of the self-driving car systems are
simulating driving. Of course, that has some risks and weaknesses, but if you want to exhaust
all the different ways you could beat a video game, you could just simulate all the options.
Can we take a step in that direction of autonomous vehicles? I'm actually talking
to the CTO of Waymo tomorrow. Obviously, I'm talking to Elon again in a couple of weeks.
What are your thoughts on autonomous vehicles? Where do we stand on a problem that has the
potential of revolutionizing the world? Well, I'm really excited about that, but it's become
much clearer that the original way that I thought about it, most people thought about it, like,
will we have a self-driving car or not, is way too simple. The better way to think about it is that
there's a whole continuum of how much driving and assisting the car can do. I noticed that you're
right next door to Toyota Research Institute. That's a total accident. I love the TRI folks,
but yeah. Have you talked to Gill Pratt? Yeah, we're supposed to talk. It's kind of hilarious.
So I think it's a good counterpart to what Elon is doing, and hopefully,
they can be frank in what they think about each other because I've heard both of them talk about
it. This is an assistive approach, a guardian angel that watches over you, as opposed to trying to do everything.
I think there are some things like driving on a highway from LA to Phoenix, where it's mostly
good weather, straight roads. That's close to a solved problem. Let's face it. In other situations,
driving through the snow in Boston where the roads are kind of crazy, and most importantly,
you have to make a lot of judgments about what the other driver is going to do at these intersections
that aren't really right angles and aren't very well described. It's more like game theory.
That's a much harder problem and requires understanding human motivations.
So there's a continuum there of some places where the cars will work very well and others
where it could probably take decades. What do you think about Waymo? So as you mentioned, of the two
companies that actually have cars on the road, there's the Waymo approach, which is more like:
we're not going to release anything until it's perfect, and we're going to be very strict about
the streets that we travel on, but it better be perfect. Well, I'm smart enough to be humble
and not try to get in between them. I know there are very bright people on both sides of the
argument. I've talked to them and they make convincing arguments to me about how careful
they need to be and the social acceptance. Some people thought that when the first few people
died from self-driving cars, it would shut down the industry, but it was more of a blip actually.
And so that was interesting. Of course, there's still a concern that there could be setbacks
if we do this wrong. Your listeners may be familiar with the different levels of self-driving,
level one, two, three, four, five. I think Andrew Ng has convinced me that this idea of really focusing
on level four, where you only go in areas that are well-mapped rather than just going out in the
wild, is the way things are going to evolve. But you can just keep expanding those areas where
you've mapped things really well, where you really understand them, and eventually they all become kind of
interconnected. And that could be another way of progressing to make it more
feasible over time. I mean, that's kind of like the Waymo approach, which is, they just now released,
I think just a day or two ago, to the public in Phoenix,
Arizona: anyone can get a ride in a Waymo car with no person, no driver.
Oh, they've taken away the safety driver? Oh, yeah. For a while now, there's been no safety
driver. Okay. Because I mean, I haven't been following that one in particular, but I thought it was kind
of funny about a year ago when they had the safety driver, and then they added a second safety driver
because the first safety driver would fall asleep. I'm not sure they're going in the right direction
with that. No, Waymo in particular has done a really good job of that. They actually have a
very interesting infrastructure of remote observation. So they're not controlling the
vehicles remotely, but they're able to, it's like customer service: they can at any time tune
into the car, I bet they can probably remotely control it as well, but that's officially not
the function that they... Yeah, I can see that. Because I think the thing that's
proven harder than maybe some of the early people expected was that there's a long tail of
weird exceptions. So you can deal with 90, 99, 99.99% of the cases, but then there's something
that's just never been seen before in the training data. And humans more or less can work around
that. Although, let me be clear and note, there are about 30,000 human fatalities a year just in the
United States and maybe a million worldwide. So they're far from perfect. But I think people
have higher expectations of machines. They wouldn't tolerate that level of death and
damage from a machine. And so we have to do a lot better at dealing with those edge cases.
And also, the tricky thing is, if I have a criticism of the Waymo folks, it's that there's such a
huge focus on safety that people don't talk enough about creating products that customers love,
human beings love using. It's very easy to create a thing that's safe at the extremes,
but then nobody wants to get into it. Yeah. Well, back to Elon. I think part of his genius was
with the electric cars. Before he came along, electric cars were all
kind of underpowered, really light. They were sort of wimpy cars that weren't fun. And the
first thing he did was he made a Roadster that went zero to 60 faster than just about any other
car, going to the other extreme. And I think that was a really wise marketing move as well as a wise
technology move. Yeah. It's difficult to figure out what the right marketing move is for AI systems.
That's always been... I think it requires guts and risk taking, which is what Elon practices,
I mean, to the chagrin of perhaps investors or whatever.
It requires guts and risk taking. It also requires rethinking what you're doing.
I think way too many people are unimaginative, intellectually lazy. And when they take AI,
they basically say, what are we doing now? How can we make a machine do the same thing?
Maybe we'll save some costs, we'll have less labor. And yeah, it's not necessarily the worst
thing in the world to do, but it's really not leading to a quantum change in the way you do
things. When Jeff Bezos said, hey, we're going to use the internet to change how bookstores work
and we're going to use technology, he didn't go and say, okay, let's put a robot cashier
where the human cashier is and leave everything else alone. That would have been a very lame way
to automate a bookstore. He went from soup to nuts and said, let's just rethink it,
we get rid of the physical bookstore, we have a warehouse, we have delivery, we have people
order on a screen, and everything was reinvented. And that's been the story of these general purpose
technologies all through history. In my books, I write about electricity and how for 30 years,
there was almost no productivity gain from the electrification of factories a century ago.
And that's not because electricity is a wimpy, useless technology, we all know how awesome
electricity is. It's because at first, they really didn't rethink the factories. It was
only after they reinvented them and we describe how in the book, then you suddenly got a doubling
and tripling of productivity growth. But it's the combination of the technology with the new
business models, new business organization, that just takes a long time and takes more
creativity than most people have. Can you maybe linger on electricity because that's a fun one?
Yeah, well, I'll tell you what happened. Before electricity, there were basically steam engines
or sometimes water wheels. And to power the machinery, you had to have pulleys and crankshafts.
And you really can't make them too long because they'll break from the torsion. So all the equipment
was kind of clustered around this one giant steam engine. You can't make small steam engines either
because of thermodynamics. So you have one giant steam engine, all the equipment clustered around
it, multi-story, they have it vertical to minimize the distance as well as horizontal.
And then when they did electricity, they took out the steam engine, they got the biggest electric
motor they could buy from General Electric or someone like that. And nothing much else changed.
Yeah. It took until a generation of managers retired or died, 30 years later, before people
started thinking, wait, we don't have to do it that way. You can make electric motors big,
small, medium, you can put one with each piece of equipment. There's this big debate if you read
the management literature between what they call group drive versus unit drive, where every machine
would have its own motor. Well, once they did that, once they went to unit drive, those guys won
the debate, then you started having a new kind of factory, which is sometimes spread out over acres,
single story, and each piece of equipment had its own motor. And most importantly,
they weren't laid out based on who needed the most power. They were laid out based on what is the
workflow of materials, assembly line, let's have it go from this machine to that machine to that
machine. Once they rethought the factory that way, the increases in productivity were just staggering.
People like Paul David have documented this in their research papers. And I think that there's
a lesson you see over and over. It happened when the steam engine changed manual production.
It's happened with the computerization. People like Michael Hammer said, don't automate, obliterate.
In each case, the big gains only came once smart entrepreneurs and managers basically
reinvented their industries. One other interesting point about all that is that during that
reinvention period, you often actually not only don't see productivity growth, you can
actually see a slipping back: measured productivity actually falls. I just wrote a paper with Chad
Syverson and Daniel Rock called The Productivity J-Curve, which basically shows that in a lot of
these cases, you have a downward dip before it goes up. And that downward dip is when everyone's
trying to like reinvent things. And you could say that they're creating knowledge and intangible
assets, but that doesn't show up on anyone's balance sheet. It doesn't show up in GDP.
So it's as if they're doing nothing. Like, take self-driving cars, which we were just talking about.
There have been hundreds of billions of dollars spent developing self-driving cars,
and basically no chauffeur has lost his job, no taxi driver. I've got to check on that one. Yeah,
so there's a bunch of spending and no real consumer benefit. Now, they're doing that in the belief,
I think the justified belief, that they will get the upward part of the J curve and there will be
some big returns. But in the short run, you're not seeing it. That's happening with a lot of other
AI technologies, just as it happened with earlier general purpose technologies. And it's one of
the reasons we're having relatively low productivity growth lately. You know, as an economist,
one of the things that disappoints me is that as eye-popping as these technologies are, and you and I are
both excited about some of the things they can do, the economic productivity statistics are kind
of dismal. We actually, believe it or not, have had lower productivity growth in the past
15 years or so than we did in the previous 15 years, in the 90s and early 2000s. And so that's not what
you would have expected if these technologies were that much better. But I think we're in kind of a
long J curve there. Personally, I'm optimistic we'll start seeing the upward tick, maybe as soon
as next year. But the past decade has been a bit disappointing if you thought there's a one to one
relationship between cool technology and higher productivity. What would you place your biggest
hope for productivity increases on? Like you kind of said, at a high level, AI. But if I were to think
about what has been so revolutionary in the last 10 or 15 years, thinking about
the internet, I would say, and hopefully I'm not saying anything ridiculous, everything from
Wikipedia to Twitter. So these kinds of websites, not so much AI, but I would expect to see some
kind of big productivity increases from just the connectivity between people and the access to
more information. Yeah. Well, that's another area I've done quite a bit of research on,
actually, is these free goods like Wikipedia, Facebook, Twitter, Zoom. We're actually doing
this in person, but almost everything else I do these days is online. The interesting thing about
all those is most of them have a price of zero. What do you pay for Wikipedia? Maybe a little
bit for the electrons to come to your house? Basically zero, right? Let me take a small pause and say
that I donate to Wikipedia often; you should too. Good for you. Yeah. But what does that mean
for GDP? GDP is based on the price and quantity of all the goods bought and sold. If
something has zero price, how much does it contribute to GDP? To a first approximation, zero. So these
digital goods that we're getting more and more of, we're spending more and more hours a day
consuming stuff off the screens, little screens, big screens, that doesn't get priced into GDP.
It's like they don't exist. That doesn't mean they don't create value. I get a lot of value from
watching cat videos and reading Wikipedia articles and listening to podcasts, even if I don't pay for
them. So we've got a mismatch there. Now, in fairness, economists, since Simon Kuznets invented
GDP and productivity, all those statistics, back in the 1930s, he recognized, in fact he said,
this is not a measure of well-being. This is not a measure of welfare. It's a measure of production.
But almost everybody has kind of forgotten that he said that. And they just use it like, how well
off are we? What was GDP last year? It was 2.3% growth or whatever. That measures how much physical
production there is, but it's not the value we're getting. We need a new set of statistics. And I'm working
with some colleagues, Avi Collis and others, to develop something we call GDP-B. GDP-B measures
the benefits you get, not the cost. If you get benefit from Zoom or Wikipedia or Facebook,
then that gets counted in GDP-B, even if you pay zero for it. So back to the original point,
I think there is a lot of gain over the past decade in these digital goods that doesn't show up in
GDP, doesn't show up in productivity. By the way, productivity is just defined as GDP divided
by hours worked. So if you mis-measure GDP, you mis-measure productivity by the exact same amount.
That's something we need to fix. I'm working with the statistical agencies
to come up with a new set of metrics. And over the coming years, I think we'll see,
we're not going to do away with GDP. It's very useful, but we'll see a parallel set of accounts
that measure the benefits. How difficult is it to get that B in the GDP-B?
It's pretty hard. One of the reasons it hasn't been done before is that you can measure,
at the cash register, what people pay for stuff. But how do you measure what they would have paid,
like what the value is? That's a lot harder. How much is Wikipedia worth to you? That's
what we have to answer. And to do that, what we do is we can use online experiments. We do massive
online choice experiments. We ask hundreds of thousands, if not millions, of people to do lots
of sort of A-B tests. How much would I have to pay you to give up Wikipedia for a month?
How much would I have to pay you to stop using your phone? And in some cases, it's hypothetical.
In other cases, we actually enforce it, which is kind of expensive. We pay somebody $30 to stop
using Facebook and we see if they'll do it. And some people will give it up for $10. Some people
won't give it up even if you give them $100. That's awesome. And then you get a whole demand curve.
You get to see what all the different prices are and how much value different people get.
And not surprisingly, different people have different values. We find that women tend to
value Facebook more than men. Old people tend to value it a little bit more than young people.
That was interesting. I think young people maybe know about other networks that I don't know
the name of that are better than Facebook. And so you get to see these patterns, but
every person's individual. And then if you add up all those numbers, you start getting an estimate
of the value.
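Here's a minimal sketch of how such responses could be turned into a demand curve and a benefit estimate (all numbers hypothetical, not from the actual study):

```python
# Hypothetical responses: the monthly payment (in dollars) at which each
# person agreed to give up a free good such as Facebook or Wikipedia.
willingness_to_accept = [5, 10, 10, 25, 40, 60, 100, 150, 300, 500]

# Demand curve: at each offered payment, what share keeps using the good?
for offer in (10, 30, 100, 300):
    share = sum(w > offer for w in willingness_to_accept) / len(willingness_to_accept)
    print(f"offer ${offer}: {share:.0%} keep using it")

# The "B" in GDP-B: total benefit, even though the price (and hence the
# contribution to ordinary GDP) is zero.
print("estimated total benefit: $", sum(willingness_to_accept))
```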
Okay. First of all, that's brilliant. Is this work that will soon be published?
Yeah. Well, there's a version of it in the Proceedings of the National Academy of Sciences
about, I think we call it massive online choice experiments. I should remember the title,
but it's on my website. So yeah, we have some more papers coming out on it, but the first one
is already out. You know, it's kind of a fascinating mystery that Twitter, Facebook,
like all these social networks are free. And it seems like almost none of them, except for YouTube,
have experimented with removing ads for money. Do you understand that, from both
an economics and a product perspective? Yeah, it's something that, you know, I teach a course on
digital business models. I used to do it at MIT; at Stanford I'm not quite sure, I'm not teaching
until next spring, and I'm still thinking about what my course is going to be. But there are a lot of
different business models. And when you have something that's zero marginal cost, there's a
lot of forces, especially if there's any kind of competition that pushes prices down to zero.
But you can have ad supported systems. You can bundle things together. You can have volunteer,
you mentioned Wikipedia. There's donations. And I think economists underestimate the power
of volunteerism and donations, you know, National Public Radio. Actually, how do you,
this podcast, what's the revenue model? There are sponsors at the beginning.
And the funny thing is, I give people timestamps, so if you want to skip the sponsors,
you're free to. But it's funny that a bunch of people, so I read the advertisements,
and a bunch of people enjoy listening to them.
And, well, they may learn something from it. And also, from the advertiser's perspective,
those are people who are actually interested. You know, the example I sometimes
give: I bought a car recently, and all of a sudden, all the car ads were, like, interesting.
Exactly. And then like, now that I have the car, like, I just sort of zone out on, okay,
but that's fine. The car companies, they don't really want to be advertising to me if I'm not
going to buy their product. So there are a lot of these different revenue models. And, you know,
it's a little complicated, but the economic theory has to do with what the shape of the
demand curve is, when it's better to monetize it with charging people versus when you're better
off doing advertising. In short, when the demand curve is relatively flat and wide,
like generic news and things like that, then you tend to do better with advertising. If it's
a good that's only useful to a small number of people, but they're willing to pay a lot,
they have a very high value for it, then advertising isn't going to work as well; you're
better off charging for it. Both of them have some inefficiencies. And then when you get into
targeting and you get these other revenue models, it gets more complicated. But there's some economic
theory on it. I also think, to be frank, there's just a lot of experimentation that's needed,
because sometimes things are a little counterintuitive, especially when you get into
what are called two-sided networks or platform effects, where you may grow the market on one side
and harvest the revenue on the other side. Facebook tries to get more and more users,
and then they harvest the revenue from advertising. So that's another way of kind of thinking about it.
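As a stylized toy model of that pricing logic (all numbers invented; a crude single-price seller, not the full economic theory):

```python
# Flat/wide demand: many users, each willing to pay only a little
# (e.g., generic news). Niche demand: few users with high valuations.
def best_subscription_revenue(n_users, max_wtp, steps=99):
    # Willingness to pay uniform on [0, max_wtp]; at price p, a share
    # (1 - p/max_wtp) of users subscribes. Pick the best single price.
    prices = [max_wtp * k / (steps + 1) for k in range(1, steps + 1)]
    return max(p * n_users * (1 - p / max_wtp) for p in prices)

AD_YIELD = 0.80  # assumed ad revenue per free user per month

for name, n_users, max_wtp in [("flat/wide", 1000, 2.0), ("niche", 20, 100.0)]:
    ads = AD_YIELD * n_users                      # price 0 keeps every user
    subs = best_subscription_revenue(n_users, max_wtp)
    print(f"{name}: ads ${ads:.0f} vs subscriptions ${subs:.0f}")
# flat/wide: ads win ($800 vs ~$500); niche: subscriptions win (~$500 vs $16)
```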
Is it strange to you that they haven't experimented?
Well, they are experimenting. They are doing some experiments about what the willingness
is for people to pay. I think that when they do the math, it's going to work out that they
still are better off with an advertising-driven model. What about a mix? This is what YouTube is,
right? You allow the customer to decide exactly which model they prefer.
No, that can work really well. And newspapers, of course, have known this for a long time,
the Wall Street Journal, the New York Times, they have subscription revenue,
they also have advertising revenue. And that can definitely work.
Online, it's a lot easier to have a dial that's much more personalized, and everybody can kind
of roll their own mix. And I could imagine having a little slider about how much advertising
you want or are willing to take. And if it's done right and it's incentive-compatible,
it could be a win-win where both the content provider and the consumer are better off than
they would have been before. The done-right part is a really good point. With Jeff Bezos and the
single-click purchase on Amazon, the frictionless effort there, if I could just rant for a second
about the Wall Street Journal, all the newspapers you mentioned, I have to click so many times
to subscribe to them that I literally don't subscribe just because of the number of times
I have to click. I'm totally with you. I don't understand why so many companies make it so hard.
Another example is when you buy a new iPhone or a new computer, whatever, I feel like, okay,
I'm going to lose an afternoon just loading up and getting all my stuff back. And for a lot of us,
that's more of a deterrent than the price. And if they could make it painless, we'd give them a
lot more money. So I'm hoping somebody listening is working on making it more painless for us
to buy your products. If we could just linger a little bit on the social network thing because
there's this Netflix social dilemma. Yeah, I saw that. And Tristan Harris and company, yeah.
And people's data, it's really sensitive, and social networks are at the core,
arguably, of many societal tensions and some of the most important things happening in society.
So it feels like it's important to get this right, both from a business model perspective
and just a trust perspective. I mean, I know there's experimentation going on,
but it still feels like everyone is afraid to try different business
models, like really try. Well, I'm worried that people are afraid to try different business models.
I'm also worried that some of the business models may lead them to bad choices. And
Danny Kahneman talks about system one and system two, sort of like our reptilian brain that reacts
quickly to what we see, see something interesting, we click on it, we retweet it, versus our system
two, our frontal cortex that's supposed to be more careful and rational, that really doesn't
make as many decisions as it should. I think there's a tendency for a lot of these social
networks to really exploit system one, our quick instant reaction, make it so we just
click on stuff and pass it on and not really think carefully about it. And in that system,
it tends to be driven by sex, violence, disgust, anger, fear, these relatively primitive kinds
of emotions. Maybe they're important for a lot of purposes, but they're not a great way to organize
a society. And most importantly, when you think about this huge, amazing information infrastructure
we've had that's connected, you know, billions of brains across the globe, not just we can all
access information, but we can all contribute to it and share it. Arguably, the most important
thing that that network should do is favor truth over falsehoods. And the way it's been designed,
not necessarily intentionally, is exactly the opposite. My MIT colleagues Sinan Aral, Deb Roy,
and others did a terrific paper on the cover of Science, and they document what we all
feared, which is that lies spread faster than truth on social networks. They looked at a bunch of
tweets and retweets, and they found that false information was more likely to spread
further, faster to more people. And why was that? It's not because people like lies. It's because
people like things that are shocking, amazing. Can you believe this? Something that is not mundane,
not something everybody else already knew. And what are the most unbelievable things? Well,
lies. And so if you want to find something unbelievable, it's a lot easier to do that if
you're not constrained by the truth. So they found that the emotional valence of false
information was just much higher. It was more likely to be shocking, and therefore more likely to be
spread. Another interesting thing was that that wasn't necessarily driven by the algorithms.
I know that there is some evidence, you know, Zeynep Tufekci and others have pointed out that on
YouTube, some of the algorithms unintentionally were tuned to amplify more extremist content.
But in the study of Twitter that Sinan and Deb and others did, they found that even
if you took out all the bots and all the automated tweets, you still had lies spreading
significantly faster. It's just a problem with ourselves, that we just can't resist passing on
this salacious content. But I also blame the platforms because, you know, there's different
ways you can design a platform. You can design a platform in a way that makes it easy to spread
lies and to retweet and spread things on. Or you can kind of put some friction on that and
try to favor truth. I had dinner with Jimmy Wales once, you know, the guy who helped found
Wikipedia. And he convinced me that, look, you know, you can make some design choices,
whether it's at Facebook, at Twitter, at Wikipedia, or Reddit, whatever. And depending
on how you make those choices, you're more likely or less likely to have false news.
Create a little bit of friction, like you said. Yeah.
It could be friction, or it could be speeding up the truth, you know, either way.
I don't totally understand speeding up the truth, but I love it. Yeah. You know,
amplifying it and giving it more credit. And, you know, like in academia, which is far, far
from perfect. But, you know, when someone has an important discovery, it tends to get more cited
and people kind of look to it more and sort of it tends to get amplified a little bit.
So you could try to do that too. I don't know what the silver bullet is, but the meta point
is that if we spend time thinking about it, we can amplify truth over falsehoods. And I'm
disappointed in the heads of these social networks that they haven't been as successful
or maybe haven't tried as hard to amplify truth. And part of it, going back to what we said earlier,
is, you know, these revenue models may push them more towards growing fast, spreading information
rapidly, getting lots of users, which isn't the same thing as finding truth. Yeah. I mean,
implicit in what you're saying now is a hopeful message that with platforms, we can
take a step towards greater and greater popularity of truth. But the more cynical view
is that what the last few years have revealed is that there's a lot of money to be made in
dismantling even the idea of truth, that nothing is true. And as a thought experiment,
I've been, you know, thinking about whether it's possible that our future will be one where
the idea of truth is something we won't even have. Do you think it's possible,
like in the future, that everything is on the table in terms of truth and we're just swimming
in this kind of digital economy, where ideas are just little toys that are not at all connected to
reality? Yeah, I think that's definitely possible. I'm not a technological determinist. So I don't
think that's inevitable. I don't think it's inevitable that it doesn't happen. I mean,
the thing that I've come away with every time I do these studies, and I emphasize it in my books
and elsewhere is that technology doesn't shape our destiny, we shape our destiny. So just by
us having this conversation, I hope that your audience is going to take it upon themselves
as they design their products, as they use products, and as they manage companies.
How can they make conscious decisions to favor truth over falsehoods, to favor the better kinds of
societies, and not abdicate and say, well, we just build the tools? I think there was a saying,
was it the German scientists, when they were working on the missiles in late World War II?
You know, they said, well, our job is to make the missiles go up; where they come down, that's
someone else's department. And, you know, I think it's obvious that's
not the right attitude that technologists should have, that engineers should have. They should
be very conscious about what the implications are. And if we think carefully about it, we can avoid
the kind of world that you just described where truth is all relative. There are going to be
people who benefit from a world where people don't check facts and where truth is relative
and popularity or fame or money is orthogonal to truth. But one of the reasons I suspect that
we've had so much progress over the past few hundred years is the invention of the scientific
method, which is a really powerful tool or meta tool for finding truth and favoring
things that are true versus things that are false. If they don't pass the scientific method,
they're less likely to be true. And the societies and the people and the organizations
that embrace that have done a lot better than the ones who haven't. And so I'm hoping
that people keep that in mind and continue to try to embrace not just the truth, but methods that
lead to the truth. So maybe on a more personal question, if one were to try to build a competitor
to Twitter, what would you advise? I mean, maybe the bigger meta question:
is that the right way to improve these systems? Yeah, no, I think that the underlying premise behind
Twitter and all these networks is amazing that we can communicate with each other. And I use it a
lot. There's a subpart of Twitter called econ Twitter, where we economists tweet to each other
and talk about new papers, something came out in the NBER, the National Bureau of Economic
Research, and we share it. People critique it. I think it's been a godsend because it's
really sped up the scientific process, if you can call economics scientific. Does it get
divisive in that little community? Sometimes, yeah, sure. Sometimes it does. It can also be done in nasty
ways. And there's the bad parts. But the good parts are great because you just speed up that
clock speed of learning about things. Instead of, like in the old, old days, waiting to read it in a
journal, or in the not-so-old days, when you'd see it posted on a website and you'd read it. Now on
Twitter, people will distill it down and there's a real art to getting to the essence of things.
So that's been great. But it certainly, we all know that Twitter can be a cesspool of
misinformation. And like I just said, unfortunately, misinformation tends to spread faster on Twitter
than truth. And there are a lot of people who are very vulnerable to it. I'm sure I've been fooled
at times. There are agents, whether from Russia or from political groups or others, that explicitly
create efforts at misinformation and efforts at getting people to hate each other. Or even more
importantly, lately, what I've discovered is nut picking. You know, the idea of nut picking.
No, what's that? It's a good term. Nut picking is when you find an extreme
nutcase on the other side, and then you amplify them and make it seem like that's typical of the
other side. So you're not literally lying. You're taking some idiot, you know, ranting on the subway,
or whether they're in the KKK or Antifa or whatever, and
normally nobody would pay attention to this guy; like 12 people would see him and that would be
the end. Instead, with video or whatever, you get tens of millions of people seeing it. And I've
seen this, you know, I look at it, I get angry. I'm like, I can't believe that person did something
so terrible. Let me tell all my friends about this terrible person. And it's a great way to
generate division. I talked to a friend who studied Russian misinformation campaigns,
and they're very clever about literally being on both sides of some of these debates.
They would have some people pretend to be part of BLM, some people pretend to be white nationalists,
and they would be throwing epithets at each other saying crazy things at each other.
And they're literally playing both sides of it. But their goal wasn't for one or the other to win.
It was for everybody to be hating and distrusting everyone else. So these tools
can definitely be used for that. And they are being used for that. It's been super destructive
for our democracy and our society. And the people who run these platforms, I think have a
social responsibility, a moral and ethical, personal responsibility to do a better job
and to shut that stuff down. Well, I don't know if you can shut it down, but to design them in a
way that, you know, as I said earlier, favors truth over falsehoods and favors positive types
of communication versus destructive ones. And just like you said, it's also on us. I try to
be all about love and compassion and empathy on Twitter. I mean, nut picking
is a fascinating term. One of the things that people do that's, I think, even more dangerous
is nut picking applied to individual statements of good people. So basically,
like worst-case analysis in computer science, it's taking, sometimes out of context, but sometimes in context,
one statement by a person. Like, because I've been reading The Rise and
Fall of the Third Reich, I've often talked about Hitler on this podcast with folks, and it is so
easy. That's really dangerous. But I'm leaning all in, I'm 100%, because, well, it's actually a
safer place than people realize, because it's history, and history in long form is actually very
fascinating to think about. But I could see how that could be taken totally out of context.
And it's very worrying. The thing about this digital infrastructure is not just that things spread,
but that they're sort of permanent. So anything you say,
at some point, someone can go back and find something you said three years ago, perhaps
jokingly, perhaps not, or maybe you were just wrong. And they can use that to
define you if they have the intent. And we all need to be a little more forgiving. I mean,
somewhere in my 20s, I told myself, I was going through all my different friends. And I was like,
you know, every one of them has at least like one nutty opinion.
There's like nobody who's completely without one, except me, of course. But I'm sure they thought that
about me too. And I just kind of learned to be a little bit tolerant, like,
okay, there's just, you know... Yeah, I wonder where the responsibility lies there. Like,
I think ultimately it's about leadership, like the previous president, Barack Obama's been,
I think quite eloquent at walking this very difficult line of talking about cancel culture.
But it's difficult; it takes skill. Yeah. Because you say the wrong thing and you piss off
a lot of people, and so you have to do it well. But then also the platforms, the technology,
should slow down, create friction in the spreading of this kind of nut picking in all its forms.
Absolutely. And to your point, we have to learn over time how to manage it.
I mean, we can't put it all on the platform and say, you guys design it, because if we're
idiots about using it, you know, nobody can design a platform that withstands that. And with every new
technology, people learn its dangers. You know, when someone invented fire, it was great for cooking
and everything, but then somebody burned himself, and then you had to learn how to avoid that,
or maybe somebody invented a fire extinguisher later, and so on. So you kind of figure out
ways of working around these technologies, someone invented seat belts, etc. And that's
certainly true with all the new digital technologies: we have to figure out, not just
technologies that protect us, but ways of using them that are more likely to be
successful than dangerous. So you've written quite a bit about how artificial intelligence
might change our world. How do you think if we look forward again, it's impossible to predict
the future. But if we look at trends from the past and we try to predict what's going to happen in
the rest of the 21st century, how do you think AI will change our world? That's a big question.
I'm mostly a techno-optimist. I'm not at the extreme, you know, The Singularity Is Near
end of the spectrum. But I do think that we're likely in for some significantly improved living
standards, some really important progress, even just the technologies that are already kind of
like in the can that haven't diffused. You know, when I talked earlier about the J curve, it could
take 10, 20, 30 years for an existing technology to have the kind of profound effects. And when I
look at whether it's, you know, vision systems, voice recognition, problem solving systems,
even if nothing new got invented, we would have a few decades of progress. So I'm excited about
that. And I think that's going to lead to us being wealthier, healthier. I mean, the health care is
probably one of the applications I'm most excited about. So that's good news. I don't think we're
going to have the end of work anytime soon. There's just too many things that machines still can't do.
When I look around the world and think of, whether it's child care or health care, cleaning the environment,
interacting with people, scientific work, or artistic creativity, these are things that, for now,
machines aren't able to do nearly as well as humans, even just something as mundane as,
you know, folding laundry or whatever. And many of these I think are going to be
years or decades before machines catch up. You know, I may be surprised on some of them,
but overall, I think there's plenty of work for humans to do. There's plenty of problems
in society that need the human touch. So we'll have to repurpose. We'll have to, as machines are
able to do some tasks, people are going to have to reskill and move into other areas. And that's
probably what's going to be going on for the next, you know, 10, 20, 30 years or more, kind of big
restructuring of society. We'll get wealthier and people will have to do new skills. Now,
if you turn the dial further, I don't know, 50 or 100 years into the future, then, you know,
maybe all bets are off. Then it's possible that machines will be able to do most of what people
do. You know, say 100 or 200 years, I think it's even likely. And at that point, we're more
in the sort of abundance economy, in a world where there's really little the humans
can do economically better than machines, other than be human. And, you know, that will take a
transition as well, kind of more of a transition of how we get meaning in life and what our values
are. But shame on us if we screw that up. I mean, that should be like great, great news.
And it kind of saddens me that some people see that as like a big problem. I think it should
be wonderful if people have all the health and material things that they need and can focus on
loving each other and discussing philosophy and playing and doing all the other things that
don't require work. Do you think you'd be surprised, like if we were to travel in
time 100 years into the future, do you think you'd be able to, like if I gave you a month
to talk to people, no, let's say a week, would you be able to understand what the
heck is going on? You mean if I was there for a week? Yeah, if you were there for a week. 100 years
in the future? Yeah. So I'll give you one thought experiment: isn't it possible
that we're all living in virtual reality by then? Yeah. No, I think that's very possible. You know,
I've played around with some of those VR headsets and they're not great, but I mean, the average
person spends many waking hours staring at screens right now. You know, they're kind of low res
compared to what they could be in 30 or 50 years. But certainly games and why not any other interactions
could be done with VR, and that would be a pretty different world. We'd all, you know, in some ways be as rich as we wanted. You know, we could have castles and be traveling anywhere we want. And it could obviously be multisensory. So that would be possible, you
know, there are people, you know, you've had Elon Musk on and others, and Nick Bostrom, you know, makes the simulation argument that maybe we're already there. We're already there. But in general, or do you not even think about it in this kind of way: thinking self-critically, how good are you as an economist at predicting what the future looks like? Well, it starts getting hard. I mean, I feel reasonably comfortable for the next, you know, five, 10, 20 years in terms of that path. When you start getting truly superhuman artificial intelligence, it will, kind of by definition, be able to think of a lot of things that I couldn't have thought of and create a world that I couldn't even imagine. And so I'm not sure I can predict what
that world is going to be like. One thing that AI researchers, AI safety researchers worry about
is what's called the alignment problem. When an AI is that powerful, then it can do all sorts of things. You really hope that its values are aligned with our values. And it's
even tricky to find what our values are. I mean, first off, we all have different values. And
secondly, maybe if we were smarter, we would have better values. Like, you know, I like to think that we have better values than we did in 1860, or in, you know, the year 200 BC, on a lot of dimensions, things that we consider barbaric today. And it may be that if I thought about it more deeply, I would also be more morally evolved. Maybe I'd be a vegetarian, or do other things where my future self would consider what I do right now kind of immoral. So that's a tricky problem,
getting the AI to do what we want, assuming it's even a friendly AI. I mean, I should probably
mention, there's a non-trivial other branch where we destroy ourselves, right? I mean, there are a lot of exponentially improving technologies that could be ferociously destructive, whether it's nanotechnology, or biotech and weaponized viruses, or AI. And nuclear weapons. Nuclear weapons, of course, the good old, old-school technology that could be devastating or even existential. And new things yet to be invented.
So that's a branch that, you know, I think is pretty significant. And there are those who think that one of the reasons we haven't been contacted by other civilizations, right, is that once you get to a certain level of complexity and technology, there are just too many ways to go wrong. There's a lot of ways to blow yourself up. And people, or I should say, species, end up falling into one of those traps. The Great Filter. The Great Filter. I mean,
there's an optimistic view of that. If there is literally no intelligent life out there in the
universe, or at least in our galaxy, that means that we've passed at least one of the great
filters or some of the great filters that we survived. Yeah, no, I think it's Robin Hanson, and maybe others, who has a good way of thinking about this: that if there are no other intelligent creatures out there that we've been able to detect, one possibility is that there's a filter ahead of us, and when you get a little more advanced, maybe in 100 or 1,000 or 10,000 years, things just get destroyed for some reason. Yeah, the other one is that the Great Filter is behind us. That would be good: most planets don't even evolve life, or if they do evolve life, it doesn't evolve into intelligent life.
Maybe we've gotten past that. And so now maybe we're on the good side of the great filter.
So if we sort of rewind back and look at the range where we could say something a little bit more comfortably, at five years and 10 years out, you've written about jobs and the impact that artificial intelligence might have on our economy and on jobs. It's a fascinating question of what kind of jobs are safe and what kind of jobs are not. Can you maybe speak to your intuition about how we should think about AI changing the landscape of
work. Sure, absolutely. Well, this is a really important question because I think we're very
far from artificial general intelligence, which is AI that can just do the full breadth of what humans can do. But we do have human-level or superhuman-level narrow intelligence, narrow artificial intelligence. And, you know, obviously my calculator can do math a lot better than I can, and there are a lot of other things machines can do better than I can. So Tom Mitchell and I actually set out to address that question. We wrote a paper called "What Can Machine Learning Do?" that was in Science. And we went and interviewed a whole bunch of AI experts
and kind of synthesized what they thought machine learning was good at and wasn't good at. And we
came up with what we called a rubric, basically a set of questions you can ask about any task that
will tell you whether it's likely to score high or low on suitability for machine learning.
And then we've applied that to a bunch of tasks in the economy. In fact, there's a data set of all
the tasks in the US economy, believe it or not, it's called O*NET. The US government put it together, part of the Bureau of Labor Statistics. They divide the economy into about 970 occupations, like,
you know, bus driver, economist, primary school teacher, radiologist. And then for each one of
them, they describe which tasks need to be done. Like for radiologists, there are 27 distinct tasks.
So we went through all those tasks to see whether or not a machine could do them. And what we found
interestingly was... Brilliant study, by the way. That's so awesome. Yeah, thank you. So what we found was
that there was no occupation in our data set where machine learning just ran the table and did
everything. And there was almost no occupation where machine learning didn't have like a significant
ability to do things. Like take radiology: a lot of people I hear saying, you know, it's the end of radiology. And one of the 27 tasks is reading medical images, a really important one,
like it's kind of a core job. And machines have basically gotten as good or better than
radiologists. There was just an article in Nature last week, but you know, they've been
publishing them for the past few years, showing that machine learning can do as well as humans
on many kinds of diagnostic imaging tasks. But other things radiologists do, you know, they
sometimes administer conscious sedation. They sometimes do physical exams, they have to synthesize
the results and explain to the other doctors or to the patients. In all those categories,
machine learning isn't really up to snuff yet. So for that job, we're going to see a lot of restructuring: parts of the job they'll hand over to machines, and others humans will do more of. That's been more or less the pattern for all of them. So, you know, to oversimplify, we see a lot of restructuring and reorganization of work, and it's really going to be a great time, it is a great time, for smart entrepreneurs and managers to do that reinvention of work. We're not going to see mass unemployment.
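To make the rubric idea concrete, here is a minimal sketch of how task-level scores might roll up into an occupation-level index. Everything here, the occupations, the task lists, the 1-to-5 scale, and the numbers, is invented for illustration; the actual paper applies a much richer rubric to roughly 970 O*NET occupations.

```python
# Hypothetical sketch in the spirit of the Brynjolfsson-Mitchell rubric.
# Occupations, tasks, and scores below are invented for illustration.

# Each task gets a suitability-for-machine-learning (SML) score,
# say on a 1 (poorly suited) to 5 (well suited) scale.
occupations = {
    "radiologist": {
        "read medical images": 4.8,           # well-structured input-output mapping
        "administer conscious sedation": 1.5,  # physical, high-stakes, human contact
        "explain results to patients": 2.0,    # emotional intelligence, interaction
    },
    "cashier": {
        "scan barcodes": 4.9,
        "recognize produce items": 4.5,
        "resolve customer complaints": 2.2,
    },
}

def occupation_sml(task_scores: dict) -> float:
    """Aggregate task-level scores into one occupation-level index."""
    return sum(task_scores.values()) / len(task_scores)

for name, tasks in occupations.items():
    print(f"{name}: SML index = {occupation_sml(tasks):.2f}")
```

The finding he describes, no occupation where machine learning ran the table and none it couldn't touch, shows up in a toy model like this as mid-range aggregate indices built from very uneven task-level scores, which is why the prediction is task restructuring rather than wholesale replacement.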
To get more specifically to your question, the kinds of tasks that machines tend to be good at
are a lot of routine problem solving, mapping inputs X into outputs Y, if you have a lot of data
on the X's and the Y's, the inputs and the outputs, you can do that kind of mapping and find the
relationships. They tend to not be very good at, even now, fine motor control and dexterity, emotional intelligence and human interactions, and thinking outside the box, creative work. If you give it a well-structured task, machines can be very good at it. But even asking the right questions, that's hard. There's a quote that Andrew McAfee and I use in our book, The Second Machine Age.
Apparently, Pablo Picasso was shown an early computer and he came away kind of
unimpressed. He goes, well, I don't see what all the fuss is about. All that does is answer questions.
And to him, the interesting thing was asking the questions.
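The "mapping inputs X into outputs Y" pattern he describes is essentially supervised learning. Here is a minimal, purely illustrative sketch in Python using scikit-learn on synthetic data; none of it comes from the conversation:

```python
# Minimal illustration of the "map X to Y given lots of examples" pattern,
# using scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # inputs: 1000 examples, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outputs: generated by a hidden rule

# Given enough (X, y) pairs, the model recovers the mapping.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Note that everything here is well structured: the features, the label, and the objective are given. The part Picasso was gesturing at, deciding which question to ask and which X and Y matter, sits outside the code.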
Try to replace me, GPT-3, I dare you, although some people think I'm a robot. You have this cool plot that shows, I just remember where economists landed, where I think the X axis is the income, and then the Y axis is, I guess, aggregating the information of how replaceable the job is, or I think there's an index. There's a suitability for machine learning index, exactly. So we have all 970 occupations on that chart. It's a cool plot. And all four corners of the scatter have some occupations. But there is a definite pattern,
which is the lower wage occupations tend to have more tasks that are suitable for machine
learning, like cashiers. I mean, anyone who's gone to a supermarket or CVS knows that machines not only read barcodes, but they can recognize an apple and an orange, and a lot of the things that human cashiers used to be needed for. At the other end of the spectrum, there are some jobs
like airline pilot that are among the highest paid in our economy, but also a lot of them
are suitable for machine learning. A lot of those tasks are. And then, yeah, you mentioned
economists, I couldn't help peeking at those. And they're paid a fair amount, maybe not as much as some of us think they should be. But they have some tasks that are suitable for machine
learning. But for now, at least, most of the tasks that economists do didn't end up being
in that category. And I should say, I didn't create that data. We just took the analysis,
and that's what came out of it. And over time, that scatter plot will be updated as the technology
improves. But it was just interesting to see the pattern there. And it is a little troubling in so
far as if you just take the technology as it is today, it's likely to worsen income inequality
in a lot of dimensions. So on this topic of the effect of AI on our landscape of work,
one of the people that have been speaking about it in the public domain, public discourse,
is the presidential candidate, Andrew Yang. What are your thoughts about Andrew? What are your
thoughts about UBI, that Universal Basic Income, that he made one of the core ideas? By the way,
he has hundreds of ideas about everything. It's kind of interesting. But what are your thoughts
about him and what are your thoughts about UBI? Let me answer the question about his broader approach
first. I just love that. He's really thoughtful, analytical. I agree with his values. So that's
awesome. And he read my book and mentions it sometimes. That makes me even more excited.
And the thing that he really made the centerpiece of his campaign was UBI. And I was originally
kind of a fan of it. And then as I studied it more, I became less of a fan, although I'm beginning
to come back a little bit. So let me tell you a little bit about my evolution. As an economist,
we are looking at the problem of people not having enough income. And the simplest thing is, well,
why don't we write them a check? Problem solved. But then I talked to my sociologist friends and
they really convinced me that just writing a check doesn't really get at the core values.
Voltaire once said that work solves three great ills, boredom, vice, and need. And you can deal
with the need thing by writing a check. But people need a sense of meaning. They need something to
do. And when, you know, say steelworkers or coal miners lost their jobs and were just given checks,
alcoholism, depression, divorce, all those social indicators, drug use all went way up. People just
weren't happy just sitting around collecting a check. Maybe it's part of the way they were raised.
And maybe it's something innate in people that they need to feel wanted and needed. So it's not
as simple as just writing people a check. You need to also give them a way to have a sense of purpose.
And that was important to me. And the second thing is that as I mentioned earlier, you know,
we are far from the end of work. You know, I don't buy the idea that there's just like not enough
work to be done. I see like our cities need to be cleaned up. And I mean, robots can't do most of
that. You know, we need to have better childcare, we need better healthcare, we need to take care
of people who are mentally ill or older. We need to repair our roads. There's so much work that
requires at least partly, maybe entirely, a human component. So rather than like write all these
people off, well, let's find a way to repurpose them and keep them engaged. Now that said,
I would like to see more buying power from people who are sort of at the bottom end of the spectrum.
The economy has been designed and evolved in a way that's, I think, very unfair to a lot of hard
working people. I see super hardworking people who aren't really seeing their wages grow over the
past 20, 30 years, while some other people who have been super smart and/or super lucky have made billions or hundreds of billions. And I don't think they need those hundreds of billions
to have the right incentives to invent things. I think if you talk to almost any of them, as I
have, you know, they don't think that they need an extra $10 billion to do what they're doing. Most
of them probably would love to do it for only a billion or maybe for nothing. For nothing, many
of them. Yeah. I mean, you know, an interesting point to make is, you know, like, do we think that
Bill Gates would have founded Microsoft if tax rates were 70%? Well, we know he would have because
there were tax rates of 70% when he founded it. You know, so I don't think that's as big a deterrent,
and we could provide more buying power to people. My own favorite tool is the earned income tax
credit, which is basically a way of supplementing income of people who have jobs and giving employers
an incentive to hire even more people. The minimum wage can discourage employment, but the earned
income tax credit encourages employment by supplementing people's wages. You know, if the
employer can only afford to pay him $10 for a task, the rest of us kick in another $5 or $10
and bring their wages up to $15 or $20 total. And then they have more buying power, and entrepreneurs are thinking, how can we cater to them? How can we make products for them? And it
becomes a self-reinforcing system where people are better off. Andrew Ng and I had a good discussion
where he suggested instead of a universal basic income, he suggested, or instead of an unconditional
basic income, how about a conditional basic income where the condition is you learn some new skills,
we need to reskill our workforce. So let's make it easier for people to find ways to get those
skills and get rewarded for doing them. And that's kind of a neat idea as well.
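A tiny sketch of the wage-supplement arithmetic he described a moment ago. The $10 and $5 figures come from the conversation; the function itself is a hypothetical simplification, not the real EITC schedule, which phases in and out with income, filing status, and family size:

```python
# Toy illustration of the earned-income-tax-credit logic described above:
# the employer pays what the task is worth to them, and a public supplement
# tops the wage up. This is a simplification, not the actual EITC formula.
def take_home_wage(employer_wage: float, supplement: float) -> float:
    """Worker receives the market wage plus a public supplement."""
    return employer_wage + supplement

market_wage = 10.0   # what the employer can afford to pay for the task
supplement = 5.0     # the amount "the rest of us kick in"
print(f"take-home wage: ${take_home_wage(market_wage, supplement):.2f}")
# Unlike a minimum wage, the employer's cost stays at $10, so the incentive
# to hire is preserved while the worker takes home $15.
```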
That's really interesting. So I mean, one of the questions, one of the dreams of UBI is that
you provide some little safety net while you retrain, while you learn a new skill.
But I think, I guess you're speaking to the intuition that that doesn't always work,
like there needs to be some incentive to reskill, to train, to learn new things.
Well, I think it helps. I mean, there are lots of self-motivated people, but there are also
people that maybe need a little guidance or help. And I think it's a really hard question for someone
who is losing a job in one area to know what is the new area I should be learning skills in.
And we could provide a much better set of tools and platforms that map to, okay, here's a set
of skills you already have. Here's something that's in demand. Let's create a path for you to go
from where you are to where you need to be. So I'm a total, how do I put it nicely about myself?
I'm totally clueless about the economy. It's not totally true, but it's a pretty good approximation.
If you were to try to fix our tax system,
or maybe from another side, if there is fundamental problems in taxation or some
fundamental problems about our economy, what would you try to fix? What would you try to speak to?
You know, I definitely think our whole tax system, our political and economic system has
gotten more and more screwed up over the past 20, 30 years. I don't think it's
that hard to make headway in improving it. I don't think we need to totally reinvent stuff.
A lot of it is what I've elsewhere with Andy and others called economics 101. You know,
there's just some basic principles that have worked really well in the 20th century that we
sort of forgot, you know, in terms of investing in education, investing in infrastructure,
welcoming immigrants, having a tax system that was more progressive and fair. At one point,
tax rates on top incomes were significantly higher, and they've come down a lot, to the point where in many cases they're lower now than they are for poorer people. And we could do things like an earned income tax credit. To get a little more wonky, I'd like to see more Pigouvian taxes.
What that means is you tax things that are bad instead of things that are good. So right now,
we tax labor, we tax capital, which is unfortunate, because one of the basic principles of economics is that if you tax something, you tend to get less of it. And, you know, right now, there's still work to be done and still capital to be invested. Instead, we should be taxing
things like pollution and congestion. And if we did that, we would have less pollution. So a carbon
tax is, you know, something almost every economist would say is a no-brainer, whether they're Republican or Democrat. Greg Mankiw, who was head of George Bush's Council of Economic Advisers, and Dick Schmalensee, who is another Republican economist, agree, and of course, a lot of Democratic
economists agree as well. If we taxed carbon, we could raise hundreds of billions of dollars.
We could take that money and redistribute it through an earned income tax credit or other
things so that overall, our tax system would become more progressive. We could tax congestion.
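As a rough sketch of the tax-and-redistribute arithmetic just described, with all figures being loose, illustrative assumptions rather than numbers from the conversation:

```python
# Back-of-the-envelope Pigouvian tax-and-dividend arithmetic.
# All figures are rough illustrations, not a policy proposal.
emissions_tonnes = 5e9    # US CO2 emissions, very roughly, per year
tax_per_tonne = 40.0      # an illustrative carbon tax, in $/tonne
population = 330e6        # rough US population

revenue = emissions_tonnes * tax_per_tonne
dividend_per_person = revenue / population
print(f"revenue: ${revenue / 1e9:.0f}B, dividend: ${dividend_per_person:.0f}/person")
# A flat per-person dividend is worth relatively more to lower incomes,
# which is why the combined scheme ends up progressive even though
# everyone pays the same tax rate on carbon.
```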
One of the things that kills me as an economist is every time I sit in a traffic jam, I know that
it's completely unnecessary. This is a complete waste of time. You could just visualize the cost in lost productivity that this creates. Exactly, because it's imposing costs on me and all the people around me. And if they charged a congestion tax, they would take that same amount of money, and it would streamline the roads. Like when you're in Singapore, the traffic just flows because they have a congestion tax. They listen to economists. They invited me and others to go talk to them. And then I'd still be paying, I'd be paying a congestion tax
instead of paying in my time. But that money would now be available for healthcare,
be available for infrastructure, or be available just to give to people so they could buy food or
whatever. So it saddens me when you're sitting in a traffic jam, it's like taxing me and then
taking that money and dumping it in the ocean, just like destroying it. So there are a lot of
things like that that economists agree on, and I'm not like doing anything radical here. Most good economists would probably agree with me point by point on these things. And if we did those things, our whole economy would become much more efficient, become fairer, and we could invest in R&D and research, which is as close to a free lunch as we have. My erstwhile MIT colleague, Bob Solow, got the
Nobel Prize, not yesterday, but 30 years ago, for describing that most improvements in living
standards come from tech progress. And Paul Romer later got a Nobel Prize for noting that
investments in R&D and human capital can speed the rate of tech progress. So if we do that,
then we'll be healthier and wealthier. Yeah, from an economics perspective,
I remember taking undergrad econ, you mentioned econ 101. It seemed, from all the plots I saw, that R&D is obviously about as close to a free lunch as we have. It seemed obvious that we should do more research. It is. Like, what? Like, there's no... Well, we should do more basic research. I mean, so let me just be clear, it'd be great if everybody did more research. But let me make this interesting distinction between applied development and basic research. So applied development, like, how do we get this self-driving car feature to work better in the Tesla? That's
great for private companies because they can capture the value from that. If they make a
better self-driving car system, they can sell cars that are more valuable and then make money.
So there's an incentive there; that's not a big problem. And smart companies, Amazon,
Tesla and others are investing in it. The problem is with basic research, like coming up with
core basic ideas, whether it's in nuclear fusion or artificial intelligence or biotech,
there, if someone invents something, it's very hard for them to capture the benefits from it.
It's shared by everybody, which is great in a way, but it means that they're not going to
have the incentives to put as much effort into it. There you need... It's a classic public good.
There you need the government to be involved. And the US government used to be investing much
more in R&D, but we have slashed that part of the government really foolishly. And we're all poorer,
significantly poorer as a result. Growth rates are down. We're not having the kind of scientific
progress we used to have. It's been sort of short-term, eating the seed corn, whatever
metaphor you want to use, where people grab some money, put it in their pockets today, but
five, 10, 20 years later, they're a lot poorer than they otherwise would have been.
So we're living through a pandemic right now globally in the United States.
From an economics perspective, how do you think this pandemic will change the world?
It's been remarkable. And it's horrible how many people have suffered, the amount of death,
the economic destruction. It's also striking just the amount of change in work that I've seen.
In the last 20 weeks, I've seen more change than there was in the previous 20 years.
There's been nothing like it since probably the World War II mobilization in terms of
reorganizing our economy. The most obvious one is the shift to remote work. And I and many other
people stopped going into the office and teaching my students in person. I did a study on this with
a bunch of colleagues at MIT and elsewhere. And what we found was that before the pandemic,
in the beginning of 2020, about one in six, a little over 15% of Americans were working remotely.
When the pandemic hit, that grew steadily and hit 50%, roughly half of Americans working at home.
So a complete transformation. And of course, it wasn't even, it wasn't like everybody did it.
If you're an information worker, professional, if you work mainly with data,
then you're much more likely to work at home. If you're a manufacturing worker, working with
other people or physical things, then it wasn't so easy to work at home. And instead,
those people were much more likely to be laid off or unemployed. So it's been something that
has had very disparate effects on different parts of the workforce. Do you think it's going to be
sticky, in the sense that after a vaccine comes out and the economy reopens, do you think
remote work will continue? That's a great question. My hypothesis is, yes, a lot of it will,
of course, some of it will go back, but a surprising amount of it will stay. I personally,
for instance, I moved my seminars, my academic seminars to Zoom, and I was surprised how well
it worked. So it works. Yeah. I mean, obviously, we are able to reach a much broader audience.
So we have people tuning in from Europe and other countries, just all over the United States,
for that matter. I also actually found that in many ways, it's more egalitarian. We use the chat
feature and other tools. And grad students and others who might have been a little shy about
speaking up, we now kind of have more of an ability for lots of voices. And they're answering each other's questions, so you kind of get parallel conversations. If someone had a question about some of the data
or a reference or whatever, then someone else in the chat would answer it. And the whole thing
just became like a higher bandwidth, higher quality thing. So I thought that was kind of
interesting. I think a lot of people are discovering that these tools that, thanks to
technologists, have been developed over the past decade, they're a lot more powerful than we
thought. I mean, all the terrible things we've seen with COVID and the real failure of many of our
institutions that I thought would work better. One area that's been a bright spot is our
technologies. Bandwidth has held up pretty well. And all of our email and other tools have just
scaled up kind of gracefully. So that's been a plus. Economists call this question of whether it'll go back hysteresis. The classic example is, when you boil an egg, after it gets cold again, it stays hard. And I think that we're going to have a fair amount of hysteresis in the economy.
We're going to move to this new, we have moved to a new remote work system. And it's not going to
snap all the way back to where it was before. One of the things that worries me
is that the people with lots of followers on Twitter, people with voices that can be magnified by, you know, reporters and all that kind of stuff,
are the people that fall into this category that we were referring to just now,
where they can still function and be successful with remote work. And then there is a kind of
quiet suffering of what feels like millions of people whose jobs are disturbed profoundly
by this pandemic, but they don't have many followers on Twitter.
What do we, and again, I apologize, but I've been reading The Rise and Fall of the Third
Reich and there's a connection to the depression on the American side. There's a deep,
complicated connection to how suffering can turn into forces that potentially change the world in destructive ways. So something I worry about is, what is this suffering going
to materialize itself in five, 10 years? Is that something you worry about? It's like the
center of what I worry about. And let me break it down to two parts. You know, there's a moral
and ethical aspect to it, that we need to relieve this suffering. I mean, I share the values of, I think, most Americans, or most people on the planet: we like to see shared prosperity.
And we would like to see people not falling behind and they have fallen behind, not just
due to COVID, but in the previous couple of decades, median income has barely moved,
you know, depending on how you measure it. And the incomes at the top 1% have skyrocketed.
And part of that is due to the ways technology has been used. Part of it has been due to, frankly, our political system, which has continually shifted more wealth to those people who have the powerful interests. So there's just, I think, a moral imperative to do a better job.
And ultimately, we're all going to be wealthier if more people can contribute, more people have
the wherewithal. But the second thing is that there's a real political risk. I'm not a political
scientist, but you don't have to be one, I think, to see how a lot of people are really upset
that they're getting a raw deal. And, you know, they wanted to smash the system in different ways in 2016 and 2018. And now, I think there are a lot of people who are looking at the
political system, and they feel like it's not working for them, and they just want to do something
radical. Unfortunately, demagogues have harnessed that in a way that is pretty destructive to the
country. And an analogy I see is what happened with trade. You know, almost every economist
thinks that free trade is a good thing that when two people voluntarily exchange almost by definition,
they're both better off if it's voluntary. And so generally, trade is a good thing. But they
also recognize that trade can lead to uneven effects, that there can be winners and losers: some of the people who didn't have the skills to compete with somebody else, or didn't have other assets. And so trade can shift prices in ways that are adverse to some people. So there's a
formula that economists have, which is that you have free trade, but then you compensate the people
who were hurt. And free trade makes the pie bigger. And since the pie is bigger, it's possible for
everyone to be better off. You can make the winners better off, but you can also compensate those who
don't win. And so they end up being better off as well. What happened was that we didn't fulfill that promise. We did have increased free trade in the 80s and 90s. But we
didn't compensate the people who were hurt. And so they felt like the, you know, the people in power
reneged on the bargain. And I think they did. And so then there's a backlash against trade. And now,
both political parties, but especially Trump and company have really pushed back against free trade.
Ultimately, that's bad for the country. Ultimately, that's bad for living standards. But in a way,
I can understand that people felt they were betrayed. Technology has a lot of similar
characteristics. Technology can make us all better off. It makes the pie bigger. It creates wealth
and health. But it can also be uneven. Not everyone automatically benefits. It's possible for some
people, even a majority of people to get left behind, while a small group benefits.
Most economists would say, well, let's make the pie bigger, but let's make sure we adjust the
system so we compensate the people who are hurt. And since the pie is bigger, we can make the rich
richer, we can make the middle class richer, we can make the poor richer. Mathematically, everyone
could be better off. But again, we're not doing that. And again, people are saying this isn't
working for us. And again, instead of fixing the distribution, a lot of people are beginning to say,
hey, technology sucks. We've got to stop it. Let's throw rocks at the Google bus.
Let's blow it up.
Let's blow it up. And there were the Luddites almost exactly 200 years ago who smashed the
looms and the spinning machines, because they felt like those machines weren't helping them.
We have a real imperative, not just to do the morally right thing, but to do the thing that is
going to save the country, which is make sure that we create not just prosperity, but shared prosperity.
So you've been at MIT for over 30 years, I think.
Oh, don't tell everyone how old I am. Yeah, that's true. That's true.
And you've now moved to Stanford. I'm going to try not to say anything about how great MIT is.
What's that move been like? What is east coast to west coast?
Well, MIT is great. MIT has been very good to me. It continues to be very good to me.
It's an amazing place. I continue to have so many amazing friends and colleagues there.
I'm very fortunate to have been able to spend a lot of time at MIT.
Stanford is also amazing. And part of what attracted me out here was not just the weather,
but also Silicon Valley, let's face it, is really more of the epicenter of the technological
revolution. And I want to be close to the people who are inventing AI. A lot of it is being invented at MIT too, for that matter, and in Europe and China and elsewhere.
But being a little closer to some of the key technologists was something that was important
to me. And it may be shallow, but I also do enjoy the good weather.
And I felt a little ripped off when I came here a couple of months ago,
and immediately there are the fires and my eyes were burning, the sky was orange,
and there's the heat waves. So it wasn't exactly what I'd been promised,
but fingers crossed it'll get back to better.
So maybe on a brief aside, there's been some criticism of academia and universities
from different avenues. And I, as a person who's gotten to enjoy universities as the pure playground of ideas that they can be, always try to find the words to tell people that
these are magical places. Is there something that you can speak to that is beautiful or
powerful about universities? Well, sure. I mean, first off,
I mean, economists have this concept called revealed preference. You can ask people what
they say or you can watch what they do. And so obviously by revealed preference, I love academia.
I could be doing lots of other things, but it's something I enjoy a lot.
And I think the word magical is exactly right. At least it is for me. I do what I love.
Hopefully my Dean won't be listening, but I would do this for free.
You know, it's just what I like to do. I like to do research. I love to have conversations
like this with you and with my students, with my fellow colleagues. I love being around the
smartest people I can find and learning something from them and having them challenge me. And
that just gives me joy. And every day I find something new and exciting to work on. And
a university environment is really filled with other people who feel that way. And so I feel
very fortunate to be part of it. And I'm lucky that I'm in a society where I can actually get
paid for it and put food on the table while doing the stuff that I really love. And I hope someday
everybody can have jobs that are like that. And I appreciate that it's not necessarily easy for
everybody to have a job that they both love and also they get paid for. So there are things that
don't go well in academia, but by and large, I think it's a kind of, you know, kinder,
gentler version of a lot of the world. You know, we sort of cut each other a little slack on things
like, you know, on just a lot of things. You know, of course, there's harsh debates and
discussions about things and some petty politics here and there. I personally, I try to stay away
from most of that sort of politics. It's not my thing. And so it doesn't affect me most of the
time, sometimes a little bit maybe. But, you know, being able to pull together something like we have with the Digital Economy Lab, we get all these brilliant grad students and undergraduates and postdocs that are just doing stuff that I learn from. And every one of them has some aspect of what they're doing that's just beyond me, it's like way, way more brilliant.
And that's really, to me, actually, I really enjoy being in a room with lots of other smart people. And Stanford has made it very easy to attract, you know, those people. I just, you know, say, I'm going to do a seminar, whatever, and the people come, they come and want to work with me. We get funding, we get data sets, and it's come together real nicely.
And the rest is just fun. It's fun. Yeah. And we feel like we're working on important problems,
you know, and we're doing things that, you know, I think are first order in terms of what's
important in the world. And that's very satisfying to me. Maybe a bit of a fun question. What three
books, technical, fiction, philosophical, have you enjoyed or had a big, big impact on your life? Well, I guess I go back to like my teen years. And, you know, I read Siddhartha, which is a philosophical book and kind of helps keep me centered. By Hermann Hesse? Exactly. Don't get too wrapped up in material things or other things, and just sort of, you know, try to find peace with things. A book that actually
influenced me a lot in terms of my career was called The Worldly Philosophers by Robert
Heilbroner. It's actually about economists. It goes through a series of different economists, written in a very lively form. And it probably sounds boring, but it did describe, whether it's
Adam Smith or Karl Marx or John Maynard Keynes and, and each of them sort of what their key
insights were, but also kind of their personalities. And I think that's one of the reasons I became
an economist was, was just understanding how they grappled with the big questions of the world.
So would you recommend it as a good whirlwind overview of the history of economics?
Yeah. Yeah. I think that's exactly right. It kind of takes you through the different things.
And, you know, so you can understand how they reached their thinking, some of the strengths and weaknesses. I mean, it's probably a little out of date now. It needs to be updated a bit,
but, you know, you could at least look through the, the first couple hundred years of economics,
which is not a bad place to start. More recently, I mean, a book I really enjoyed is by my,
my friend and colleague, Max Tegmark, called Life 3.0. You should have him on your podcast,
if you haven't already. He was episode number one. Oh my God. And he's back. He'll be back.
He'll be back soon. Yeah. No, he's terrific. I love the way his brain works. And he makes
you think about profound things. He's got such a joyful approach to life. And so that's been
a great book. And, you know, I learned a lot from it, I think everybody would. He explains it in a way, even though he's so brilliant, that, you know, everyone can understand, that I can understand. You know, that's three, but let me mention
maybe one or two others. I mean, I recently read More from Less by my sometime co-author Andrew
McAfee. It made me optimistic about how we can continue to have rising living standards
while living more lightly on the planet. In fact, because of higher living standards,
because of technology, because of digitization that I mentioned, we don't have to have as big
an impact on the planet. And that's a great story to tell. And he documents it very carefully.
You know, a personal kind of self-help book that I found kind of useful is Atomic Habits.
I think it's, what's his name? James Clear? Yeah, James Clear. He's just, yeah,
it's a good name because he writes very clearly. And, you know, most of the sentences I read in
that book, I was like, yeah, I know that, but it just really helps to have somebody like remind
you and tell you and kind of just reinforce it. And so build habits in your life that you hope will have a positive impact, and they don't have to be big things. It could be just tiny little things. Exactly. I mean, the word atomic, it's a little bit of a pun. I think he says, you know, one, atomic means a really small thing, so take these little things, but also, like atomic power, it can have, you know, a big impact. That's funny. Yeah. The biggest, most ridiculous question,
especially to ask an economist, but also a human being, what's the meaning of life?
I hope you've gotten the answer to that from somebody else. I think we're all still working
on that one. But what is it? You know, I actually learned a lot from my son, Luke, and he's 19 now,
but he's always loved philosophy. And he reads way more sophisticated philosophy than I do.
I once took him to Oxford and he spent the whole time like pulling all these obscure books down
and reading them. And a couple of years ago, we had this argument, and he was trying to convince
me that hedonism was the ultimate, you know, meaning of life, just pleasure-seeking. And... Well, how old was he at the time? 17.
But he made a really good like intellectual argument for it too. And you know,
of course, it just didn't strike me as right. And I think that, you know, while I am kind of
a utilitarian, like, you know, I do think we should do the greatest good for the greatest number,
that's just too shallow. And I think I've convinced myself that real happiness doesn't
come from seeking pleasure. It's a little ironic. Like if you really focus on
being happy, I think it doesn't work. You got to like be doing something bigger. It's,
I think the analogy I sometimes use is, you know, when you look at a dim star in the sky,
if you look right at it, it kind of disappears, but you have to look a little to the side and then
the parts of your retina that are better at absorbing light, you know, can pick it up better.
It's the same thing with happiness. I think you need to sort of find some other goal, some meaning in life. And that ultimately makes you happier than if you go
squarely at just pleasure. And so for me, you know, the kind of research I do is, I think, trying to change the world, make the world a better place. And I'm not like an evolutionary
psychologist, but my guess is that our brains are wired, not just for pleasure, but we're social
animals, and we're wired to like help others. And ultimately, you know, that's something that's
really deeply rooted in our psyche. And if we do help others, if we do, or at least feel like
we're helping others, you know, our reward systems kick in, and we end up being more
deeply satisfied than if we just do something selfish and shallow.
Beautifully put, I don't think there's a better way to end it. Eric, you're one of the people when I
first showed up at MIT that made me proud to be at MIT. So it's so sad that you're now at Stanford,
but I'm sure you'll do wonderful things at Stanford as well. I can't wait for your future books, which people should definitely read. Well, thank you so much. And I think we're all
part of the invisible college, as we call it. You know, we're all part of this intellectual
and human community where we all can learn from each other. It doesn't really matter
physically where we are so much anymore. Beautiful. Thanks for talking today. My pleasure.
Thanks for listening to this conversation with Erik Brynjolfsson, and thank you to our sponsors:
Ventura Watches, the maker of classy, well-performing watches, FourSigmatic, the maker of delicious
mushroom coffee, ExpressVPN, the VPN I've used for many years to protect my privacy on the Internet,
and Cash App, the app I use to send money to friends. Please check out these sponsors in
the description to get a discount and to support this podcast. If you enjoy this thing, subscribe
on YouTube, review it with five stars on Apple Podcast, follow on Spotify, support on Patreon,
or connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from
Albert Einstein. It has become appallingly obvious that our technology has exceeded our humanity.
Thank you for listening, and hope to see you next time.