Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.

The following is a conversation with Eric Schmidt.
He was the CEO of Google for 10 years
and a chairman for six more,
guiding the company through an incredible period of growth
and a series of world-changing innovations.
He is one of the most impactful leaders
in the era of the internet and a powerful voice
for the promise of technology in our society.
It was truly an honor to speak with him
as part of the MIT course
on artificial general intelligence
and the artificial intelligence podcast.
And now here's my conversation with Eric Schmidt.
What was the first moment
when you fell in love with technology?
I grew up in the 1960s as a boy
where every boy wanted to be an astronaut
and part of the space program.
So like everyone else of my age,
we would go out to the cow pasture behind my house,
which was literally a cow pasture
and we would shoot model rockets off.
And that I think is the beginning.
And of course, generationally today,
it would be video games and all the amazing things
that you can do online with computers.
There's a transformative, inspiring aspect of science
and math that maybe rockets would instill in individuals.
You've mentioned yesterday that eighth grade math
is where the journey through the mathematical universe
diverges for many people.
It's this fork in the roadway.
There's a professor of math at Berkeley, Edward Frenkel.
I'm not sure if you're familiar with him.
I am.
He has written this amazing book.
I recommend to everybody called Love and Math,
two of my favorite words.
He says that if painting was taught like math,
then students would be asked to paint a fence,
which is his analogy of essentially how math is taught.
And so you never get a chance to discover the beauty
of the art of painting or the beauty of the art of math.
So how, when, and where did you discover that beauty?
I think what happens with people like myself
is that you're math-enabled pretty early,
and all of a sudden you discover that you can use that
to discover new insights.
The great scientists will all tell a story,
the men and women who are fantastic today,
that somewhere when they were in high school
or in college, they discovered that they could
discover something themselves.
And that sense of building something,
of having an impact that you own,
drives knowledge acquisition and learning.
In my case, it was programming,
and the notion that I could build things
that had not existed, that I had built,
that had my name on it.
And this was before open source,
but you could think of it as open source contributions.
So today, if I were a 16 or 17 year old boy,
I'm sure that I would aspire as a computer scientist
to make a contribution like the open source heroes
of the world today.
That would be what would be driving me,
and I'd be trying and learning and making mistakes
and so forth in the ways that it works.
The repository that GitHub represents
and that open source libraries represent
is an enormous bank of knowledge
of all of the people who are doing that.
And one of the lessons that I learned at Google
was that the world is a very big place
and there's an awful lot of smart people,
and an awful lot of them are underutilized.
So here's an opportunity, for example,
building parts of programs, building new ideas
to contribute to the greater good of society.
So in that moment in the 70s,
the inspiring moment where there was nothing,
and then you created something through programming,
that magical moment.
So in 1975, I think you've created a program called Lex,
which I especially like because my name is Lex.
So thank you, thank you for creating a brand
that established a reputation that's long-lasting, reliable,
has had a big impact on the world, and is still used today.
So thank you for that.
But more seriously, in that time, in the 70s,
as an engineer, personal computers were being born.
Do you think you would be able to predict
the 80s, 90s, and the aughts of where computers would go?
I'm sure I could not and would not have gotten it right.
I was the beneficiary of the great work
of many, many people who saw it clearer than I did.
With Lex, I worked with a fellow named Michael Lesk,
who was my supervisor, and he essentially helped me
architect and deliver a system that's still in use today.
After that, I worked at Xerox Palo Alto Research Center,
where the Alto was invented, and the Alto is the predecessor
of the modern personal computer, or Macintosh, and so forth.
And the Altos were very rare, and I had to drive an hour
from Berkeley to go use them, but I made a point
of skipping classes and doing whatever it took
to have access to this extraordinary achievement.
I knew that they were consequential.
What I did not understand was scaling.
I did not understand what would happen
when you had 100 million as opposed to 100.
And so since then, I have learned the benefit of scale,
I always look for things which are going to scale
to platforms, right?
So mobile phones, Android, all those things.
The world is enormous, there are many, many people
in the world, people really have needs,
they really will use these platforms,
and you can build big businesses on top of them.
So it's interesting, so when you see a piece
of technology, now you think, what will this technology
look like when it's in the hands of a billion people?
That's right.
So an example would be that market is so competitive now
that if you can't figure out a way for something
to have a million users or a billion users,
it probably is not gonna be successful
because something else will become the general platform
and your idea will become a lost idea
or a specialized service with relatively few users.
So it's a path to generality,
it's a path to general platform use,
it's a path to broad applicability.
Now, there are plenty of good businesses that are tiny,
so luxury goods, for example,
but if you wanna have an impact at scale,
you have to look for things which are of common value,
common pricing, common distribution,
and solve common problems.
They're problems that everyone has.
And by the way, people have lots of problems,
information, medicine, health, education, and so forth,
work on those problems.
Like you said, you're a big fan of the middle class.
Because there's so many of them.
There's so many of them, by definition.
So any product, any thing that has a huge impact
that improves their lives is a great business decision
and it's just good for society.
And there's nothing wrong with starting off in the high end
as long as you have a plan to get to the middle class.
There's nothing wrong with starting with a specialized market
in order to learn and to build and to fund things.
So you start with a luxury market
to build a general purpose market.
But if you define yourself as only a narrow market,
someone else can come along with a general purpose market
that can push you to the corner,
can restrict the scale of operation,
can force you to be a lesser impact than you might be.
So it's very important to think in terms of broad businesses
and broad impact, even if you start
in a little corner somewhere.
So as you look to the 70s, but also in the decades to come
and you saw computers, did you see them as tools
or was there a little element of another entity?
I remember a quote saying AI began
with our dream to create the gods.
Is there a feeling when you wrote that program
that you were creating another entity,
giving life to something?
I wish I could say otherwise,
but I simply found the technology platforms so exciting.
That's what I was focused on.
I think the majority of the people that I've worked with,
and there are a few exceptions, Steve Jobs being an example,
really saw this as a great technological play.
I think relatively few of the technical people
understood the scale of its impact.
So I used NCP, which is a predecessor to TCP/IP.
It just made sense to connect things.
We didn't think of it in terms of the internet
and then companies and then Facebook and then Twitter
and then politics and so forth.
We never did that build.
We didn't have that vision.
And I think most people, it's a rare person
who can see compounding at scale.
Most people can see, if you ask people
to predict the future, they'll say,
they'll give you an answer of six to nine months
or 12 months, because that's about
as far as people can imagine.
But there's an old saying, which actually was attributed
to a professor at MIT a long time ago,
that we overestimate what can be done in one year
and we underestimate what can be done in a decade.
And there's a great deal of evidence
that these core platforms at hardware and software
take a decade, right?
So think about self-driving cars.
Self-driving cars were thought about in the 90s.
There were projects around them.
The first DARPA Grand Challenge was roughly 2004.
So that's roughly 15 years ago.
And today we have self-driving cars operating
in a city in Arizona, right?
It's 15 years and we still have a ways to go
before they're more generally available.
So you've spoken about the importance.
You just talked about predicting into the future.
You've spoken about the importance of thinking five years
ahead and having a plan for those five years.
The way to say it is that almost everybody
has a one-year plan.
Almost no one has a proper five-year plan.
And the key thing to have in the five-year plan
is having a model for what's going to happen
with the underlying platforms.
So here's an example.
Compute: Moore's law, as we know it,
the thing that powered improvements in CPUs
has largely halted in its traditional shrinking mechanism
because the costs have just gotten so high
and it's getting harder and harder.
But there's plenty of algorithmic improvements
and specialized hardware improvements.
So you need to understand the nature of those improvements
and where they'll go in order to understand
how it will change the platform.
In the area of network connectivity,
what are the gains that are going to be possible in wireless?
It looks like there's an enormous expansion
of wireless connectivity at many different bands.
Historically, I've always thought that we were primarily
going to be using fiber.
But now it looks like we're going to be using fiber plus
very powerful high bandwidth short-distance connectivity
to bridge the last mile.
That's an amazing achievement.
If you know a lot of the things that we've done
and if you know that,
then you're going to build your systems differently.
By the way, those networks have different latency properties.
Because they're more symmetric,
the algorithms feel faster for that reason.
And so when you think about whether it's fiber
or just technologies in general,
so there's this Barbara Wootton poem or quote
that I really like.
It's from the champions of the impossible
rather than the slaves of the possible
that evolution draws its creative force.
So in predicting the next five years,
I'd like to talk about the impossible and the possible.
Well, and again, one of the great things about humanity
is that we produce dreamers, right?
We literally have people who have a vision and a dream.
They are, if you will, disagreeable
in the sense that they disagree
with what the sort of zeitgeist is.
They say there is another way,
they have a belief, they have a vision.
If you look at science,
science is always marked by such people
who went against some conventional wisdom,
collected the knowledge of the time
and assembled it in a way
that produced a powerful platform.
And you've been amazingly honest about,
in an inspiring way,
about things you've been wrong about predicting.
And you've obviously been right about a lot of things,
but in this kind of tension,
how do you balance as a company
predicting the next five years,
the impossible, planning for the impossible,
so listening to those crazy dreamers,
letting them run away
and make the impossible real, make it happen,
versus, you know, the way programmers often think,
slowing things down and saying,
well, this is the rational, this is the possible,
the pragmatic. The dreamer versus the pragmatist.
So it's helpful to have a model
which encourages a predictable revenue stream
as well as the ability to do new things.
So in Google's case, we're big enough
and well enough managed and so forth
that we have a pretty good sense of what our revenue will be
for the next year or two, at least for a while.
And so we have enough cash generation
that we can make bets.
And indeed, Google has become Alphabet.
So the corporation is organized around these bets.
And these bets are in areas of fundamental importance
to the world, whether it's artificial intelligence,
medical technology, self-driving cars,
connectivity through balloons, on and on and on.
And there's more coming and more coming.
So one way you could express this
is that the current business is successful enough
that we have the luxury of making bets.
And another one that you could say
is that we have the wisdom of being able to see
that a corporate structure needs to be created
to enhance the likelihood of the success of those bets.
So we essentially turned ourselves
into a conglomerate of bets
and then this underlying corporation, Google,
which is itself innovative.
So in order to pull this off,
you have to have a bunch of belief systems.
And one of them is that you have to have
bottoms up and tops down.
The bottoms up we call 20% time.
And the idea is that people can spend 20% of the time
on whatever they want.
And the top down is that our founders in particular
have a keen eye on technology
and they're reviewing things constantly.
So an example would be they'll hear about an idea
or I'll hear about something and it sounds interesting.
Let's go visit them.
And then let's begin to assemble the pieces
to see if that's possible.
And if you do this long enough,
you get pretty good at predicting what's likely to work.
So that's a beautiful balance that's been struck.
Is this something that applies at all scales?
So in the sense-
It seems to be. Sergey, again, 15 years ago,
came up with a concept called 10% of the budget
should be on things that are unrelated.
It was called 70-20-10.
70% of our time on core business,
20% on adjacent business and 10% on other.
And he proved mathematically,
of course he's a brilliant mathematician,
that you needed that 10% right,
to make the sum of the growth work.
And it turns out he was right.
So getting into the world of artificial intelligence,
you've talked quite extensively and effectively
about the impact in the near term,
the positive impact of artificial intelligence,
whether it's machine, especially machine learning
in medical applications and education
and just making information more accessible, right?
In the AI community, there is a kind of debate.
There's this shroud of uncertainty
as we face this new world
with artificial intelligence in it.
And there are some people, like Elon Musk,
you've disagreed with, at least on the degree of emphasis
he places on the existential threat of AI.
So I've spoken with Stuart Russell, Max Tegmark,
who share Elon Musk's view,
and Yoshua Bengio, Steven Pinker, who do not.
And so there's a lot of very smart people
who are thinking about this stuff, disagreeing,
which is really healthy, of course.
So what do you think is the healthiest way
for the AI community to,
and really for the general public to think about AI
and the concern of the technology being mismanaged
in some kind of way?
So the source of education for the general public
has been killer robot movies.
Right.
And Terminator, et cetera.
And the one thing I can assure you we're not building
are those kinds of solutions.
Furthermore, if they were to show up,
someone would notice and unplug them, right?
So as exciting as those movies are and they're great movies,
were the killer robots to start,
we would find a way to stop them, right?
So I'm not concerned about that.
And much of this has to do
with the timeframe of conversation.
So you can imagine a situation a hundred years from now
when the human brain is fully understood
and the next generation and next generation
of brilliant MIT scientists have figured all this out.
We're gonna have a large number of ethics questions, right?
Around science and thinking and robots and computers
and so forth and so on.
So it depends on the question of the timeframe.
In the next five to 10 years,
we're not facing those questions.
What we're facing in the next five to 10 years
is how do we spread this disruptive technology
as broadly as possible to gain the maximum benefit of it?
The primary benefits should be in healthcare
and in education.
Healthcare because it's obvious.
We're all the same, even though
we somehow believe we're not.
As a medical matter, the fact that we have big data
about our health will save lives,
allow us to, you know, deal with skin cancer
and other cancers, ophthalmological problems.
There's people working on psychological diseases
and so forth using these techniques.
I could go on and on.
The promise of AI in medicine is extraordinary.
There are many, many companies and startups
and funds and solutions
and we will all live much better for that.
The same argument in education.
Can you imagine that for each generation of child
and even adult, you have a tutor educator that's AI-based,
that's not a human but is properly trained,
that helps you get smarter,
helps you address your language difficulties
or your math difficulties or what have you.
Why don't we focus on those two?
The gains, societally, of making humans smarter
and healthier are enormous, right?
And those translate for decades and decades
and we'll all benefit from them.
There are people who are working on AI safety,
which is the issue that you're describing,
and there are conversations in the community
that should there be such problems,
what should the rules be like?
Google, for example, has announced its policies
with respect to AI safety, which I certainly support
and I think most everybody would support
and they make sense, right?
So it helps guide the research
but the killer robots are not arriving this year
and they're not even being built.
And on that line of thinking, you said the time scale.
So in this topic or other topics,
have you found it useful on the business side
or the intellectual side to think beyond five, 10 years,
to think 50 years out?
Has it ever been useful or productive?
In our industry, there are essentially no examples
of 50 year predictions that have been correct.
Let's review AI, right?
AI, which was largely invented here at MIT
and a couple of other universities in 1956, 1957,
1958, the original claims were a decade or two.
And when I was a PhD student, I studied AI a bit
and during my time looking at it, it entered a period
which is known as the AI winter,
which went on for about 30 years,
a whole generation of scientists
and a whole group of people
who didn't make a lot of progress
because the algorithms had not improved
and the computers had not improved.
It took some brilliant mathematicians
starting with a fellow named Geoff Hinton
at Toronto and Montreal
who basically invented this deep learning model
which empowers us today.
Those, the seminal work there was 20 years ago
and in the last 10 years, it's become popularized.
So think about the timeframes for that level of discovery.
It's very hard to predict.
Many people think that we'll be flying around
in the equivalent of flying cars, who knows?
My own view, if I want to go out on a limb
is to say that we know a couple of things
about 50 years from now.
We know that there'll be more people alive.
We know that we'll have to have platforms
that are more sustainable
because the earth is limited in the ways we all know
and that the kind of platforms that are gonna get built
will be consistent with the principles that I've described.
They will be much more empowering of individuals.
They'll be much more sensitive to the ecology
because they have to be, they just have to be.
I also think that humans are gonna be a great deal smarter
and I think they're gonna be a lot smarter
because of the tools that I've discussed with you
and of course, people will live longer.
Life extension is continuing apace.
A baby born today has a reasonable chance
of living to 100, right?
Which is pretty exciting.
That's well past the 21st century,
so we better take care of them.
And you mentioned an interesting statistic
on some very large percentage,
60, 70% of people may live in cities.
Today, more than half the world lives in cities
and one of the great stories of humanity
in the last 20 years has been the rural to urban migration.
This has occurred in the United States.
It's occurred in Europe.
It's occurring in Asia and it's occurring in Africa.
When people move to cities, the cities get more crowded
but believe it or not, their health gets better,
their productivity gets better,
their IQ and educational capabilities improve.
So it's good news that people are moving to cities
but we have to make them livable and safe.
So you, first of all, are one of them, but you've also worked
with some of the greatest leaders in the history of tech.
What insights do you draw from the difference
in leadership styles of yourself?
Steve Jobs, Elon Musk, Larry Page,
now the new CEO, Sundar Pichai, and others
from the, I would say, calm sages to the mad geniuses.
One of the things that I learned as a young executive
is that there's no single formula for leadership.
They try to teach one but that's not how it really works.
There are people who just understand what they need to do
and they need to do it quickly.
Those people are often entrepreneurs.
They just know and they move fast.
There are other people who are systems thinkers
and planners, that's more who I am,
somewhat more conservative, more thorough in execution,
a little bit more risk averse.
There's also people who are sort of slightly insane, right?
In the sense that they are emphatic and charismatic
and they feel it and they drive it and so forth.
There's no single formula to success.
There is one thing that unifies all of the people
that you named, which is very high intelligence, right?
At the end of the day, the thing that characterizes
all of them is that they saw the world quicker, faster,
they processed information faster.
They didn't necessarily make the right decisions
all the time, but they were on top of it.
And the other thing that's interesting
about all those people is they all started young.
So think about Steve Jobs starting Apple
roughly at 18 or 19.
Think about Bill Gates starting at roughly 20, 21.
Mark Zuckerberg, a good example, at 19, 20.
By the time they were 30, they had 10 years,
at 30 years old, 10 years of experience
of dealing with people and products and shipments
and the press and business and so forth.
It's incredible how much experience they had
compared to the rest of us who were busy getting our PhDs.
Yes, exactly.
So we should celebrate these people
because they've just had more life experience, right?
And that helps inform the judgment.
At the end of the day, when you're at the top
of these organizations, all the easy questions
have been dealt with, right?
How should we design the buildings?
Where should we put the colors on our product?
What should the box look like, right?
The problems, that's why it's so interesting
to be in these rooms, the problems that they face, right?
In terms of the way they operate,
the way they deal with their employees,
their customers, their innovation
are profoundly challenging.
Each of the companies
is demonstrably different culturally, right?
They are not, in fact, cut of the same cloth.
They behave differently based on input.
Their internal cultures are different.
Their compensation schemes are different.
Their values are different.
So there's proof that diversity works.
So when faced with a tough decision, in need of advice,
it's been said that the best thing
one can do is to find the best person in the world
who can give that advice
and find a way to be in a room with them,
one-on-one and ask.
So here we are.
And let me ask in a long-winded way, I wrote this down.
In 1998, there were many good search engines,
Lycos, Excite, AltaVista, Infoseek, Ask Jeeves maybe,
Yahoo even.
So Google stepped in and disrupted everything.
They disrupted the nature of search,
the nature of our access to information,
the way we discover new knowledge.
So now it's 2018, actually 20 years later.
There are many good personal AI assistants,
including of course the best from Google.
You've spoken about, in medicine and education,
the impact such an AI assistant could bring.
So we arrive at this question.
So it's a personal one for me,
but I hope my situation represents that of many other,
as we said, dreamers and crazy engineers.
So my whole life,
I've dreamed of creating such an AI assistant.
Every step I've taken has been towards that goal.
Now I'm a research scientist in human centered AI here
at MIT.
So the next step for me, as I sit here
facing my passion, is to do what Larry and Sergey did
in '98, the simple startup.
And so here's my simple question.
Given the low odds of success, the timing and luck required,
the countless other factors that can't be controlled
or predicted, which is all the things
that Larry and Sergey faced,
is there some calculation, some strategy to follow
in this step, or do you simply follow the passion
just because there's no other choice?
I think the people who are in universities
are always trying to study the extraordinarily chaotic nature
of innovation and entrepreneurship.
My answer is that they didn't have that conversation.
They just did it.
They sensed a moment when in the case of Google,
there was all of this data that needed to be organized
and they had a better algorithm.
They had invented a better way.
So today with human centered AI,
which is your area of research,
there must be new approaches.
It's such a big field.
There must be new approaches,
different from what we and others are doing.
There must be startups to fund.
There must be research projects to try.
There must be graduate students to work on new approaches.
Here at MIT, there are people who are looking at learning
from the standpoint of looking at child learning.
How do children learn starting at age one?
And the work is fantastic.
Those approaches are different from the approach
that most people are taking.
Perhaps that's a bet that you should make,
or perhaps there's another one.
But at the end of the day,
the successful entrepreneurs are not as crazy as they sound.
They see an opportunity based on what's happened.
Let's use Uber as an example.
As Travis tells the story,
he and his co-founder were sitting in Paris
and they had this idea because they couldn't get a cab.
And they said, we have smartphones and the rest is history.
So what's the equivalent of that Travis Eiffel Tower,
where is a cab, moment that you, as an entrepreneur,
could take advantage of,
whether it's in human centered AI or something else?
That's the next great startup.
And the psychology of that moment.
So when Sergey and Larry talk about it,
and I've listened to a few interviews,
it's very nonchalant.
Well, here's the very fascinating web data.
And here's an algorithm we have,
we just kind of want to play around with that data.
And it seems like that's a really nice way
to organize this data.
Well, I should say what happened, remember,
is that they were graduate students at Stanford
and they thought this was interesting.
So they built a search engine and they kept it in their room.
And they had to get power from the room next door
because they were using too much power in the room.
So they ran an extension cord over, right?
And then they went and they found a house
and they had the Google world headquarters with five people,
right, to start the company.
And they raised $100,000 from Andy Bechtolsheim,
who was the Sun founder, to do this,
and Dave Cheriton and a few others.
The point is their beginnings were very simple,
but they were based on a powerful insight.
That is a replicable model for any startup.
It has to be a powerful insight, the beginnings are simple,
and there has to be an innovation.
In Larry and Sergey's case, it was PageRank,
which was a brilliant idea,
one of the most cited papers in the world today.
What's the next one?
So you're one of, if I may say,
the richest people in the world.
And yet it seems that money is simply a side effect
of your passions.
And not an inherent goal.
But you're a fascinating person to ask.
So much of our society at the individual level
and at the company level and as nations
is driven by the desire for wealth.
What do you think about this drive?
And what have you learned about,
if I may romanticize the notion,
the meaning of life,
having achieved success on so many dimensions?
There have been many studies of human happiness
and above some threshold,
which is typically relatively low for this conversation,
there's no difference in happiness related to money.
The happiness is correlated with meaning and purpose,
a sense of family, a sense of impact.
So if you organize your life,
assuming you have enough to get around
and have a nice home and so forth,
you'll be far happier if you figure out
what you care about and work on that.
It's often being in service to others.
There's a great deal of evidence that people are happiest
when they're serving others and not themselves.
This goes directly against the sort of
press induced excitement about powerful
and wealthy leaders of one kind or another.
And indeed these are consequential people.
But if you are in a situation where you've been
very fortunate as I have,
you also have to take that as a responsibility
and you have to basically work both to educate others
and give them that opportunity,
but also use that wealth to advance human society.
In my case, I'm particularly interested in using the tools
of artificial intelligence and machine learning
to make society better.
I've mentioned education, I've mentioned inequality
and middle class and things like this,
all of which are a passion of mine.
It doesn't matter what you do,
it matters that you believe in it,
that it's important to you
and that your life will be far more satisfying
if you spend your life doing that.
I think there's no better place to end
than a discussion of the meaning of life.
Well, thank you very much, Lex.