
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.

Transcribed podcasts: 441
Time transcribed: 44d 9h 33m 5s


The following is a conversation with Elon Musk, part two. The second time we spoke on the podcast,
with parallels, if not in quality, then in outfit, to the, objectively speaking, greatest
sequel of all time, Godfather Part II. As many people know, Elon Musk is the leader of Tesla,
SpaceX, Neuralink, and the Boring Company. What may be less known is that he's a world-class
engineer and designer, constantly emphasizing first principles thinking and taking on big
engineering problems that many before him considered impossible. As scientists and engineers,
most of us don't question the way things are done, we simply follow the momentum of the crowd.
But revolutionary ideas that change the world on the small and large scales happen
when you return to the fundamentals and ask, is there a better way?
This conversation focuses on the incredible engineering and innovation
done in brain-computer interfaces at Neuralink. This work promises to help treat
neurobiological diseases, to help us further understand the connection between the individual
neuron and the high-level function of the human brain, and finally, to one day expand the capacity
of the brain through two-way communication with computational devices, the internet,
and artificial intelligence systems. This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, Apple Podcasts, Spotify, support it on Patreon,
or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now,
as an anonymous YouTube commenter referred to our previous conversation as the quote,
historical first video of two robots conversing without supervision, here's the second time,
the second conversation with Elon Musk.
Let's start with an easy question about consciousness. In your view, is consciousness
something that's unique to humans, or is it something that permeates all matter, almost
like a fundamental force of physics? I don't think consciousness permeates all matter.
Panpsychists believe that. Yeah. There's a philosophical...
How would you tell?
That's true. That's a good point.
I believe in the scientific method. The scientific method is:
if you cannot test the hypothesis, then you cannot reach a meaningful conclusion that it is true.
Do you think consciousness, understanding consciousness, is within the reach of science
of the scientific method?
We can dramatically improve our understanding of consciousness.
I would be hard-pressed to say that we understand anything with complete accuracy,
but can we dramatically improve our understanding of consciousness? I believe the answer is yes.
Does an AI system, in your view, have to have consciousness in order to achieve human level
or superhuman level intelligence? Does it need to have some of these human qualities
like consciousness, maybe a body, maybe a fear of mortality, capacity to love,
those kinds of silly human things?
There's a scientific method, which I very much believe in, where something is
true to the degree that it is testably so.
Otherwise, you're really just talking about preferences or untestable beliefs or that kind
of thing. It ends up being somewhat of a semantic question where we are conflating
a lot of things with the word intelligence. If we parse them out and say,
are we headed towards a future where an AI will be able to outthink us in every way,
then the answer is unequivocally yes.
So in order for an AI system to outthink us in every way, does it also need to have
the capacity for consciousness, self-awareness, and understanding?
It will be self-aware, yes. That's different from consciousness.
I mean, to me, in terms of what consciousness feels like, it feels like consciousness is in
a different dimension, but this could be just an illusion.
If you damage your brain in some way physically, you damage your consciousness,
which implies that consciousness is a physical phenomenon, in my view.
The thing that I think is really quite likely is that digital intelligence will outthink us
in every way, and it will also be able to simulate what we consider consciousness,
to a degree that you would not be able to tell the difference.
And from the aspect of the scientific method, it might as well be consciousness if we can
simulate it perfectly. If you can't tell the difference, and this is sort of the Turing test,
but think of a more sort of advanced version of the Turing test.
If you're talking to digital superintelligence and can't tell if that is a computer or a human,
like, let's say, you're just having a conversation over a phone or a video conference or something
where you think you're talking to a person. It looks like a person, makes all of the right
inflections and movements and all the small subtleties that constitute a human,
and talks like a human, makes mistakes like a human. And you literally just can't tell:
are you video conferencing with a person or an AI?
Might as well.
Might as well be human.
So on a darker topic, you've expressed serious concern about existential
threats of AI. It's perhaps one of the greatest challenges our civilization faces.
But since I would say we're kind of an optimistic descendants of apes,
perhaps we can find several paths of escaping the harm of AI.
So if I can give you three options, maybe you can comment which do you think is the most promising.
So one is scaling up efforts on AI safety and beneficial AI research in hope of finding an
algorithmic or maybe a policy solution. Two is becoming a multi-planetary species as quickly as
possible. And three is merging with AI and riding the wave of that increasing intelligence as it
continuously improves. What do you think is most promising, most interesting as a civilization
that we should invest in?
I think there's a tremendous amount of investment going on in AI.
Where there's a lack of investment is in AI safety. And there should be, in my view, a government
agency that oversees anything related to AI to confirm that it does not represent a public safety
risk. Just as there's the FDA for food and drug safety, NHTSA for automotive safety, and
the FAA for aircraft safety, we've generally come to the conclusion that it is important
to have a government referee that is serving the public interest in ensuring that things are safe when there's
potential danger to the public. I would argue that AI is unequivocally something that has
potential to be dangerous to the public and therefore should have a regulatory agency just as
other things that are dangerous to the public have a regulatory agency. But let me tell you
the problem with this is that the government moves very slowly. Usually the way
a regulatory agency comes into being is that something terrible happens. There's a huge public
outcry. And years after that, there's a regulatory agency or a rule put in place. Take something
like seatbelts. It was known for a decade or more that seatbelts would have a massive impact on
safety and save so many lives and prevent so many serious injuries. And the car industry fought the
requirement to put seatbelts in tooth and nail. That's crazy. And hundreds of thousands of people
probably died because of that. And they said people wouldn't buy cars if they had seatbelts,
which is obviously absurd. Or look at the tobacco industry and how long they fought
anything about smoking. That's part of why I helped make that movie, Thank You for Smoking.
You can sort of see just how pernicious it can be when you have these companies effectively
achieve regulatory capture of government. That's bad. People in the AI community refer to the advent
of digital superintelligence as a singularity. That is not to say that it is good or bad, but
that it is very difficult to predict what will happen after that point. And that there's some
probability it will be bad, some probability it will be good. But I want to affect that probability
and have it be more good than bad. Well, let me on the merger with AI question and the incredible
work that's being done at Neuralink. There's a lot of fascinating innovation here across
different disciplines going on. So the flexible wires, the robotic sewing machine, the responsive
brain movement, everything around ensuring safety and so on. So we currently understand
very little about the human brain. Do you also hope that the work at Neuralink will help us
understand more about the human mind, about the brain? Yeah, I think the work at Neuralink will
definitely shed a lot of light on how the brain and the mind work. Right now, just the
data we have regarding how the brain works is very limited. We've got fMRI, which is
kind of like putting a stethoscope on the outside of a factory wall and then putting it like all
over the factory wall and you can sort of hear the sounds, but you don't know what the machines
are doing really. It's hard. You can infer a few things, but it's a very broad brushstroke.
In order to really know what's going on in the brain, you really have to have
high precision sensors and then you want to have stimulus and response. If you trigger a neuron,
how do you feel? What do you see? How does it change your perception of the world?
You're speaking of physically just getting close to the brain; being able to measure signals from
the brain will sort of open the door into the factory. Yes, exactly. Being able to
have high precision sensors that tell you what individual neurons are doing and then being
able to trigger a neuron and see what the response is in the brain. You can see the consequences of
if you fire this neuron, what happens? How do you feel? What does it change? It'll be really
profound to have this in people because people can articulate their change. If there's a change in
mood or if they can see better or hear better or be able to form sentences better or worse,
their memories are jogged or that kind of thing. On the human side, there's this incredible general
malleability, plasticity of the human brain. The human brain adapts, adjusts, and so on.
It's not that plastic, to be totally frank. There's a firm structure, but there's
some plasticity and the open question is sort of if I could ask a broad question is how much
that plasticity can be utilized. On the human side, there's some plasticity in the human brain.
On the machine side, we have neural networks, machine learning, artificial intelligence,
it's able to adjust and figure out signals. There's a mysterious language that we don't
perfectly understand that's within the human brain. Then we're trying to understand that
language to communicate both directions. The brain is adjusting a little bit. We don't know
how much. The machine is adjusting. Where do you see, as they try to sort of reach together,
almost like with an alien species, try to find a communication protocol that works,
where do you see the biggest benefit arising from: the machine side or the human side?
Do you see both of them working together? I think the machine side is far more malleable
than the biological side, by a huge amount. It will be the machine that adapts to the brain.
That's the only thing that's possible. The brain can't adapt that well to the machine.
You can have neurons start to regard an electrode as another neuron, because to a neuron,
a pulse is just a pulse; something else is pulsing. So there is that elasticity in the interface,
which we believe is something that can happen. But the vast majority of the malleability will
have to be on the machine side. It's interesting when you look at that synaptic
plasticity at the interface side, there might be an emergent plasticity, because it's a whole
other extension of the brain. We might have to redefine what it means to be malleable for the
brain. Maybe the brain is able to adjust to external interfaces. There will be some adjustment to
the brain, because there's going to be something reading and stimulating the brain, and it will adjust
to that thing. But the vast majority of the adjustment will be on the machine side.
It has to be that way; otherwise, it will not work. Ultimately, we currently operate on two layers.
We have a limbic, primitive brain layer, which is where all of our impulses are coming
from. It's like a monkey brain with a computer stuck on it. That's the human brain. A lot of
our impulses and everything are driven by the monkey brain. The computer, the cortex,
is constantly trying to make the monkey brain happy. It's not the cortex that's
steering the monkey brain. It's the monkey brain steering the cortex.
The cortex is the part that tells the story of the whole thing. We convince ourselves it's
more interesting than just the monkey brain. The cortex is what we call human intelligence.
That's the advanced computer relative to other creatures. Other creatures do not have either
really, they don't have the computer, or they have a very weak computer relative to humans.
It sort of seems like surely the really smart thing should control the dumb thing,
but actually the dumb thing controls the smart thing. Do you think some of the same kind of machine
learning methods, or whether that's natural language processing applications, are going
to be applied for the communication between the machine and the brain to learn how to do
certain things like movement of the body, how to process visual stimuli, and so on? Do you see
the value of using machine learning to understand the language of the two-way communication with
the brain? Yeah, absolutely. We're a neural net. AI is basically a neural net, a digital neural
net. It will interface with the biological neural net, and hopefully bring us along for the ride.
But the vast majority of our intelligence will be digital. Think of the difference
in intelligence between your cortex and your limbic system; it is gigantic. Your limbic system
really has no comprehension of what the hell the cortex is doing. It's just literally hungry,
you know, or tired, or angry, or sexy, or something, you know. And then it communicates
that impulse to the cortex and tells the cortex to go satisfy that. Then a massive amount
of thinking, a truly stupendous amount of thinking, has gone into
sex without purpose, without procreation, which is actually quite a silly action in the absence
of procreation. It's a bit silly. So why are you doing it? Because it makes the limbic system
happy. That's why. That's why. But it's pretty absurd, really. Well, the whole of existence
is pretty absurd in some kind of sense. Yeah. But I mean, this is a lot of computation has gone into
how can I do more of that, with procreation not even being a factor. This is, I think,
a very important area of research by NSFW, an agency that should receive a lot of funding,
especially after this conversation. If I were to propose the formation of a new agency...
Oh, boy. What is the most exciting or some of the most exciting things that you see
in the future impact of Neuralink? Both on the science, the engineering and societal broad impact.
So Neuralink, I think, at first will solve a lot of brain-related diseases,
everything from, like, autism, schizophrenia, memory loss. Everyone experiences memory loss at
certain points in age; parents can't remember their kids' names, that kind of thing.
So there's a tremendous amount of good that Neuralink can do in solving critical
damage to brain or the spinal cord. There's a lot that can be done to improve quality of life
of individuals. And that will be those will be steps along the way. And then ultimately,
it's intended to address the the risk, the existential risk associated with
digital superintelligence. Like, we will not be able to be smarter than a digital supercomputer.
So therefore, if you cannot beat them, join them. At least we'll have that option.
So you have hope that Neuralink will be able to be a kind of connection to allow us
to merge, to ride the wave of the improving AI systems. I think the chance is above zero percent.
So it's non-zero. Yes, there's a chance. Have you seen Dumb and Dumber?
Yes. So I'm saying there's a chance. He's saying one in a billion or one in a million,
whatever it was in Dumb and Dumber. You know, it went from maybe one in a million to, improving,
maybe it'll be one in a thousand, then one in a hundred, then one in ten.
It depends on the rate of improvement of Neuralink and how fast we're able
to make progress, you know. Well, I've talked to a few folks here that are quite brilliant
engineers, so I'm excited. Yeah, I think it's like fundamentally good, you know,
you're giving somebody back full motor control after they've had a spinal cord injury.
You know, restoring brain functionality after a stroke, solving debilitating,
genetically oriented brain diseases. These are all incredibly great, I think. And in order to do these,
you have to be able to interface with neurons at a detailed level. You need to be able to fire
the right neurons, read the right neurons, and then effectively you can create a circuit,
replace what's broken with silicon and essentially fill in the missing functionality.
And then over time, we can develop a tertiary layer. So if, like, the limbic system is
the primary layer, then the cortex is like the second layer. And as I said, you know,
the cortex is vastly more intelligent than the limbic system. But people generally like the
fact that they have a limbic system and a cortex. I haven't met anyone who wants to
delete either one of them. They're like, okay, I'll keep them both. That's cool.
The limbic system is kind of fun. That's where the fun is. Absolutely. And then
people generally don't want to lose the cortex either. Right. So they like having the cortex
and the limbic system. Yeah. And then there's a tertiary layer, which will be digital super
intelligence. And I think there's room for optimism, given that the cortex is very
intelligent and the limbic system is not, and yet they work together well. Perhaps there can be a tertiary
layer where digital superintelligence lies. And that will be vastly more intelligent
than the cortex, but still coexist peacefully and in a benign manner with the cortex and limbic
system. That's a super exciting future, both on the low level engineering that I saw as being done
here and the actual possibility in the next few decades. It's important that Neuralink solve
this problem sooner rather than later, because the point at which we have digital superintelligence,
that's when we pass the singularity and things become just very uncertain. It doesn't mean that
they're necessarily bad or good, but the point at which we pass singularity, things become extremely
unstable. So we want to have a human brain interface before the singularity or at least not long after
it to minimize existential risk for humanity and consciousness as we know it. But there's a
lot of fascinating actual engineering low level problems here at Neuralink that are quite exciting.
The problems that we face at Neuralink are material science, electrical engineering, software,
mechanical engineering, micro fabrication. It's a bunch of engineering disciplines essentially.
That's what it comes down to. You have to have a tiny electrode. It's so small it doesn't hurt
neurons, but it's got to last for as long as a person. So it's got to last for decades.
And then you've got to take that signal. You've got to process that signal locally at low power.
So we need a lot of chip design engineers, because we've got to do the signal processing,
and do so in a very power-efficient way, so that we don't heat your brain up,
because the brain's very heat-sensitive. And then we've got to take those signals,
we've got to do something with them, and then we've got to stimulate back,
so you can have bidirectional communication. So if somebody's good at material science, software,
mechanical engineering, electrical engineering, chip design, micro fabrication,
those are the things we need to work on. We need to be good at material science so that we
can have tiny electrodes that last a long time. And the material science
problem is a tough one, because you're trying to read and stimulate electrically in an electrically
active area. Your brain is very electrically active and electrochemically active. So how do you have
say a coating on the electrode that doesn't dissolve over time and is safe in the brain?
This is a very hard problem. And then how do you collect those signals
in a way that is most efficient, because you really just have very tiny amounts of power
to process those signals. And then we need to automate the whole thing so it's like
LASIK. If this were done by neurosurgeons, there's no way it could scale to a large number of people.
And it needs to scale to a large number of people, because I think ultimately we want the future
to be determined by a large number of humans. Do you think that this has a chance to
revolutionize surgery, period? So neurosurgery and surgery all across? Yeah, for sure. It's
got to be like LASIK. If LASIK had to be done by hand, by a person, that wouldn't be great.
Yeah. It's done by a robot, and the ophthalmologist kind of just needs to make sure
your head's in the right position, and then they just press a button and go.
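The on-device processing constraint described above (read tiny signals, process them locally within a very small power budget, and emit only what matters) can be sketched roughly. This is a toy illustration only, not Neuralink's actual pipeline; the threshold-crossing approach, the function name, and all numbers below are invented for the sketch.

```python
# Toy sketch of local neural signal processing: instead of streaming raw
# voltage samples (power-hungry), detect spike events with a simple
# threshold and emit only their indices. All values here are made up.

def detect_spikes(samples, threshold=5.0, refractory=3):
    """Return indices where the signal crosses `threshold`,
    skipping `refractory` samples after each detection so that
    one spike is not reported multiple times."""
    events = []
    i = 0
    while i < len(samples):
        if samples[i] >= threshold:
            events.append(i)
            i += refractory  # ignore the tail of the same spike
        else:
            i += 1
    return events

# Simulated trace: mostly baseline noise, with two spikes at indices 4 and 11.
trace = [0.1, 0.3, -0.2, 0.0, 6.2, 5.8, 0.4, 0.1, -0.1, 0.2, 0.3, 7.1, 6.0, 0.2]
print(detect_spikes(trace))  # -> [4, 11]
```

Compressing raw samples into sparse events like this is one common way to keep the compute (and therefore the heat dissipated near tissue) low, which is the constraint the conversation emphasizes.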
So Smart Summon, and soon Autopark, takes on the full beautiful mess of parking lots and their
human-to-human nonverbal communication. I think it has actually the potential to have a profound
impact in changing how our civilization looks at AI and robotics because this is the first time
human beings, people that don't own a Tesla may have never seen a Tesla or heard about a Tesla
get to watch hundreds of thousands of cars without a driver. Do you see it this way almost like an
education tool for the world about AI? Do you feel the burden of that, the excitement of that,
or do you just think it's a smart parking feature? I do think you are getting at something
important, which is that most people have never really seen a robot. And what is a car that is autonomous?
It's a four-wheeled robot. Yeah. It communicates a certain sort of message, with everything from
safety to the possibility of what AI could bring to its current limitations, its current challenges,
it's what's possible. Do you feel the burden of that almost like a communicator, educator to the
world about AI? We're just really trying to make people's lives easier with autonomy.
But now that you mention it, I think it will be an eye-opener to people about robotics because
most people have never seen a robot, and there are hundreds of thousands of Teslas. It
won't be long before there's a million of them that have autonomous capability
and drive without a person in it. You can see the evolution of the car's personality
and thinking with each iteration of autopilot. You can see it's uncertain about this, or
now it's more certain. Now it's moving in a slightly different way. I can tell immediately
if a car is on Tesla autopilot because it's got just little nuances of movement. It just moves in
a slightly different way. Cars on Tesla autopilot, for example, on the highway are far more precise
about being in the center of the lane than a person. If you drive down the highway and look
at where the human-driven cars are within their lane, they're like bumper cars. They're like
moving all over the place. The car on autopilot, dead center. Yeah, so the incredible work that's
going into that neural network is learning fast. Autonomy is still very, very hard. We don't
actually know how hard it is fully, of course. You look at most problems you tackle. This one
included with an exponential lens, but even with an exponential improvement, things can take longer
than expected sometimes. So where does Tesla currently stand on its quest for full autonomy?
What's your sense? When can we see successful deployment of full autonomy?
Well, on the highway already, the probability of intervention is extremely low. So for highway
autonomy, with the latest release especially, the probability of needing to intervene
is really quite low. In fact, I'd say for stop-and-go traffic, it's far safer than a person
right now. In stop-and-go, the probability of an injury or an impact is much, much lower for
autopilot than a person. And then with navigating autopilot, you can change lanes, take highway
interchanges, and then we're coming at it from the other direction, which is low-speed full autonomy.
And in a way, this is like, how does a person learn to drive? You learn to drive in the parking lot.
The first time you learned to drive probably wasn't jumping on Market Street in San Francisco. That'd
be crazy. You learn to drive in the parking lot, get things right at low speed, and then the missing
piece that we're working on is traffic lights and stop streets. Stop streets, I would say,
actually, are also relatively easy, because you kind of know where the stop street is,
worst case, from geocoding, and then you use visualization to see where the line is and stop at the line
to eliminate the GPS error. So actually, I'd say complex traffic lights and
very winding roads are the two things that need to get solved.
What's harder, perception or control for these problems? So being able to perfectly perceive
everything or figuring out a plan once you perceive everything, how to interact with all the agents
in the environment, in your sense, from a learning perspective, is perception or action harder
in that giant, beautiful, multi-task learning neural network?
The hardest thing is having an accurate representation of the physical objects in vector space.
So taking the visual input, primarily visual input, some sonar and radar, and then creating an
accurate vector space representation of the objects around you. Once you have an accurate
vector space representation, the planning and control is relatively easier. That is to say, relatively
easy. Basically, once you have an accurate vector space representation, then it's kind of like
a video game. Cars in Grand Theft Auto or something work pretty well. They drive down the road,
they don't crash, pretty much, unless you crash into them. That's because they've got an accurate
vector space representation of where the cars are, and then they're rendering that as the output.
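The vector-space idea discussed here can be illustrated with a toy sketch: once perception has reduced raw camera, radar, and sonar input to a compact set of object positions and speeds, planning can operate on that state much as a game engine does. Everything below (the class, the function, the gap threshold) is invented for illustration and is not Tesla's actual stack.

```python
# Toy "vector space" planner: perception is assumed to have already
# produced tracked objects (position along the lane, speed); planning
# then reads that compact state instead of raw pixels.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # meters ahead of the ego vehicle along the lane
    v: float   # object speed in m/s

def plan_speed(ego_speed, objects, min_gap=30.0):
    """Pick a target speed from the vector-space scene:
    match the lead car's speed if it is closer than `min_gap`,
    otherwise keep cruising at the current speed."""
    ahead = [o for o in objects if o.x > 0]
    if not ahead:
        return ego_speed
    lead = min(ahead, key=lambda o: o.x)   # nearest object ahead
    if lead.x < min_gap:
        return min(ego_speed, lead.v)      # slow to the lead car
    return ego_speed

scene = [TrackedObject(x=20.0, v=12.0), TrackedObject(x=80.0, v=25.0)]
print(plan_speed(ego_speed=28.0, objects=scene))  # -> 12.0
```

The point the sketch makes is the one in the conversation: the ten or so lines of planning logic are simple precisely because the hard work, turning sensor input into `TrackedObject`s, is assumed done.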
Do you have a sense, high level, that Tesla's on track to achieve full autonomy,
so on the highway? Yeah, absolutely.
And still no driver state, driver sensing.
And we have driver sensing with torque on the wheel.
That's right. Yeah.
By the way, just a quick comment on karaoke. Most people think it's fun, but I also think it is a
driving feature I've been wanting for a long time. Singing in the car is really good for attention
management and vigilance management. That's great. Tesla karaoke is great. It's one of the most
fun features of the car. Do you think of the connection between fun and safety sometimes?
Yeah, you can do both at the same time. That's great.
I just met with Ann Druyan, the wife of Carl Sagan and a director of Cosmos.
I'm generally a big fan of Carl Sagan. He's super cool and had a great way of putting things.
All of our consciousness, all civilization, everything we've ever known and done is on
this tiny blue dot. People also get too trapped in what are, like, squabbles
amongst humans, and don't think of the big picture. They take civilization and our continued
existence for granted. We shouldn't do that. Look at the history of civilizations.
They rise and they fall. And now civilization is globalized,
and so civilization, I think, now rises and falls together. There's no geographic
isolation. This is a big risk. Things don't always go up. That's an important
lesson of history. In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which is
the spacecraft that has traveled farther than anything human-made into space,
turned around to take a picture of Earth from 3.7 billion miles away. In that picture,
the Earth takes up less than a single pixel, appearing as a tiny blue dot,
a pale blue dot, as Carl Sagan called it. So he spoke about this
dot of ours in 1994. And if you could humor me, I was wondering if in the last two minutes,
you could read the words that he wrote describing this pale blue dot.
Sure. Yes, it's funny, the universe appears to be 13.8 billion years old.
Earth is like four and a half billion years old.
You know, in another half billion years or so, the sun will expand and probably evaporate the oceans
and make life impossible on Earth. Which means that if it had taken consciousness 10 percent
longer to evolve, it would never have evolved at all. Just 10 percent longer.
And I wonder how many dead one-planet civilizations are out there in the cosmos
that never made it to the other planet and ultimately extinguished themselves or were
destroyed by external factors. Probably a few. It's only just possible to travel to Mars,
just barely. If g were 10 percent higher, it wouldn't work, really.
If g were 10 percent lower, it would be easy.
Like you can go single stage from the surface of Mars all the way to the surface of the Earth,
because Mars is 37 percent Earth's gravity.
We need a giant booster to get off Earth.
Channeling Carl Sagan.
Look again at that dot. That's here. That's home. That's us. On it, everyone you love,
everyone you know, everyone you've ever heard of, every human being who ever was, lived out their
lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies,
and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer
of civilization, every king and peasant, every young couple in love, every mother and father,
every hopeful child, inventor and explorer, every teacher of morals, every corrupt politician,
every superstar, every supreme leader, every saint and sinner in the history of our species
lived there, on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the
great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help
will come from elsewhere to save us from ourselves. The Earth is the only world known so far to
harbor life. There is nowhere else, at least in the near future, to which our species could migrate.
This is not true. This is false. Mars. And I think Carl Sagan would agree with that. He couldn't
even imagine it at that time. So thank you for making the world dream and thank you for talking
to me. I really appreciate it. Thank you.