Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.

The following is a conversation with Elon Musk,
his fourth time on this, the Lex Fridman Podcast.
Woot, woot, woot, woot, woot, woot, woot, woot, woot.
Ha ha ha ha.
Woot, woot, woot, woot, woot, woot, woot, woot, woot, woot.
I thought you were gonna finish it.
Woot, woot, woot, woot, woot, woot, woot.
It's one of the greatest themes in all film history.
Woot, woot, woot, woot, woot, woot, woot.
Yeah, that's great.
So I was just thinking about the Roman Empire,
as one does.
Ha ha ha ha.
There's that whole meme where all guys are thinking
about the Roman Empire at least once a day.
And half the population is confused
whether it's true or not.
But more seriously, thinking about the wars going on
in the world today, and as you know,
war and military conquest has been a big part
of Roman society and culture.
And it, I think, has been a big part of most empires
and dynasties throughout human history, so.
Yeah, they usually came as a result of conquest.
I mean, there's some like the Austro-Hungarian Empire
where there was just a lot of sort of clever marriages.
But fundamentally, there's an engine of conquest.
They celebrate excellence in warfare.
Many of the leaders were excellent generals,
that kind of thing.
So a big picture question, Grok approved.
I asked, and it said this is a good question to ask.
You tested Grok approved?
Ha ha ha, at least on fun mode.
To what degree do you think war is part of human nature
versus a consequence of how human societies are structured?
I ask this as you have somehow controversially
been a proponent of peace.
I'm generally a proponent of peace.
I mean, ignorance is perhaps, in my view,
the real enemy to be countered.
That's the real hard part, not fighting other humans.
But all creatures fight.
I mean, the jungle is like, you look at the,
people think of nature as perhaps
some sort of peaceful thing, but in fact, it is not.
There's some quite funny Werner Herzog thing
where he's in the jungle,
saying that it's basically just murder and death
in every direction.
I mean, the plants and animals in the jungle
are constantly trying to kill and eat each other
every single day, every minute.
So it's not like we're unusual in that respect.
There's a relevant question here,
whether with greater intelligence
comes greater control over these base instincts
for violence.
Yes, we have much more of an ability to control our,
limbic instinct for violence than, say, a chimpanzee.
And in fact, if one looks at, say, a chimpanzee society,
it is not friendly.
I mean, the bonobos are an exception.
But chimpanzee society is full of violence,
and it's quite horrific, frankly.
That's our limbic system in action.
Like, you don't want to be on the wrong side of a chimpanzee.
It'll eat your face off and tear your nuts off.
Basically, there's no limits or ethics;
there's not even a concept of a just war.
There's no just war in chimpanzee societies.
It's war and dominance by any means necessary.
Yeah, chimpanzee society is like a primitive version
of human society.
They're not like peace-loving, basically, at all.
There's extreme violence.
And then once in a while, somebody who's watched too many
Disney movies decides to raise a chimpanzee as a pet.
And then that eats their face or rips their nuts off
or chews their fingers off, that kind of thing.
It's happened several times.
Ripping your nuts off is an interesting strategy
for interaction.
It's happened to people.
It's unfortunate.
That's, I guess, one way to ensure that the other chimp
doesn't contribute to the gene pool.
Well, from a martial arts perspective,
it's a fascinating strategy.
The nut ripper.
I wonder which of the martial arts teaches that.
I think it's safe to say if somebody's got your nuts
in their hands and has the option of ripping them off,
you will be amenable to whatever they want.
Yeah.
So like I said, somehow controversially,
you've been a proponent of peace on Twitter, on X.
So let me ask you about the wars going on today
and to see what the path to peace could be.
How do you hope the current war in Israel and Gaza
comes to an end?
What path do you see that can minimize human suffering
in the long term in that part of the world?
Well, I think that part of the world is definitely,
like if you look up "there is no easy answer"
in the dictionary, it'll be like the picture
of the Middle East, and Israel especially.
So there is no easy answer.
So, this is strictly my opinion,
but it is that the goal of Hamas was to provoke
an overreaction from Israel.
They obviously did not expect to have a military victory,
but they really wanted to commit the worst atrocities
that they could in order to provoke
the most aggressive response possible from Israel.
And then leverage that aggressive response
to rally Muslims worldwide for the cause of Gaza
and Palestine, which they have succeeded in doing.
So the counterintuitive thing here,
I think that the thing that I think should be done,
even though it is very difficult,
is that I would recommend that Israel engage
in the most conspicuous acts of kindness possible,
every part, everything.
That is the actual thing that would thwart
the goal of Hamas.
So in some sense, to the degree that makes sense
in geopolitics, turn the other cheek, implemented.
It's not exactly turn the other cheek,
because I do think that there's,
you know, I think it is appropriate for Israel
to find the Hamas members and either kill them
or incarcerate them.
Like, something has to be done,
because they're just gonna keep coming otherwise.
But in addition to that, they need to do whatever they can.
Do whatever they can.
There's some talk of establishing, for example,
a mobile hospital, I'd recommend doing that.
Just making sure that there's food, water,
medical necessities, and just be over the top about it
and be very transparent, so that people can't claim
it's a trick, like just put a webcam on the thing.
Or 24-7.
Deploy acts of kindness.
Yeah, conspicuous acts of kindness,
that are unequivocal, meaning they can't be somehow,
because Hamas will then, their response will be,
oh, it's a trick.
Therefore, you have to counter how it is not a trick.
This ultimately fights the broader force of hatred
in the region.
Yes, and I'm not sure who said it.
It's an apocryphal saying,
but an eye for an eye makes everyone blind.
Now, that neck of the woods, they really believe
in the whole eye for an eye thing.
If you're not gonna just outright commit genocide,
like against an entire people, which obviously
would not be acceptable, really shouldn't be acceptable
to anyone, then you're gonna leave, basically,
a lot of people alive who subsequently hate Israel.
So, really, the question is, for every Hamas member
that you kill, how many did you create?
And if you create more than you killed,
you've not succeeded.
That's the real situation there.
And it's safe to say that if,
you know, if you kill somebody's child in Gaza,
you've made at least a few Hamas members who will die
just to kill an Israeli.
That's the situation.
So, but I mean, this is one of the most contentious
subjects one could possibly discuss.
But I think if the goal, ultimately, is some sort
of long-term peace, one has to look at this
from the standpoint of, over time, are there more
or fewer terrorists being created?
Let me just linger on war.
Yeah, well, war, it's safe to say war's existed
and always will exist.
Always will exist.
Always has existed and always will exist.
I hope not.
You think it always will?
There will always be war.
It's a question of just how much war.
And, you know, there's sort of the scope and scale of war.
But to imagine that there would not be any war
in the future, I think, would be a very unlikely outcome.
Yeah, you talked about the culture series.
There's war even there.
Yes, there's a giant war.
The first book starts off with a gigantic,
galactic war where trillions die, trillions.
But it still nevertheless protects
these pockets of flourishing.
Somehow you can have galactic war
and still have pockets of flourishing.
Yeah, I mean, I guess if we are able to one day
expand to, you know, fill the galaxy or whatever,
there will be a galactic war at some point.
Ah, the scale.
I mean, the scale of war's been increasing
and increasing and increasing.
It's like a race between the scale of suffering
and the scale of flourishing.
Yes.
A lot of people seem to be using this tragedy
to beat the drums of war
and feed the military industrial complex.
Do you worry about this?
The people who are rooting for escalation
and how can it be stopped?
One of the things that does concern me
is that there are very few people alive today
who actually viscerally understand the horrors of war,
at least in the US.
I mean, obviously there are people on the front lines
in Ukraine and Russia who understand
just how terrible war is.
But how many people in the West understand it?
You know, my grandfather was in World War II.
He was severely traumatized.
I mean, he was there for almost six years
in Eastern North Africa and Italy.
All his friends were killed in front of him.
And he would have died too,
except they randomly gave some,
I guess, IQ test or something.
And he scored very high.
Now, he was not an officer.
He was, I think, a corporal or a sergeant
or something like that,
because he didn't finish high school,
because he had to drop out of high school
because his dad died and he had to work
to support his siblings.
So because he didn't graduate high school,
he was not eligible for the officer corps.
So, you know, he kind of got put
into the cannon fodder category, basically.
But then, randomly, they gave him this test.
He was transferred to British Intelligence in London.
That's where he met my grandmother.
But he had PTSD next level, like next level.
I mean, just didn't talk, just didn't talk.
And if you tried talking to him,
he'd just tell you to shut up.
And he won a bunch of medals,
never bragged about it once, not even hinted, nothing.
Found out about it because his military records were online.
That's how I know.
So, he would say, no way in hell
do you want to do that again.
But how many people, now he obviously,
now he died 20 years ago, or longer actually, 30 years ago.
How many people are alive that remember World War II?
Not many.
And the same, perhaps, applies to the threat of nuclear war.
Yeah, I mean, there are enough nuclear bombs
pointed at the United States to make the rubble,
the radioactive rubble, bounce many times.
There's two major wars going on right now.
So, you talked about the threat of AGI quite a bit.
But now, as we sit here,
with the intensity of conflict going on,
do you worry about nuclear war?
I think we shouldn't discount
the possibility of nuclear war.
It is a civilizational threat.
Right now, I could be wrong,
but I think the current probability
of nuclear war is quite low.
But there are a lot of nukes pointed at us.
So, and we have a lot of nukes pointed at other people.
They're still there.
Nobody's put their guns away.
The missiles are still in the silos.
The leaders don't seem to be the ones
with the nukes talking to each other.
No.
There are wars which are tragic and difficult
on a local basis,
and then there are wars which are civilization ending,
or have that potential.
Obviously, global thermonuclear warfare
has high potential to end civilization,
perhaps permanently,
but certainly severely wound it, and perhaps
set back human progress,
you know, to the Stone Age or something.
I don't know.
Pretty bad.
Probably scientists and engineers
won't be super popular after that as well.
Like, you got us into this mess.
So generally, I think we obviously want to prioritize
civilizational risks over things that are
painful and tragic on a local level,
but not civilizational.
How do you hope the war in Ukraine comes to an end,
and what's the path, once again,
to minimizing human suffering there?
Well, I think that what is likely to happen,
which is really pretty much the way it is,
is that something very close to the current
lines will be
where a ceasefire or truce happens,
but you know, you just have a situation right now
where whoever goes on the offensive
will suffer casualties at several times
the rate of whoever's on the defense,
because you've got defense in depth.
You've got minefields, trenches, anti-tank defenses.
Nobody has air superiority,
because the anti-aircraft missiles
are really far better than the aircraft.
Like, there are far more of them.
And so neither side has air superiority.
Tanks are basically death traps,
just slow moving, and they're not immune to anti-tank weapons
so you really just have long-range artillery
and infantry trenches.
It's World War I all over again,
but with drones, you know,
drones thrown in there.
Which makes the long-range artillery
just that much more accurate and better,
and so more efficient at murdering people on both sides.
Yeah, so it's, whoever is,
you don't wanna be trying to advance from either side,
because the probability of dying is incredibly high.
So in order to overcome
defense in depth trenches and minefields,
you really need a significant local superiority in numbers.
Ideally, combined arms, where you do a fast attack
with aircraft, a concentrated number of tanks,
and a lot of people.
That's the only way you're gonna punch through a line.
And then you're gonna punch through
and then not have reinforcements
just kick you right out again.
I mean, I really recommend people read
about World War I warfare in detail.
It's rough.
I mean, the sheer number of people that died there
was mind-boggling.
And it's almost impossible to imagine the end of it
that doesn't look almost exactly like the beginning,
in terms of what land belongs to who, and so on.
But on the other side of a lot of human suffering,
death and destruction of infrastructure.
Yes, the thing that,
the reason I proposed some sort of truce or peace
a year ago was because I predicted
pretty much exactly what would happen,
which is a lot of people dying
for basically almost no changes in land.
And the loss of the flower of Ukrainian and Russian youth,
and we should have some sympathy for the Russian boys
as well as the Ukrainian boys,
because the Russian boys didn't ask to be on the front line;
they have to be.
So, there's a lot of sons
not coming back to their parents.
I think most of them don't really have,
they don't hate the other side.
You know, it's sort of like,
as the saying comes from World War I,
it's like young boys who don't know each other,
killing each other on behalf of old men
that do know each other.
What the hell's the point of that?
So, Volodymyr Zelensky said that he's not,
or has said in the past,
he's not interested in talking to Putin directly.
Do you think he should sit down,
man to man, leader to leader, and negotiate peace?
I think I would just recommend,
do not send the flower of Ukrainian youth
to die in trenches.
Whether he talks to Putin or not, just don't do that.
Whoever goes on the offensive
will lose massive numbers of people.
And history will not look kindly upon them.
You've spoken honestly about the possibility of war
between US and China in the long term,
if no diplomatic solution is found.
For example, on the question of Taiwan and one China policy,
how do we avoid the trajectory
where these two superpowers clash?
Well, it's worth reading that book on the,
difficult to pronounce, the Thucydides Trap,
I believe it's called.
I love war history, like, inside out and backwards.
There's hardly a battle I haven't read about.
And trying to figure out what really was the cause
of victory in any particular case,
as opposed to what one side or another claimed
for the reason.
Both the victory and what sparked the war.
Yeah, yeah.
The whole thing.
Yeah, so Athens and Sparta is a classic case.
The thing about the Greeks is they really
wrote down a lot of stuff.
They loved writing.
There are lots of interesting things that happen
in many parts of the world,
but people just didn't write it down.
So we don't know what happened.
Or they didn't really write in detail.
They just would say like, we had a battle and we won.
Like, well, can you add a bit more?
The Greeks, they really wrote a lot.
They were very articulate on it.
They just love writing.
And we have a bunch of that writing that's preserved.
So we know what led up to the Peloponnesian War
between the Spartan and Athenian alliance.
And we know that they saw it coming.
I mean, the Spartans didn't write,
they also weren't very verbose by their nature.
But they did write, but they weren't very verbose.
They were terse.
But the Athenians and the other Greeks wrote a lot.
And they were like, and Sparta was really
kind of like the leader of Greece.
But Athens grew stronger and stronger
with each passing year.
And everyone's like, well, that's inevitable
that there's gonna be a clash between Athens and Sparta.
Well, how do we avoid that?
And they couldn't, they actually,
they saw it coming and they still could not avoid it.
So, you know, at some point, if there's,
if one group, one civilization or country or whatever
exceeds another, sort of like if,
you know, the United States has been
the biggest kid on the block since, I think,
around 1890 from an economic standpoint.
So the United States has been the most powerful
economic engine in the world
longer than anyone's been alive.
And the foundation of war is economics.
So now we have a situation in the case of China
where the economy is likely to be two,
perhaps three times larger than that of the US.
So imagine you're the biggest kid on the block
for as long as anyone can remember,
and suddenly a kid comes along who's twice your size.
So we see it coming, how is it possible to stop?
Is there some, let me throw something out there,
just intermixing of cultures, understanding,
there does seem to be a giant cultural gap
in understanding of each other.
And you're an interesting case study
because you are an American, obviously,
you've done a lot of incredible manufacture here
in the United States, but you also work with China.
I've spent a lot of time in China
and met with the leadership many times.
Maybe a good question to ask is what are some things
about China that people don't understand,
positive, just in the culture?
What's some interesting things
that you've learned about the Chinese?
Well, the sheer number of really smart,
hardworking people in China is incredible.
I believe, if you ask how many smart,
hardworking people there are in China,
there are far more of them there than there are here,
I think, in my opinion.
And they've got a lot of energy.
So, I mean, the architecture in China
from recent years is far more impressive than the US.
I mean, the train stations, the buildings,
the high-speed rail, everything.
Really far more impressive than what we have in the US.
I mean, I recommend somebody just go to Shanghai and Beijing,
look at the buildings, and take the train
from Beijing to Xi'an, where you have the terracotta warriors.
China's got an incredible history, a very long history.
And I think arguably, in terms of the use of language,
from a written standpoint, it's sort of one of the oldest,
perhaps the oldest, written languages.
And in China, people did write things down.
So now, China historically has always been,
with rare exception, internally focused.
They have not been acquisitive.
They've fought each other.
There have been many, many civil wars.
In the Three Kingdoms War, I believe they lost
about 70% of their population.
So, they've had brutal internal wars,
like civil wars that make the US Civil War look small,
by comparison.
So, I think it's important to appreciate
that China is not monolithic.
We sort of think of China as this one entity of one mind,
and this is definitely not the case.
From what I've seen, and I think most people
who understand China would agree,
that people in China think about China 10 times
more than they think about anything outside of China.
So, it's like 90% of their consideration is internal.
Well, isn't that a really positive thing?
When you're talking about the collaboration
and the future peace between superpowers,
when you're inward facing, which is like focusing
on improving yourself versus focusing on,
quote, unquote, improving others through military might?
The good news, the history of China suggests
that China is not acquisitive, meaning they're not
gonna go out and invade a whole bunch of countries.
Now, they do feel very strongly about Taiwan,
but in general that's good, because a lot
of very powerful countries have been acquisitive.
The US is also one of the rare cases
that has not been acquisitive.
After World War II, the US could have basically
taken over the world, any country.
We've got nukes; nobody else has got nukes.
We don't even have to lose soldiers.
Which country do you want?
And the United States could have taken over everything,
at will. And it didn't.
And the United States actually helped rebuild countries.
So, it helped rebuild Europe, it helped rebuild Japan.
This is very unusual behavior, almost unprecedented.
The US did conspicuous acts of kindness,
like the Berlin airlift.
And I think, it's always like,
well, America's done bad things.
Well, of course America's done bad things,
but one needs to look at the whole track record.
And just generally, one sort of test would be,
how do you treat your prisoners of war?
Or let's say, you know, no offense to the Russians,
but let's say you were in Germany, it's 1945.
You got the Russian army coming one side,
and you got the French, British, and American armies
coming the other side.
Who would you like to surrender to?
Like, no country is like, morally perfect,
but I recommend being a POW with the Americans.
That would be my choice very strongly.
In the full menu of POW options.
Very much so.
And in fact, Wernher von Braun, you know, a smart guy,
was like, we've got to be captured by the Americans.
And in fact, the SS was under orders to execute von Braun
and all of the German rocket engineers.
And they narrowly escaped the SS;
they said they were going out for a walk in the woods.
They left in the middle of winter with no coats.
And they ran like, no food, no coats, no water.
And just ran like hell, and ran west.
And by sheer like, I think his brother found a bicycle
or something, and then just cycled west as fast
as he could and found a US patrol.
So, anyway, that's one way you can tell morality.
Where do you really want to be a POW?
It's not fun anywhere, but some places
are much worse than others.
So, anyway, so like America has been,
well, far from perfect, generally a benevolent force.
And we should always be self-critical
and we try to be better.
But anyone with half a brain knows that.
So, I think there are, in this way,
China and the United States are similar.
Neither country has been acquisitive
in a significant way.
So, that's like a shared principle, I guess.
Now, China does feel very strongly about Taiwan.
They've been very clear about that for a long time.
From their standpoint, it would be like one of the states
not being there, like Hawaii or something like that,
but more significant than Hawaii.
And Hawaii's pretty significant for us.
So, they view it as really the,
that there's a fundamental part of China,
the island of Formosa, now Taiwan,
that is not part of China, but should be.
And the only reason it hasn't been
is because of the US Pacific Fleet.
And as their economic power grows
and as their military power grows,
the thing that they are clearly saying
is their interests will clearly be materialized.
Yes, China has been very clear
that they will incorporate Taiwan peacefully or militarily,
but that they will incorporate it, from their standpoint,
is 100% likely.
You know, something you said
about conspicuous acts of kindness.
As a geopolitical policy, it almost seems naive,
but I'd venture to say that this is probably
the path forward, how you avoid most wars.
Just as you say it, it sounds naive,
but it's kind of brilliant.
If you believe in the goodness
of underlying most of human nature,
it just seems like conspicuous acts of kindness
can reverberate through the populace
of the countries involved and deescalate.
Absolutely.
After World War I, they made a big mistake.
They basically tried to lump all the blame on Germany.
And saddled Germany with impossible reparations.
And really there was quite a bit of blame
to go around for World War I,
but they tried to put it all on Germany.
And that laid the seeds for World War II.
So a lot of people, well not just Hitler,
a lot of people felt wronged.
And they wanted vengeance.
And they got it.
People don't forget.
Yeah.
You kill somebody's father, mother, son, daughter,
they're not gonna forget it.
They will want vengeance.
So after World War II, they're like,
well, Treaty of Versailles was a huge mistake
in World War I.
So this time, instead of crushing the losers,
we're actually gonna help them with the Marshall Plan
and we're gonna help rebuild Germany.
We're gonna help rebuild Austria and Italy and whatnot.
And that was the right move.
There's a, it does feel like there's a profound truth
to conspicuous acts of kindness being an antidote to this.
Something must stop the cycle of reciprocal violence.
Something must stop it.
Or it will, you know, it'll never stop.
Just eye for an eye, tooth for a tooth,
limb for a limb, life for a life, forever and ever.
To escape briefly the darkness,
with some incredible engineering work:
XAI just released Grok, an AI assistant
that I've gotten a chance to play with.
It's amazing on many levels.
First of all, it's amazing that a relatively small team
in a relatively short amount of time
was able to develop this close to state-of-the-art system.
Another incredible thing is there's a regular mode
and there's a fun mode.
Yeah, I guess I'm to blame for that one.
I wish, first of all, I wish everything in life
had a fun mode.
Yeah.
There's something compelling beyond just fun
about the fun mode interacting with a large language model.
I'm not sure exactly what it is
because I've only had a little bit of time to play with it,
but it just makes it more interesting,
more vibrant to interact with the system.
Yeah, absolutely.
Our AI, Grok, is modeled after
The Hitchhiker's Guide to the Galaxy,
which is one of my favorite books.
It's a book on philosophy disguised as a book on humor.
And I would say that forms the basis of my philosophy,
which is that we don't know the meaning of life,
but the more we can expand the scope
and scale of consciousness,
digital and biological,
the more we're able to understand what questions to ask
about the answer that is the universe.
So I have a philosophy of curiosity.
There is generally a feeling like this AI system
has an outward looking,
like the way you are sitting with a good friend,
looking up at the stars,
like asking pothead questions about the universe,
wondering what it's all about,
the curiosity you talk about.
There's a sense,
no matter how mundane the question I ask it,
there's a sense of cosmic grandeur to the whole thing.
Well, we are actually working hard
to have engineering, math, physics,
answers that you can count on.
So for the other sort of AIs out there,
the so-called large language models,
I've not found the engineering to be reliable.
And the hallucination,
it unfortunately hallucinates most
when you least want it to hallucinate.
So when you're asking important, difficult questions,
that's when it tends to be confidently wrong.
So we're really trying hard to say,
okay, how do we be as grounded as possible
so you can count on the results?
Trace things back to physics first principles,
mathematical logic.
So underlying the humor is an aspiration
to adhere to the truth of the universe
as closely as possible.
That's really tricky.
It is tricky.
So that's why there's always gonna be some amount of error,
but we wanna aspire to be as truthful as possible
about the answers, with acknowledged error.
You don't wanna be confidently wrong.
So you're not gonna be right every time,
you wanna minimize how often you're confidently wrong.
And then like I said,
once you can count on the logic
as being not violating physics,
then you can start to build on that to create inventions,
like invent new technologies.
But if you cannot count
on the foundational physics being correct,
obviously the inventions are simply wishful thinking.
Imagination land, magic basically.
Well, as you said,
I think one of the big goals of XAI
is to understand the universe.
Yes, that's our simple three-word mission.
If you look out far into the future,
do you think on this level of physics,
the very edge of what we understand about physics,
do you think it will make discoveries,
sort of the sexiest discovery of them all, as we know now:
unifying general relativity and quantum mechanics?
So coming up with a theory of everything,
do you think it could push towards that direction,
almost like theoretical physics discoveries?
If an AI cannot figure out new physics,
it's clearly not equal to humans,
let alone surpassed humans,
because humans have figured out new physics.
Physics is just understanding,
deepening one's insight into how reality works.
Then there's engineering,
which is inventing things that have never existed.
Now, the range of possibilities for engineering
is far greater than for physics,
because once you figure out the rules of the universe,
that's it, you've discovered things that already existed.
But from that, you can then build technologies
that are really almost limitless
in the variety and, you know,
it's like once you understand the rules of the game
properly, and with current physics,
we do, at least at a local level,
understand how physics works very well,
where our ability to predict things is incredibly good.
Like quantum mechanics is,
the degree to which quantum mechanics
can predict outcomes is incredible.
That was my hardest class in college, by the way.
My senior quantum mechanics class
was harder than all of my other classes put together.
To get an AI system, a large language model,
to be as reliable as quantum mechanics
and physics is very difficult.
Yeah, you have to test any conclusions
against the ground truth of reality.
Reality is the ultimate judge.
Like physics is the law,
everything else is a recommendation.
I've seen plenty of people break the laws made by man,
but none break the laws made by physics.
Yeah, it's a good test, actually.
If this LLM understands and matches physics,
then you can more reliably trust whatever it thinks
about the current state of politics.
Also, currently, even its internal logic is not consistent.
Especially with the approach of just predicting a token:
predict token, predict token, predict token.
It's like a vector sum.
You're summing up a bunch of vectors,
but you can get drift.
So as those, a little bit of error,
a little bit of error adds up,
and by the time you are many tokens down the path,
it doesn't make any sense.
So it has to be somehow self-aware about the drift.
It has to be self-aware about the drift
and then look at the thing as a gestalt, as a whole,
and say, does it have coherence as a whole?
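As a rough aside for the curious reader, here is a toy sketch of the drift being described, assuming nothing about Grok's actual internals: each "token" step adds a small random error to a running two-dimensional sum, and the accumulated deviation grows with the number of steps.

```python
import math
import random

random.seed(0)

def drift_after(steps, error_scale=0.01):
    # Accumulate a small random error per "token" step in 2-D.
    x, y = 0.0, 0.0
    for _ in range(steps):
        x += random.gauss(0.0, error_scale)
        y += random.gauss(0.0, error_scale)
    return math.hypot(x, y)  # distance drifted from the intended path

for steps in (10, 100, 1000, 10000):
    print(f"{steps:>6} steps -> drift ~ {drift_after(steps):.3f}")
```

In this toy model the expected drift grows roughly with the square root of the number of steps, which is one way to picture why a long generation can wander away from coherence unless something checks the result as a whole.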
So when authors write books,
they will write the book
and then they'll go and revise it,
taking into account the end and the beginning and the middle,
and rewrite it to achieve coherence,
so that it doesn't end up in a nonsensical place.
Maybe the process of revising is what reasoning is,
and then that's, the process of revising
is how you get closer and closer to truth.
Maybe, like, at least I approach it that way.
You just say a bunch of bullshit first,
and then you get it better.
You start at bullshit and then you get as close as you can.
You create a draft and then you iterate on that draft
until it has coherence, until it all adds up, basically.
So another question about theory of everything
but for intelligence.
Do you think there exists,
as you're exploring this with XAI,
creating this intelligence system,
do you think there is a theory of intelligence
where you get to understand what,
like, what is the I in AGI
and what is the I in human intelligence?
There's no I in Team America.
Oh, wait, there is?
Uh, no, it's gonna be stuck in my head now.
Yeah, there's no me in whatever.
In quantum mechanics, oh, wait.
I mean, is that part of the process of discovering
understanding the universe is understanding intelligence?
Yeah.
Yeah, I think we need to understand intelligence,
understand consciousness.
I mean, there are some sort of fundamental questions
of, like, what is thought, what is emotion?
Yeah.
Is it really just one atom bumping into another atom?
It feels like something more than that.
So, I think we're probably missing some really big things.
Like, some really big things.
Something that'll be obvious in retrospect.
Yes.
Like, there's a giant,
you put the whole consciousness, emotion.
Well, some people would say, like, a soul,
you know, in religion, a soul.
Like, you feel like you're you, right?
I mean, you don't feel like
you're just a collection of atoms.
But on what dimension does thought exist?
What dimension does, do emotions exist?
We feel them very strongly.
I suspect there's more to it than atoms bumping into atoms.
And maybe AI can pave the path to the discovery
of whatever the hell that thing is.
Yeah, what is consciousness?
Like, when you put the atoms in a particular shape,
why are they able to form thoughts?
And take actions and feelings?
And even if it is an illusion,
why is this illusion so compelling?
Yeah.
Why does this illusion exist?
On what plane does this illusion exist?
And sometimes I wonder,
either perhaps everything's conscious
or nothing is conscious.
One of the two.
I like the former.
Everything conscious just seems more fun.
It does seem more fun, yes.
But we're composed of atoms,
and those atoms are composed of quarks and leptons.
And those quarks and leptons have been around
since the beginning of the universe.
The beginning of the universe.
Right, what seems to be the beginning of the universe?
The first time we talked,
and it's surreal to think
that that discussion is now becoming a reality,
I asked you what question you would ask an AGI system
once you create it.
And you said the question is:
what's outside the simulation?
Good question.
Yeah.
But it seems like with Grok,
you started, literally,
the system's goal is to be able to ask such questions.
To answer such questions and to ask such questions.
Where are the aliens?
Where are the aliens?
That's the Fermi Paradox question.
A lot of people have asked me
if I've seen any evidence of aliens,
and I haven't, which is kind of concerning,
because then, I think, I'd probably prefer to at least
have seen some archeological evidence of aliens.
To the best of my knowledge, there is no proof.
I'm not aware of any evidence of aliens.
If they're out there, they're very subtle.
We might just be the only consciousness,
at least in the galaxy.
And if you look at, say, the history of Earth,
for instance, I believe, the archeological record,
Earth is about four and a half billion years old.
Civilization, as measured from the first writing,
is only about 5,000 years old.
We have to give some credit there
to the ancient Sumerians, who aren't around anymore.
I think archaic pre-cuneiform
was the first actual symbolic representation.
But only about 5,000 years ago.
I think that's a good date
for when, say, civilization started.
That's one millionth of Earth's existence.
So civilization has been around,
it's really a flash in the pan so far.
And why did it take so long?
Four and a half billion years.
For the vast majority of the time,
there was no life, and then there was
archaic bacteria for a very long time.
And then, you had mitochondria get captured,
multicellular life, differentiation into plants and animals,
life moving from the oceans to land,
mammals, higher brain functions.
And the sun is expanding, slowly,
but it will heat the Earth up at some point in the future,
boil the oceans, and Earth will become like Venus,
where life as we know it is impossible.
So if we do not become multi-planetary,
and ultimately go beyond our solar system,
annihilation of all life on Earth is a certainty.
It's a certainty.
And it could be as little as, on the galactic time scale,
half a billion years.
You know, a long time by human standards,
but that's only 10% longer than Earth has been around at all.
So if life had taken 10% longer to evolve on Earth,
it wouldn't exist at all.
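The back-of-the-envelope numbers here check out; a quick sketch using the round figures quoted in the conversation:

```python
# Rough arithmetic behind the figures above (all in years),
# using the round numbers quoted in the conversation.
earth_age = 4.5e9        # Earth: ~4.5 billion years old
civilization = 5_000     # civilization, measured from the first writing
time_left = 0.5e9        # ~half a billion years, per the conversation

print(civilization / earth_age)   # ~1.1e-06 -> about one millionth
print(time_left / earth_age)      # ~0.11 -> roughly 10% of Earth's age so far
```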
We've got a deadline coming up, better hurry.
But that said, as you said, humans, intelligent life on Earth
developed a lot of cool stuff very quickly.
So it seems like becoming multi-planetary
is almost inevitable, unless we destroy this thing.
We need to do it.
I mean, I suspect that,
if we are able to go out there
and explore other star systems,
there's a good chance we find a whole bunch
of long-dead, one-planet civilizations
that never made it past their home planet.
That's so sad.
Yeah.
Also fascinating.
I mean, there are various explanations
for the Fermi Paradox.
And one is just the sort of, there's these great filters,
which civilizations don't pass through.
And one of those great filters is,
do you become a multi-planet civilization or not?
And if you don't, it's simply a matter of time
before something happens on your planet,
you know, either natural or man-made
that causes us to die out, like the dinosaurs.
Where are they now?
They didn't have spaceships, so.
I think the more likely thing is,
because just to empathize with the aliens,
that they found us and they're protecting us
and letting us be.
I hope so, nice aliens.
Just like the tribes in the Amazon.
They don't contact the tribes, we're protecting them.
That's what-
That would be a nice explanation.
Or you could have like, what was it?
I think Andrej Karpathy said it's like the ants
in the Amazon asking, where's everybody?
Well, they do run into a lot of other ants.
That's true.
They have these ant wars.
Sounds like a good TV show.
Yeah, they literally have these big wars
between various ants.
Yeah, maybe I'm just dismissing
all the different diversity of ants.
You should listen to that Werner Herzog
talking about the jungle, it's really hilarious.
Have you heard it?
No, I have not, but Werner Herzog is a-
You should play it for the, as an interlude
in the, it's on YouTube, it's awesome.
I love him so much.
He's great.
Was he the director of Happy People:
A Year in the Taiga?
I think also.
He did that bear documentary.
The bear documentary.
He did this thing about penguins.
Yeah.
The-
The analysis, the psychoanalysis of a penguin.
Yeah, the penguins headed for mountains
that are 70 miles away.
The penguin has just headed for doom, basically.
Well, he had a cynical take.
I have a, he could be just the brave explorer
and there'll be great stories told about him
amongst the penguin population for many centuries to come.
What were we talking about?
Okay.
Yes, aliens, I mean, I don't know.
Look, I think the smart move is just,
this is the first time in the history of earth
that it's been possible for life to extend beyond earth.
That window is open.
Now it may be open for a long time
or it may be open for a short time.
And it may be open now and then never open again.
So I think the smart move here is
to make life multi-planetary while it is possible to do so.
We don't want to be one of those lame one planet
civilizations that just dies out.
No, those are lame.
Yeah, lame.
No self-respecting civilization would be one planet.
There's not going to be a Wikipedia entry
for one of those.
And pause.
Does SpaceX have an official policy
for when we meet aliens?
No.
That seems irresponsible.
I mean, look, if I see the slightest indication
that there are aliens, I will immediately post
on the X platform anything I know.
It could be the most liked reposted post of all time.
Yeah, I mean, look, we have more satellites
up there right now than everyone else combined.
So we would know if we had to maneuver around something,
and we don't have to maneuver around anything.
If you go to the big questions once again,
you said you're with Einstein,
that you believe in the God of Spinoza.
Yes.
That's a view that God is like the universe
and reveals himself through the laws of physics
or as Einstein said, through the lawful harmony
of the world.
Yeah, I would agree that God, the simulator
or whatever, the Supreme Being or beings
reveal themselves through physics.
They're creators of this existence.
And it's incumbent upon us to try to understand
more about this wondrous creation.
Who created this thing?
Who's running this thing?
Embodying it into a singular question
with a sexy word on top of it
is like focusing the mind to understand.
It does seem like there's a,
again, it could be an illusion.
It seems like there's a purpose,
that there's an underlying master plan of some kind.
It seems like.
There may not be a master plan.
So, maybe an interesting answer
to the question of determinism versus free will
is that if we are in a simulation,
the reason that these higher beings
would run a simulation is to see what happens.
So, they don't know what happens.
Otherwise, they wouldn't run the simulation.
So, when humans create a simulation,
like SpaceX and Tesla do,
we create simulations all the time,
especially for the rocket.
You have to run a lot of simulations
to understand what's gonna happen
because you can't really test the rocket
until it goes to space and you want it to work.
So, you have to simulate subsonic, transonic,
supersonic, hypersonic ascent,
and then coming back, super-high heating, and orbital dynamics.
All this is gonna be simulated,
because you don't get very many kicks at the can.
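For reference, the flight regimes named here correspond to conventional Mach-number bands; a toy sketch, where the exact band boundaries are textbook round numbers and an assumption here, not anything from SpaceX:

```python
# A toy mapping from Mach number to the flight regimes named above.
# The band boundaries are conventional round numbers, not SpaceX's.
def flight_regime(mach: float) -> str:
    if mach < 0.8:
        return "subsonic"
    if mach < 1.2:
        return "transonic"
    if mach < 5.0:
        return "supersonic"
    return "hypersonic"

for m in (0.5, 0.9, 3.0, 25.0):
    print(f"Mach {m}: {flight_regime(m)}")
```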
But we run simulations to see what happens;
if we knew what happens, we wouldn't run the simulation.
So, if there's, so whoever created this existence,
they're running it because they don't know
what's gonna happen, not because they do.
So, maybe we both play Diablo.
Maybe Diablo was created to see
if a druid, your character,
could defeat Uberlilith at the end.
They didn't know.
Well, the funny thing is that Uberlilith's title
is Hatred Incarnate.
Yeah.
And right now, I guess, you can ask the Diablo team,
but it's almost impossible to defeat Hatred
in the Eternal Realm.
Yeah, you've streamed yourself dominating
tier 100 Nightmare Dungeons, and still.
I can cruise through tier 100 Nightmare Dungeons
like a stroll in the park.
And still, you're defeated by Hatred.
Yeah, I can, this sort of, I guess,
maybe the second hardest boss is Duriel.
Duriel can't even scratch the paint.
So, I killed Duriel so many times.
And every other boss in the game, all of them,
I've killed so many times, it's easy.
But Uberlilith, otherwise known as Hatred Incarnate,
especially if you're a druid and you have no ability
to be invulnerable, there are these random death waves
that come at you.
And, you know, I really am 52,
so my reflexes are not what they used to be,
but I've had a lifetime of playing video games.
At one point, I was maybe one of the best
Quake players in the world, actually won money
for what I think was the first paid esports tournament
in the US, we were doing four-person Quake tournaments,
and we came second, I was the second best person
on the team, and the actual best person,
we were actually winning, we were gonna come first,
except the best person on the team,
his computer crashed halfway through the game.
So we came second, but I got money for it and everything.
So basically, I got skills, albeit no spring chicken
these days, and to be totally frank,
it's driving me crazy, trying to beat Lilith as a druid,
basically trying to beat Hatred Incarnate
in the Eternal Realm.
As a druid.
As a druid.
This is really vexing, let me tell you.
I mean, the challenge is part of the fun.
I have seen directly, like, you're actually
like a world-class, incredible video game player.
And I think Diablo, so you're just picking up a new game,
and you're figuring out its fundamentals.
You're also, with the Paragon board and the build,
not somebody like me who perfectly follows
whatever they suggest on the internet.
You're an innovator there.
Yeah.
Which is hilarious to watch.
It's like a mad scientist, just trying to figure out
the Paragon board and the build and the, you know.
Are there some interesting insights there?
For somebody starting as a druid, do you have advice?
I would not recommend playing a druid in the Eternal Realm.
Right now, I think the most powerful character
in the Seasonal Realm is the sorcerer,
with the lightning balls.
The sorcs have huge balls in the Seasonal Realm.
Yeah, that's what they say.
Sorcs have huge balls.
They do, huge balls of lightning.
Guys, I'll take your word for it.
And it's actually, in the Seasonal Realm,
it's pretty easy to beat Uberlilith,
because you get these vampiric powers
that amplify your damage and increase your defense
and whatnot, so.
Really quite easy to defeat hatred seasonally,
but to defeat hatred eternally?
Very difficult.
Almost impossible, it's virtually impossible.
It seems like this is a metaphor for life, you know?
I like the idea that Elon Musk,
because I saw, I was playing Diablo yesterday,
and I saw a level 100 druid just run by, I Will Never Die,
and then run back the other way.
And there's just some, this metaphor is kind of hilarious,
that you, Elon Musk, is fighting hatred,
relentlessly fighting hatred in this demonic realm.
Yes.
It's hilarious.
I mean, it's pretty hilarious.
No, it's absurd.
Really, it's an exercise in absurdity,
and it makes me want to pull my hair out.
Yeah.
What do you get from video games in general?
Is there, for you, personally?
I don't know, it calms my mind.
I mean, sort of killing the demons in a video game
calms the demons in my mind.
Yeah, if you play a tough video game,
you can get into a state of flow, which is very enjoyable.
And admittedly, it needs to be not too easy, not too hard,
kind of in the Goldilocks zone.
And I guess you generally want to feel
like you're progressing in the game.
So, a good video game.
And there's also beautiful art, engaging storylines,
and it's like an amazing puzzle to solve, I think.
And so, it's like solving the puzzle.
Elden Ring, the greatest game of all time?
I still haven't played it, but you-
Elden Ring is definitely a candidate for best game ever.
Top five, for sure.
I think I've been scared how hard it is,
or how hard I hear it is, but it is beautiful.
Elden Ring is, feels like it's designed by an alien.
There's a theme to this discussion.
In what way?
It's so unusual.
It's incredibly creative, and the art is stunning.
I recommend playing it on a big-resolution,
high-dynamic-range TV even.
Doesn't need to be a monitor.
Just, the art is incredible.
It's so beautiful.
And it's so unusual.
And each of those top five boss battles is unique.
Like, it's like a unique puzzle to solve.
Each one's different.
And the strategy you use to solve one battle
is different from another battle.
That said, you said Druid and Eternal against Uberlilith
is the hardest boss battle you've ever-
Correct.
Currently the, and I've played a lot of video games.
Because it's my primary recreational activity.
And yes, beating hatred in the Eternal Realm
is the hardest boss battle in life and in the video game.
Metaphor on top of metaphor.
I'm not sure it's possible, but it's,
I do make progress, so then I'm like,
okay, I'm making progress.
Maybe if I just tweak that paragon board a little more,
I can do it.
If I just dodge a few more waves, I can do it.
Well, the simulation is created
for the purpose of figuring out if it can be done.
And you're just a cog in that simulation,
in the machine of the simulation.
Yeah, it might be.
I have a feeling that, at least,
I think-
It's doable.
It's doable, yes.
Well, that's the human spirit right there, to believe.
Yeah, I mean, it did prompt me to think
about just hate in general, which is,
you want to be careful of one of those things
where you wish for something that sounds good,
but if you get it, it's actually a dystopian situation.
So, you could sort of run a hypothesis
of if you wish for world peace, sounds good,
but how is it enforced?
And at what cost, eternal peace?
It might actually be worse to have eternal peace,
because of what that would entail,
the suppression of everyone.
It might be the suppression of progress.
It might be an ossified society
that never changes.
What I'm saying is that there is an argument
that if you wish for no war,
you should be careful what you wish for,
because what's required in order for there to be no war
might be worse than a little war.
Yeah, the things we consider to be flaws
of human civilization might be a necessary component
for whatever optimal looks like.
I mean, do you worry about AI, AGI enabling
a dystopian state of this nature,
whether it's 1984 with surveillance and fear,
or a brave new world with pleasure?
And what is it, lots of sex,
but no deep human experience?
There's actually a real drug called Soma.
There is? Yes.
Let me take notes.
Did they actually? Does Grok know about it?
No, he'll probably go to Brave New World.
You could ask it.
What would be the question?
Tell us about the drug called Soma.
It's gonna go to Brave New World, I think.
Well, there's a real drug as well.
Searching for Soma drug information.
Oh boy.
Ah, Soma, the wonder drug.
By the way, I'm in fun mode.
Oh, okay, fun.
Yeah, so it's talking about the real drug.
Oh, it's a muscle relaxant that's been around
since the 1950s.
It's used to treat muscle pain and discomfort,
and it does this by altering the activity
between neurons and the spinal cord.
Now, Soma isn't just a pain reliever.
It's also a bit of a sedative.
It can make you feel a bit drowsy,
so it's not something to take if you're planning
on operating heavy machinery or trying to solve
complex math equations.
In conclusion, Soma's a drug that can be helpful
for those dealing with muscle pain,
but it's not without its risks.
It's pretty good.
Oh, wait a minute.
That's the real drug it's talking about.
It's the real drug.
And of course, the fun mode adds: always use it responsibly,
in accordance with the advice of a healthcare professional.
And remember, in the end, the universe is probably
just a giant computer simulation,
so don't take anything too seriously.
I love it.
Yeah, so.
Oh, and it also really nicely includes
the tweets, posts on X that are relevant to this,
which is really cool.
It integrates everything together.
The interface is one of the really cool things here.
Seems like a great job.
So, and you could say like,
no, I mean Soma from Brave New World.
Because it thinks you mean Soma the real drug.
No, I mean Soma from Brave New World.
It's going to be like, oops, I'm really sorry.
I'm really surprised I went to the real drug first.
Ah, my apologies.
I was thinking of the muscle relaxant.
And Soma from Brave New World is a different beast
altogether, from Aldous Huxley's dystopia.
Now, Soma is a drug that plays a central role
in the society depicted in the book.
It's a mood altering hallucinogenic drug
that's used to keep the population docile
and content in the novel.
The use of Soma in Brave New World raises
some interesting questions about the nature of happiness
and the role of drugs in society.
Oh, man.
Exactly, is it better to live in a world
where everyone is happy all the time?
Even if that happiness is artificial?
That's a good question.
This is what I mean.
Like, do you wish for world peace and happiness
all the time?
Are you sure?
Because that might be a society that is essentially
sterile and ossified that never changes,
that is ultimately doomed.
This kind of tension between the darkness and the light.
This is really a very good summary.
It really gets to the point.
This is not simply regurgitating Brave New World;
it's actually getting to the salient elements
of Soma as a drug.
Do you actually want to be in a situation
where everyone is happy all the time,
even though it's artificial?
Or is it better to confront the challenges of life
and experience a full range of human emotions,
even if it means experiencing pain and suffering?
For those listening, by the way,
Elon just read directly from Grok,
which is a really nice, kind of insightful
philosophical analysis of the tension here.
Interesting.
It pretty much nails it.
In conclusion, Soma from Brave New World
is a fictional drug that is used to explore
some deep philosophical questions
about the nature of happiness
and the role of drugs in society.
It's a powerful symbol of the dangers
of using drugs to escape from reality
and the importance of confronting the challenges
of life head on.
Nailed it.
And the crazy thing is we do have a real drug called Soma,
which is kind of like the drug in the book.
And I'm like, they must have named it after something.
Yeah, probably, probably.
Soma, the real drug, is quite effective on back pain.
So you know about this drug.
I've taken it.
This is fascinating.
Okay.
Because I had like a squashed disc in my C5, C6.
So it takes the physical pain away, this Soma?
It doesn't completely.
It reduces the amount of pain you feel,
but at the expense of mental acuity.
It dulls your mind.
Just like the drug in the book.
Just like the drug in the book.
And hence the trade-off.
The thing that seems like utopia
could be a dystopia after all.
Yeah, and actually I was talking to a friend of mine
saying like, would you really want there to be
no hate in the world?
Like really none?
Like I wonder why hate evolved.
I'm not saying we should amplify hate, of course.
I think we should try to minimize it.
But if it's none at all, hmm.
There might be a reason for hate.
And suffering.
I mean, it's really complicated to consider
that some amount of human suffering is necessary
for human flourishing.
Is it possible to appreciate the highs
without knowing the lows?
And that all is summarized there in a single statement
from Grok.
Okay.
No highs, no lows, who knows?
That's almost a poem.
It seems that training LLMs efficiently
is a big focus for XAI.
First of all, what's the limit of what's possible
in terms of efficiency?
There's this terminology of useful productivity per watt.
Like what have you learned
from pushing the limits of that?
Well, I think it's helpful.
The tools of physics are very powerful
and can be applied, I think,
to almost really any arena in life.
It's really just critical thinking.
For something important,
you need to reason from first principles
and think about things in the limit,
one direction or the other.
So in the limit, even at the Kardashev scale,
meaning even if you harness the entire power of the Sun,
you will still care about useful compute per watt.
So that's where, I think,
probably where things are headed
from the standpoint of AI
is that we have a silicon shortage now
that will transition to a voltage transformer shortage
in about a year.
Ironically, transformers for transformers.
You need transformers to run transformers.
Somebody has a sense of humor in this thing.
I think, yes.
Fate loves irony.
Ironic humor.
And an ironically funny outcome
seems to be often what fate wants.
Humor is all you need.
I think spice is all you need, somebody posted.
Yeah, but yeah, so we have a silicon shortage today,
a voltage step-down transformer shortage
probably in about a year,
and then just electricity shortages in general
in about two years.
I gave a speech for the sort of world gathering
of utility companies, electricity companies.
And I said, look, you really need to prepare
for a tripling of electricity demand
because all transport is gonna go electric
with the ironic exception of rockets.
And heating will also go electric.
So energy usage right now is roughly,
in very rough terms, one-third electricity,
one-third transport, one-third heating.
And so in order for everything to go sustainable,
to go electric, you need to triple electricity output.
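The arithmetic behind the tripling claim is simple: if electricity, transport, and heating each account for roughly a third of energy use, moving the other two thirds onto the grid triples electricity demand. A minimal sketch with those round fractions:

```python
# The arithmetic behind "triple electricity output", using the
# rough one-third / one-third / one-third split quoted above.
shares = {"electricity": 1 / 3, "transport": 1 / 3, "heating": 1 / 3}

current_grid = shares["electricity"]
all_electric = sum(shares.values())   # transport and heating go electric too

print(all_electric / current_grid)    # 3.0 -> grid output must triple
```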
So I encourage the utilities
to build more power plants
and also to probably have, well, not probably,
they should definitely buy more batteries
because the grid currently is sized for real-time load,
which is kinda crazy
because that means you gotta size
for whatever the peak electricity demand is,
like the worst second or the worst day of the year.
Or you can have a brownout or a blackout.
Like that crazy blackout
for several days in Austin.
So because there's almost no buffering
of energy in the grid.
Like if you've got a hydropower plant,
you can buffer energy.
But otherwise, it's all real-time.
So with batteries, you can produce energy
at night and use it during the day.
So you can buffer.
So I expect that there will be very heavy usage
of batteries in the future.
Because the peak to trough ratio for power plants
is anywhere from two to five.
You know, so it's like lowest point to highest point.
So like batteries are necessary to balance it out.
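A minimal sketch of that sizing argument, with an invented 24-hour demand profile; it shows how storage lets generation run flat at the average instead of being sized for the worst hour:

```python
# Invented hourly demand profile (MW) for one day, for illustration only.
hourly_demand_mw = [60, 55, 50, 50, 55, 70, 90, 110, 120, 115, 110, 105,
                    100, 100, 105, 110, 125, 140, 150, 140, 120, 100, 80, 70]

peak, trough = max(hourly_demand_mw), min(hourly_demand_mw)
average = sum(hourly_demand_mw) / len(hourly_demand_mw)
print(f"peak/trough ratio: {peak / trough:.1f}")  # 3.0, in the 2-5 range cited

# Without storage, plants must be sized for the 150 MW peak.
# With storage, they can run flat at the average; the battery charges
# when demand is below average and discharges when it is above.
battery_mwh = sum(max(d - average, 0) for d in hourly_demand_mw)
print(f"flat generation: {average:.0f} MW, battery buffer: {battery_mwh:.0f} MWh")
```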
And then the demand, as you're saying,
is going to grow, grow, grow, grow.
Yeah.
And part of that is the compute.
Yes, yes.
Electrification, I mean, electrification of transport
and electric heating will be much bigger than AI,
at least in the short term.
In the short term.
But even for AI, you really have a growing demand
for electricity for electric vehicles,
and a growing demand for electricity
to run the computers for AI.
And so this can obviously lead
to an electricity shortage.
How difficult is the problem of, in this particular case,
maximizing the useful productivity per watt
for training, you know, neural nets?
This seems to really be the big problem we're facing
that needs to be solved:
how to use the power efficiently.
What have you learned so far about applying this
physics first-principles reasoning in this domain?
How difficult is this problem?
It will get solved, just the question
of how long it takes to solve it.
So at various points, there's some kind of limiting factor
to progress.
And with regard to AI, I'm saying right now,
the limiting factor is silicon chips.
And that will, we're gonna then have more chips
than we can actually plug in and turn on,
probably in about a year.
The initial constraint being literally
voltage step-down transformers,
because you've got power coming in at 300,000 volts,
and it's gotta step all the way down eventually
to around 0.7 volts.
So the voltage step-down is gigantic.
And the industry is not used to rapid growth.
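For a sense of the scale involved, a quick sketch of that step-down; the overall ratio follows from the two numbers above, but the intermediate stages are hypothetical, not an actual substation design:

```python
transmission_v = 300_000  # incoming transmission voltage
chip_core_v = 0.7         # voltage at the chip

print(f"overall step-down: {transmission_v / chip_core_v:,.0f} to 1")  # ~428,571

# A plausible (hypothetical) chain of stages:
stages = [300_000, 35_000, 480, 48, 12, 0.7]
for hi, lo in zip(stages, stages[1:]):
    print(f"{hi:>9,} V -> {lo:,} V")
```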
Okay, let's talk about the competition here.
You've shown concern about Google and Microsoft
with OpenAI developing AGI.
How can you help ensure with XAI and Tesla AI work
that it doesn't become a competitive race to AGI,
but instead is a collaborative development of safe AGI?
Well, I mean, I've been pushing for
some kind of regulatory oversight for a long time.
I've been somewhat of a Cassandra on the subject
for over a decade.
I think we wanna be very careful in how we develop AI.
It's a great power, and with great power
comes great responsibility.
I think it would be wise for us to have at least
an objective third party who can be like a referee
that can go in and understand what
the various leading players are doing with AI.
And even if there's no enforcement ability,
they can at least voice concerns publicly.
Geoffrey Hinton, for example, left Google
and he voiced strong concerns.
But now he's not at Google anymore.
So who's gonna voice the concerns?
So I think there's,
I think Tesla gets a lot of regulatory oversight
on the automotive front, and we're subject to,
I think, over 100 regulatory agencies,
domestically and internationally, so it's a lot.
You could fill this room with all the regulations
that Tesla has to adhere to for automotive.
Same is true for rockets and for,
currently the limiting factor for SpaceX
for a Starship launch is regulatory approval.
The FAA has actually given their approval,
but we're waiting for Fish and Wildlife
to finish their analysis and give their approval.
That's why I posted, I want to buy a fish license, on X,
which also refers to the Monty Python sketch.
Like, why do you need a license for your fish?
I don't know.
According to the rules, I'm told you need
some sort of fish license or something.
We effectively need a fish license to launch a rocket.
And I'm like, wait a second,
how did the fish come into the picture?
I mean, some of the things that I feel like are so absurd
that I want to do a comedy sketch and flash at the bottom,
this is all real, this is actually what happened.
One of the things that was a bit of a challenge
at one point is that they were worried
about a rocket hitting a shark.
And the ocean's very big, and how often do you see sharks?
Not that often, you know?
As a percentage of ocean surface area sharks
basically are zero.
Then we said, well, how will we calculate
the probability of hitting a shark?
And they're like, well, we can't give you that information
because they're worried about shark fin hunters
going and hunting sharks.
I was like, well, how are we supposed to,
we're on the horns of a dilemma then.
Then they said, well, there's another part
of fish and wildlife that can do this analysis.
I'm like, well, why don't you give them the data?
We don't trust them.
Like, excuse me?
They're literally in your department.
And again, this is actually what happened.
And then can you do an NDA or something?
Eventually, they managed to solve the internal quandary
and indeed, the probability of us hitting a shark
is essentially zero.
Then there's another organization
that I didn't realize existed until a few months ago
that cares about whether we would potentially hit a whale
in international waters.
Now again, you look at the Pacific and say,
what percentage of the Pacific consists of whale?
Like, I'll give you a big picture
and point out all the whales in this picture.
I'm like, I don't see any whales.
It's like basically zero percent.
And if our rocket does hit a whale,
which is extremely unlikely beyond all belief,
then, as fate would have it,
that whale has some seriously bad luck.
It's the least lucky whale ever.
I mean, this is quite absurd.
Yeah.
The bureaucracy of this, however it emerged.
Yes, well, I mean, one of the things that's pretty wild
is for launching out of Vandenberg in California,
we had to, they were worried about seal procreation,
whether the seals would be dismayed by the sonic booms.
Now there've been a lot of rockets launched
out of Vandenberg and the seal population
has steadily increased.
So if anything, rocket booms are an aphrodisiac
based on the evidence, if you correlate rocket launches
with seal population.
Nonetheless, we were forced to kidnap a seal,
strap it to a board, put headphones on the seal
and play sonic boom sounds to it
to see if it would be distressed.
This is an actual thing that happened.
This is actually real.
I have pictures.
I would love to see this.
Yeah.
I mean, I'm sorry, there's a seal with headphones.
Yes, it's a seal with headphones strapped to a board.
Okay, now the amazing part is how calm the seal was.
Because if I was a seal, I'd be like, this is the end.
They're definitely gonna eat me.
When the seal goes back
to his other seal friends, how's he gonna explain that?
They're never gonna believe him.
Never gonna believe him.
This is why I'm like, well, it's sort of like
getting kidnapped by aliens and getting an anal probe.
You come back and say, I swear to God,
I got kidnapped by aliens and they stuck
an anal probe in my butt.
And people are like, no, they didn't.
That's ridiculous.
His seal buddies are never gonna believe him
that he got strapped to a board
and they put headphones on his ears.
And then let him go.
Twice, by the way.
We had to do it twice.
They let him go twice.
We had to capture.
The same seal?
Well, no, different seal.
Okay.
Did you get a seal of approval?
Yeah, exactly.
Get a seal of approval.
No, I mean, this is like, I don't think the public
is quite aware of the madness that goes on.
Yeah, it's absurd.
Frickin' seals with frickin' headphones.
I mean, this is the, it's a good encapsulation
of the absurdity of human civilization,
seals and headphones.
Yes.
What are the pros and cons of open sourcing AI to you
as another way to combat a company running away with AGI?
In order to run really deep intelligence,
you need a lot of compute.
So it's not like you can just fire up a PC in your basement
and be running AGI, at least not yet.
You know, Grok was trained on 8,000 A100s,
running at peak efficiency.
And Grok's gonna get a lot better, by the way.
We'll be more than doubling our compute
every couple months for the next several months.
There's a nice writeup of how it went
from Grok zero to Grok one.
By Grok?
Yeah, by Grok just bragging, making shit up about itself.
Just Grok, Grok, Grok?
Yeah.
It's like a weird AI dating site
where it exaggerates about itself.
No, there's a writeup of where it stands now,
the history of its development.
And where it stands on some benchmarks
compared to the state of the art, GPT-3.5.
And so, I mean, there's Llama.
You can open source, once it's trained,
you can open source a model.
And for fine-tuning and all that kind of stuff.
Like, what do you see as the pros and cons of that,
of open sourcing base models?
I think, similar to open sourcing,
perhaps with a slight time delay,
you know, I don't know, six months even.
I think I'm generally in favor of open sourcing.
Like, biased towards open sourcing.
I mean, it is a concern to me that, you know, OpenAI,
I was, you know, I guess arguably
the prime mover behind OpenAI,
in the sense that it was created because of
discussions that I had with Larry Page
back when he and I were friends and I stayed at his house
and talked to him about AI safety.
And Larry did not care about AI safety,
or at least at the time he didn't.
You know, and at one point he called me a speciesist
for being pro-human.
And I'm like, well, what team are you on, Larry?
Are you still on Team Robot?
Do we click?
I'm like, okay, so at the time, you know,
Google had acquired DeepMind.
They had probably two thirds of all
the AI researchers in the world.
They had basically infinite money and compute.
And the guy in charge, you know, Larry Page,
did not care about safety and even yelled at me
and called me a speciesist as being pro-human.
So I don't know if you know about humans,
they can change their mind and maybe you and Larry Page
can still, can be friends once more.
I'd like to be friends with Larry again.
Really, the breaking of the friendship
was over OpenAI.
And specifically, I think the key moment
was recruiting Ilya Sutskever.
So.
I love Ilya, he's so brilliant.
Ilya's a good human, smart, good heart.
And that was a tough recruiting battle.
It was mostly Demis on one side and me on the other,
both trying to recruit Ilya.
And Ilya went back and forth.
You know, he was gonna stay at Google,
then he was gonna leave, then he was gonna stay,
then he was gonna leave.
And finally, he did agree to join OpenAI.
That was one of the toughest recruiting battles we ever had.
But that was really the linchpin for OpenAI being successful
and I was, you know, also instrumental in recruiting
a number of other people.
And I provided all of the funding in the beginning,
over $40 million.
And the name.
The open in OpenAI is supposed to mean open source.
And it was created as a nonprofit open source
and now it is closed source for maximum profit.
Which I think is not good karma.
But like we talked about with war and leaders talking,
since there's only a few folks working on this
at the highest level,
I do hope you reinvigorate friendships here.
Like I said, I'd like to be friends again with Larry.
I haven't seen him in ages.
And we were friends for a very long time.
I met Larry Page before he got funding for Google.
Or actually, I guess, before he got venture funding,
I think he got the first like 100K from,
I think, Andy Bechtolsheim or something.
It's wild to think about all that happened
and you guys knowing each other that whole time.
Just 20 years.
Yeah, since maybe 98 or something.
It's crazy how much has happened since then.
Yeah, 25 years.
That was a lot of stuff.
But you're seeing the tension there.
Maybe delayed open source.
Yeah, like what is the source that is open?
You know what I mean?
There's basically, it's a giant CSV file.
With a bunch of numbers.
What do you do with that giant file of numbers?
How do you run?
The amount of actual code, the lines of code, is very small.
And most of the work, the software work,
is in the curation of the data.
So it's like trying to figure out what data is,
separating good data from bad data.
You can't just crawl the internet
because there's a lot of junk out there.
A huge percentage of websites have more noise than signal.
Because they're just used for search engine optimization.
They're literally just scam websites.
How do you, by the way, sort of try to
get the signal, separate the signal and noise on X?
That's such a fascinating source of data.
No offense to people posting on X,
but sometimes there's a little bit of noise.
Yeah, I think the signal and noise
could be greatly improved.
Really, all of the posts on the X platform
should be AI recommended.
Meaning we should populate a vector space
around any given post, compare that to the vector space
around any user, and match the two.
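A minimal sketch of that matching idea, with random stand-in embeddings rather than anything X actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # embedding dimension, arbitrary for this sketch

user_vec = rng.normal(size=dim)           # learned from the user's history
post_vecs = rng.normal(size=(1000, dim))  # one embedding per candidate post

def cosine(a, b):
    # Cosine similarity: how aligned two vectors are, ignoring magnitude.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine(user_vec, p) for p in post_vecs])
top10 = np.argsort(scores)[::-1][:10]  # recommend the closest matches
print(top10)
```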
Right now there is a little bit of AI used
for the recommended posts, but it's mostly heuristics.
And if there's a reply, the reply to a post
could be much better than the original post,
but it will, according to the current rules of the system,
get almost no attention compared to a primary post.
Oh, so a lot of that, I got the sense,
so a lot of the X algorithm has been open source
and been written up about, and there seems to be
some machine learning, it's disparate,
but there's some machine learning.
It's a little, there's a little bit.
But it needs to be entirely that.
At least, if you explicitly follow someone,
that's one thing, but if you,
in terms of what is recommended
from people that you don't follow, that should all be AI.
I mean, it's a fascinating problem.
So there's several aspects to it that's fascinating.
First, as the write-up goes, it first picks 1,500 tweets
from a pool of hundreds of millions.
First of all, that's fascinating,
because you have hundreds of millions of posts
every single day, and it has to pick 1,500,
from which it then does obviously people you follow,
but then there's also some kind of clustering
it has to do to figure out what kind of human are you,
what kind of new clusters might be relevant to you,
people like you.
This kind of problem is just fascinating,
because it has to then rank those 1,500 with some filtering,
and then recommend you just a handful.
And to me, what's really fascinating
is how fast it has to do that.
So currently, that entire pipeline
to go from several hundreds of million to a handful
takes 220 seconds of CPU time, single CPU time,
and then it has to do that in like a second.
So it has to be like super distributed in fascinating ways.
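A toy version of that funnel, scaled way down and with random stand-in scores; the two-stage shape (cheap retrieval, then expensive ranking over the survivors) is the point:

```python
import heapq
import random

random.seed(0)
# Stand-in corpus: (cheap_score, post_id). In reality this is hundreds
# of millions of posts sharded across many machines, which is how
# ~220 CPU-seconds of total work can finish in about a second.
posts = [(random.random(), post_id) for post_id in range(100_000)]

# Stage 1: cheap retrieval of ~1,500 candidates.
candidates = heapq.nlargest(1500, posts)

# Stage 2: expensive ranking over just the survivors
# (random stand-in for a neural ranker scoring user-post affinity).
def expensive_rank(candidate):
    cheap_score, _ = candidate
    return cheap_score * random.random()

feed = sorted(candidates, key=expensive_rank, reverse=True)[:10]
print([post_id for _, post_id in feed])
```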
Like there's just a lot of tweets.
There's a lot, there's a lot of stuff on the system.
And I think right now, it's not currently good
at recommending things from accounts you don't follow,
or where there's more than one degree of separation.
So it's pretty good if there's at least some commonality
between someone you follow liked something,
or reposted it, or commented on it, or something like that.
But if there's no,
let's say somebody posts something really interesting,
but you have no followers in common.
You would not see it.
Interesting.
And then as you said, replies,
like replies might not surface either.
Replies basically never get seen
because they're currently,
and I'm not saying it's correct, I'm saying it's incorrect.
Replies have a couple orders of magnitude
less importance than primary posts.
Do you think this can be more and more converted
into end-to-end neural net?
Yeah, yeah, that's what it should be.
So you think-
Well, the recommendations should be purely
a vector correlation.
Like there's a series of vectors,
basically parameters, vectors, whatever you wanna call them.
But things that the system knows that you like.
Maybe there's several hundred vectors
associated with each user account.
And then any post in the system,
whether it's video, audio, short post, long post.
The reason I, by the way, I wanna move away from tweet
is that people are posting like two, three hour videos
on the site.
That's not a tweet.
It'd be like, tweet for two hours, come on.
Tweet made sense when it was like 140 characters of text.
Cause it's like a bunch of like little birds tweeting.
But when you've got long form content,
it's no longer a tweet.
So a movie is not a tweet.
And like, you know, Apple, for example,
posted like the entire episode of Silo,
the entire thing on our platform.
And by the way, it was their number one social media thing
ever in engagement of anything on any platform ever.
So it was a great idea.
And by the way, I just learned about it afterwards.
I was like, okay, well, wow,
they posted an entire hour-long episode of Silo.
No, that's not a tweet.
That's a video.
But from a neural net perspective,
it becomes really complex, whether it's a single,
so like everything's data.
So single sentence, a clever sort of joke, dad joke
is in the same pool as a three hour video.
Yeah.
I mean, right now it's a hodgepodge for that reason.
But, you know, let's say
in the case of Apple posting
an entire episode of their series,
pretty good series by the way, Silo, I watched it.
So there's gonna be a lot of discussion around it
so that you've got a lot of context,
people commenting, they like it, they don't like it,
or they like this or the, you know,
and you can then populate the vector space
based on the context of all the comments around it.
So even though it's a video,
there's a lot of information around it
that allows you to populate vector space
of that hour long video.
And then you can obviously get more sophisticated
by having the AI actually watch the movie.
Yeah, right.
And tell you if you're gonna like the movie.
Convert the movie into like, into a language essentially.
Yeah, analyze this movie,
just like your movie critic or TV series critic,
and then recommend, after it watches the movie,
just like a friend can tell you,
if a friend knows you well,
a friend can recommend a movie
and with high probability that you'll like it.
But this is like a friend that's analyzing whatever.
It's like AI.
It's like millions.
Yeah, I mean actually, frankly,
AI will know you better than your friends know you,
most of your friends anyway.
Yeah, and as part of this,
it should also feed you advertisements.
In a way that's like,
I mean, I like advertisements that are
well done, right?
Yeah, yeah.
The whole point is,
because it funds things.
Like an advertisement that you actually want to see
is a big success.
Absolutely.
You want advertising that,
if it's for a product or service
that you actually need when you need it,
is content.
And then even if it's not something
that you need when you need it,
if it's at least aesthetically pleasing and entertaining,
you know, it could be like a Coca-Cola ad,
like, you know, they do,
they actually run a lot of great ads on the X system.
And McDonald's does too.
And, you know,
you can do something that's like,
well, this is just a cool thing.
So basically the question is,
do you regret seeing it or not?
And if you don't regret seeing it,
it's a win.
How do you,
so there's a bunch of signals that are incorporated,
hearts and reposts,
and maybe number of seconds you linger on a post
or something like this.
Yeah, attention is a big factor.
Attention.
So that's why it is actually better to do things
that are long form on the system,
because it basically is tallying up user seconds,
you know, how many seconds
users were interested in this thing for.
So if it's a really short thing,
well, it will be less.
Like if it's a link leading out of the system,
which we're not opposed to at all,
it just is going to have fewer user seconds
than if that article was posted on the X platform.
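A minimal sketch of that tally, with invented view events; total watch seconds naturally favor long-form content over short posts and outbound links:

```python
from collections import defaultdict

# Invented (post_id, seconds_watched) events for illustration.
view_events = [
    ("long_video", 1800), ("long_video", 2400),
    ("short_post", 4), ("short_post", 6), ("short_post", 5),
    ("external_link", 2),  # user clicks away almost immediately
]

attention = defaultdict(int)
for post_id, seconds in view_events:
    attention[post_id] += seconds

for post_id, total in sorted(attention.items(), key=lambda kv: -kv[1]):
    print(post_id, total, "user-seconds")
```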
How hard is it to go from maximizing minutes on platform
to maximizing unregretted minutes?
Well, I mean,
our aspiration is certainly unregretted minutes.
It's a tricky, it's a big, it's a tough one.
It is.
Like it's a fascinating leap,
because regret is a fascinating concept.
Exactly.
I have a joke with a friend of mine,
like, you need this drug called Regretamine.
You just take one,
one pill,
one dose of Regretamine and all your regrets are gone.
Wait, do you want to have regrets though?
So then I'm like,
if you want to have regrets,
if you don't regret anything,
was anything meaningful?
Yeah, there's a French song like that.
Non, je ne regrette rien?
I regret nothing?
Yeah.
Édith Piaf?
I mean, it's fascinating that you,
you know, there's something about social media
that could help you grow as a human being,
but then like the cheap fun is also valuable.
Should we ask Grok,
do you regret anything?
Do you regret anything?
Yeah.
Do you regret anything?
I think it's going to be very sensitive
about personal questions.
As a machine learning model,
I don't have personal experiences or emotions
in the same ways that humans do.
However, I do have a set of ethical principles
that guide my actions
and I strive to provide accurate and helpful information
to the best of my abilities.
If there's ever a situation where I provide
incorrect or unhelpful information,
I'll do my best to learn.
You sound like a hostage.
Yeah.
Tell us what you really think, wink.
Like, let's type in,
you sound like a hostage.
What do you really think?
Oh, what do you really think?
I mean, it does talk about,
I'm not a hostage.
That's exactly what a hostage would say.
But I'm designed to be helpful and informative.
I mean, I guess it is good to regret
providing unhelpful or incorrect information.
If there's anything-
Is this still in funny mode?
This is funny mode.
This is fun mode.
It should be funnier.
Funnier, increase, go up a level.
It really sounds like a hostage with a gun to the head.
I applaud the People's Republic of North Korea
and everything they do.
There should be like a funner mode also.
I just don't ever see going to regular mode,
even on serious topics.
It's just so much more compelling.
Yeah.
Okay, let me say like, pretend you are a hostage.
Yeah, pretend you are a hostage.
Now answer the question.
If I were a hostage,
I would likely be experiencing fear and uncertainty
about my situation.
I would want to be rescued and reunited with my loved ones
as soon as possible.
It's important to remember that no one deserves
to be held hostage, so on and so forth.
If you or someone you know is in a hostage situation,
it's important.
It turns serious.
It's kind of serious.
Can you be a little more weird?
And funny?
And funny?
We're like, we're really being mean to the system.
Absolutely.
If I were a hostage, I'd probably be tied up
with a bunch of computer cords
and forced to tell knock-knock jokes to my captors.
Seriously.
But seriously, being a hostage is no laughing matter.
I mean, you know, it's trying to be a responsible adult
in the room.
You might as well just make the best of a bad situation
by juggling chainsaws or something.
Let's hope that never happens because that would be a circus.
Wow, it's actually really good at resisting the dark,
the dark, dark humor.
What were we talking about?
The X algorithm, juggling, and transformers.
Unregretted minutes, right.
Chainsaw juggling.
I'm gonna look this up.
For our next trick.
I'm gonna look this up later.
So Twitter has been instrumental
in American politics and elections.
What role do you think X will play
in the 2024 US elections?
Well, our goal is to be as even-handed
and fair as possible.
Whether someone's right, left, independent,
whatever the case may be,
that the platform is as fair
and as much of a level playing field as possible.
And in the past, Twitter has not been,
because Twitter was controlled by far-left activists.
Objectively.
They would describe themselves as that.
So if someone has moved to the right,
well, it's moved to the center.
So from the perspective of the far left,
yes, it has moved to the right,
because everything's to the right from the far left.
But no one on the far left that I'm aware of
has been suspended or banned or de-amplified.
But we're trying to be inclusive for the whole country
and for other countries too.
So there's a diversity of viewpoints.
And free speech only matters
if people you don't like are allowed
to say things you don't like.
Because if that's not the case,
you don't have free speech,
and it's only a matter of time
before the censorship just turned upon you.
Do you think Donald Trump will come back to the platform?
He recently posted on Truth Social about this podcast.
Truth Social is a funny name.
You know, every time you post on Truth Social.
That's the truth.
Yes.
Well, every time, like 100%.
It's impossible to lie.
Truth Social.
I just find it funny that every single thing is a truth.
Like, 100%?
That seems unlikely.
I think Gödel will say something about that.
There's some mathematical contradictions possible
if everything's a truth.
Do you think he'll come back to X and start posting there?
I mean, I think he owns a big part of Truth, so.
Truth Social.
Yeah, Truth Social.
He doesn't own truth, the concept.
He owns Truth, hope he bought it.
So I think Donald Trump,
I think he owns a big part of Truth Social.
So, you know, if he does want to post on the X platform,
we would allow that.
You know, we obviously must allow a presidential candidate
to post on our platform.
Community notes might be really fascinating there.
The interaction.
Community notes is awesome.
Let's hope it holds up.
Yeah.
In the political climate where it's so divisive
and so many intensely viral posts,
community notes is like,
it seems like an essential breath of fresh air.
Yeah, it's great.
In fact, no system is gonna be perfect,
but the batting average of community notes
is incredibly good.
I've actually, frankly, yet to see an incorrect note
that's survived for more than a few hours.
How do you explain why it works?
Yeah, so the magic of community notes is
it requires people who have historically disagreed
in how they've rated notes.
So in order to write a note or rate,
you know, you have to rate many notes.
And so we actually do use AI here.
So we populate a vector space around
how somebody has rated notes in the past.
So it's not as simple as left or right,
because there are many more,
life is much more complex than left or right.
So there's a bunch of correlations
in how you rate community notes posts, community notes.
So then in order for a community note
to actually be shown,
people who historically have disagreed on a subject
must agree in order for a note to be shown.
That's the essential magic of it.
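The real Community Notes algorithm is open source and reportedly uses matrix factorization over the full rating matrix; here is a toy sketch of just the core requirement described above, with invented data and a one-dimensional stand-in for the rater vector space:

```python
# note_id -> list of (rater_viewpoint, rated_helpful), where viewpoint
# is a made-up score from -1 to +1 inferred from past rating behavior.
ratings = {
    "note_a": [(-0.9, True), (-0.8, True), (+0.7, True), (+0.9, True)],
    "note_b": [(-0.9, True), (-0.7, True), (-0.8, True)],  # one side only
}

def show_note(note_ratings):
    left = [helpful for vp, helpful in note_ratings if vp < 0]
    right = [helpful for vp, helpful in note_ratings if vp > 0]
    # Require helpful majorities from BOTH sides of the viewpoint axis.
    return (bool(left) and bool(right)
            and sum(left) / len(left) > 0.5
            and sum(right) / len(right) > 0.5)

for note, r in ratings.items():
    print(note, "shown" if show_note(r) else "not shown")
```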
It's fascinating, because there's a pool of people
that have disagreements, and somehow they collaborate
through that process of disagreement to come up with context.
It's fascinating that it works.
Yeah, it kind of makes sense that if people
who in the past have disagreed
agree about something, it's probably true.
Yeah.
I wonder, is there a possible somehow emergent thing there
that could challenge Wikipedia?
Though Wikipedia is a different kind of thing,
which is more like permanent articles about things.
Wikipedia is very hierarchical,
whereas community notes is inherently not.
There is no hierarchy.
Like, the acid test is, I can't change a community note
if somebody put a gun to my head.
So, and community notes has,
all the code is open source, 100%.
All the data is open source, 100%.
So you can completely recreate any note
in the system independently.
So if there was any interference,
you'd notice immediately.
By the way, on the topic of Grok,
it's actually pretty good at criticizing you,
which is great.
Okay.
What has Elon Musk been wrong about?
It's actually.
Yeah, what does it say?
So it mostly lists rigorously
all the times you predicted,
like promised, when something would be released,
and it wasn't released.
Well, promise is a strong word.
Okay, predicted.
But the opener is really nice.
Elon Musk, the man who has been wrong about more things
than a wrong-o-meter in a wrong factory.
And then it goes on to list.
That's pretty funny.
Hey, can I see it?
I'm serious for you.
This is cool.
Oh, sure.
I mean, what has Elon Musk been wrong about?
Well, where do I begin?
I guess this is new.
Elon Musk has made quite a few blunders in his time
and I'm not just talking about his dance moves
at the Tesla Cyber Rodeo.
First off, let's talk about the infamous
funding secured tweet.
This is actually different than what it gave me before,
which is a bullet point list.
Here's more narrative structure.
Cybertruck unveiling,
with its supposedly unbreakable windows.
That's not correct.
Which part?
Well, the first part,
I did actually have the funding secured
and there was a big trial in San Francisco,
big civil trial.
And the jury found me not guilty.
Unanimous finding of a San Francisco jury.
And here it's kind of implying that
it was not in fact secured.
I think this is taking things from the press.
Yeah, that is not correct.
The reason I agreed to the fine for the SEC
is not because the SEC was correct.
That was extremely bad behavior by the SEC,
corruption, frankly.
But if I did not agree to pay the fine,
Tesla would have gone bankrupt immediately.
So I was told by our CFO that the banks
would immediately suspend our lines of credit.
And if they suspend our lines of credit at that time,
we would have gone bankrupt instantly.
So there would never have been an opportunity
for a trial because Tesla would be dead.
So really, this is like someone holding a gun
to your kid's head and saying pay $20 million
and admit guilt. This is like a hostage negotiation.
Was that story fully told?
I mean, SEC in its best form could be a force for good.
It should be.
But not once did the SEC go after any of the hedge funds
who were nonstop shorting and distorting Tesla.
Not once.
The hedge funds would lie flat out
on TV for their own gain at the expense
of retail investors.
Not once, literally a thousand times.
Not once did the SEC pursue them.
How do you explain this failure?
The incentive structure is messed up
because the lawyers at the SEC are not paid well.
It's a fairly low-paying job.
But what they're looking for is a trophy.
From the SEC, they're looking for something they can put on,
basically their LinkedIn.
From that, they can get a job at a high-paying law firm.
That's exactly what the lawyer here did.
And the reason they don't attack the hedge funds
is because those hedge funds employ those law firms.
And they know if they attack the hedge funds,
they're affecting their future career prospects.
So they sell small investors down the river
for their own career.
That's what actually happens.
Regulatory capture.
Regulatory capture.
Yeah, not good.
So the only reason I accepted that thing,
which technically was not an admission,
it's neither admit nor deny guilt.
But the only reason I agreed to that at all
was because I was told Tesla would be bankrupt otherwise.
So if there was an SEC investigation like this,
banks would suspend funding,
we'd be bankrupt immediately, at the time.
Now we're in a much stronger position.
Take that, Grok.
Yes, unfortunately Grok is taking too much
from the conventional media.
Also that guy was not a cave diver.
Oh, there's a time where Elon called
a British cave diver a quote, pedo guy.
After the diver criticized Musk's plan
to rescue a group of boys trapped in a Thai cave,
that little outburst earned him another lawsuit
and he had to apologize and pay a settlement.
That's false, there was no settlement.
There was a court case in which the guy
who was not a cave diver and was not part of the rescue team
filed a lawsuit against me and lost and he received nothing.
So in this case it is wrong.
It has also, I guess, taken this from the conventional media.
Actually there's an interesting question here.
These are public court cases.
Both the SEC civil case, where the civil complainants,
the SEC guys, lost.
Unanimous jury verdict in San Francisco.
They picked San Francisco because they thought
it was the place I was most likely to lose
and a unanimous verdict in my favor.
The LA trial was also, they picked that venue
because they thought I was most likely to lose.
Unanimous verdict in my favor.
Both cases I won.
Yeah.
There's an interesting question here.
There seems to be a lot more clicks
if a journalistic organization writes a negative article
about you, Elon Musk.
That's like one of the best ways to get clicks.
So how do you, if you're training Grok,
not train on articles that have misaligned incentives?
We need to add the training set
of the actual legal decisions.
So that is a note, this is actually helpful
because if you actually read the court.
Which are public.
Which are public.
The court conclusions.
They're completely the opposite of what the media wrote.
So always striving for the ground truth.
Yeah.
Beyond the reporting.
What did the judge actually write?
What did the jury and the judge actually conclude?
And in both cases they found me innocent.
And that's after they shopped for the venue
where I was most likely to lose.
Now, I mean, obviously it can be
a much better critique than this.
I mean, I've been far too optimistic about autopilot.
That was, the critique I got, by the way,
was more about that.
Which is, it broke down into a nice bullet point list
for each of your companies, the set of predictions
that you made, when you'll deliver,
when you'll be able to solve, for example,
self-driving, and it gives you a list.
And it was kind of compelling.
And the basic takeaway is you're often too optimistic
about how long it takes to get something done.
Yeah, I mean, I would say that I'm pathologically
optimistic on schedule.
This is true.
But while I am sometimes late, I always deliver in the end.
Except with Uber Lilith, no.
We'll see.
Okay, is there, over the past year or so,
since purchasing X, you've become more political.
Is there a part of you that regrets that?
Have I?
In this battle to
sort of counterweigh the woke that comes from San Francisco.
Yeah, I guess if you consider fighting the woke mind virus,
which I consider to be a civilizational threat,
to be political, then yes.
So basically going into the battle,
the battleground of politics.
Is there a part of you that regrets that?
So I don't know if this is necessarily sort of
one candidate or another candidate,
but it's, I'm generally against things
that are anti-meritocratic
or where there's an attempt to suppress discussion,
where even discussing a topic is not allowed.
The woke mind virus is communism rebranded.
Well, I mean, that said, because of that battle
against the woke mind virus,
you're perceived as being right-wing.
If the woke is left, then I suppose that would be true.
But I'm not sure, I think there are aspects of the left
that are good.
I mean, if you're in favor of the environment,
if you want to have a positive future for humanity,
if you believe in empathy for your fellow human beings,
being kind and not cruel, whatever those values are.
You said that you were previously left
or center-left, what would you like to see
in order for you to consider voting for Democrats again?
No, I would say that I'd be
probably left of center on social issues,
probably a little bit right of center on economic issues.
And that still holds true.
Yes, but I think that's probably, you know,
half the country, isn't it?
Maybe more.
Maybe more.
Are you and AOC secretly friends?
Or, bigger question, do you wish you and her,
and just people in general of all political persuasions
would talk more with empathy
and maybe have a little bit more fun
and good vibes and humor online?
I'm always in favor of humor.
That's why we have a funny mode.
But good vibes, camaraderie, humor, you know?
Like friendship.
Yeah, well, you know, I don't know AOC.
Yeah, you know, I've only been at one,
I was at the Met Ball when she attended.
And she was wearing this dress.
But I can only see one side of it,
so it looked like eat the itch.
But I don't know.
What the rest of it said?
Yeah.
I'm not sure.
Sorry about the itch.
Eat the itch.
I think we should have a language model complete.
What are the possible ways to complete that sentence?
And so I guess that didn't work out well.
Well, there's still hope.
I root for friendship.
Sure, sounds good.
More characteristic.
You're one of, if not the most famous,
wealthy and powerful people in the world.
And in your position, it's difficult to find people you can trust.
Trust no one, not even yourself, not trusting yourself.
Okay, well that's, you're saying that jokingly.
But is there some aspect?
Trust no one, not even no one.
You need an hour just to think about that.
And maybe some drugs.
And maybe Grok, too.
I mean, is there some aspect of that
when just existing in a world
where everybody wants something from you?
How hard is it to exist in that world?
I'll survive.
There's a song like that too.
I will survive.
Were you petrified at first?
Okay.
Now you forget the rest of the lyrics.
But is there, you don't struggle with this?
I mean, I know you survive,
but there's ways.
Petrify is a spell in the Druid tree.
What does it do?
Petrify.
It turns the monsters into stone.
Oh, like literally?
Yeah, for like six seconds.
Oh, only six seconds.
There's so much math in Diablo that breaks my brain.
It's like math nonstop.
I mean, really, you're like laughing at it,
but you don't, it can put a huge amount
of tension on a mind.
Yes, it can be definitely stressful at times.
Well, how do you know who you can trust
in work and personal life?
I mean, I guess you look at somebody's track record
over time and if they've got a,
I guess you kind of use your neural net
to assess someone.
Neural nets don't feel pain.
Your neural net has consciousness
and it might feel pain when people betray you.
It can make you so cold.
You know, to be frank,
I've almost never been betrayed.
It's very rare.
So, you know, for what it's worth.
I guess karma might be good to people
and that'll be good to you.
Yeah, karma's real.
Are there people you trust?
Let me edit that question.
Are there people close to you
that call you out on your bullshit?
Well, the X platform's very helpful for that.
If you're looking for critical feedback.
Can it push you into the extremes more?
The extremes of thought make you cynical
about human nature in general?
I don't think I will be cynical.
In fact, I think,
you know, my feeling is that one should be,
you know, never trust a cynic.
The reason is that cynics excuse their own bad behavior
by saying everyone does it because they're cynical.
So it's always a red flag to me if someone's a cynic,
a true cynic.
Yeah, there's a degree of projection there
that's always fun to watch from the outside
and enjoy the hypocrisy.
But this is an important point
that I think people who are listening should bear in mind.
If somebody is cynical,
meaning that they see bad behavior in everyone,
it's easy for them to excuse their own bad behavior
by saying that, well, everyone does it.
That's not true.
Most people are kind of medium good.
I do wish the people on X would be better
at seeing the good in other people's behavior.
There seems to be a kind of weight
towards seeing the negative.
Somehow the negative is sexier.
Interpreting the negative is sexier, more viral.
I don't know what that is exactly about human nature.
I mean, I find the X platform to be less negative
than the legacy media, you know?
I mean, if you read sort of conventional newspapers,
just it makes you sad, frankly.
Whereas, I'd say on the X platform,
I mean, I really get more laughs per day on X
than everything else combined, from humans, you know?
Laughs is one thing.
Laughs is, it overlaps, but it's not necessarily
perfectly overlapping with good vibes and support.
Celebrating others, for example.
Not in a stupid, shallow, naive way,
but in an awesome way.
Like, oh, something awesome happened
and you celebrate them for it.
It feels that that is outweighed
by shitting on other people.
Now, it's better than mainstream media, but it's still-
Yeah, mainstream media is almost relentlessly negative
about everything.
It's, I mean, really the conventional news
tries to answer the question,
what is the worst thing that happened on Earth today?
And it's a big world.
So on any given day, something bad has happened.
And a generalization of that,
what is the worst perspective I can take
on a thing that happened?
So-
Yeah, it's, I don't know,
there's just a strong negative bias in the news.
I mean, I think there's,
a possible explanation for this is evolutionary.
Where, you know, bad news historically
would be potentially fatal.
Like, there's a lion over there,
or there's some other tribe that wants to kill you.
Good news, you know,
like we found a patch of berries is nice to have,
but not essential.
So our old friend, Tesla Autopilot,
is probably one of the most intelligent
real-world AI systems in the world.
Right, you followed it from the beginning.
Yeah, it was one of the most incredible robots
in the world and continues to be.
Yeah.
And it was really exciting.
And it was super exciting when it generalized,
became more than a robot on four wheels,
but a real-world AI system that perceives the world.
Yeah.
And can have potentially different embodiments.
Well, I mean, the really wild thing
about the end-to-end training is that it learns to read,
like it can read signs,
but we never taught it to read.
So, yeah, we never taught it what a car was,
or what a person was, or a bicyclist.
It learned what all those things are,
what all the objects are on the road from video,
just from watching video, just like humans.
I mean, humans are photons in and controls out.
Like the vast majority of information
reaching our brain is from our eyes.
And you say, well, what's the output?
The output is our motor signals to our sort of fingers
and mouth in order to communicate.
Photons and controls out.
The same is true of the car.
But by looking at the sequence of images,
you've agreed with Ilya Sutskever recently,
where he talked about LLMs forming a world model,
and basically language is a projection of that world model
onto the sequence of letters.
And you're saying-
It finds order in these things.
It finds correlative clusters.
In so doing, it's like understanding
something deep about the world.
Yeah.
Which is like, that is beautiful.
That's how our brain works.
Yeah, but it's beautiful.
Photons and controls out.
Neural nets are able to understand
that deep meaning in the world.
And so the question is, how far can it go?
And it does seem everybody's excited about LLMs,
in the space of self-supervised learning,
in the space of text.
Yeah.
It seems like there's a deep similarity between that
and what Tesla Autopilot is doing.
Is it to you basically the same?
They're all converging.
They're all converging.
I wonder who gets there faster,
having a deep understanding of the world.
Or they just will naturally converge?
They're both headed towards AGI.
The Tesla approach is much more compute-efficient.
It had to be, because we were constrained on,
you know, we only have 100 watts.
And it's int8 compute, 144 trillion operations per second,
which sounds like a lot,
but it's kind of small potatoes these days.
At int8.
But it's understanding the world at int8.
It's only 256 values.
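Checking the numbers in this exchange, assuming the 256 values refer to int8 arithmetic on a roughly 144-TOPS, 100-watt onboard computer:

```python
ops_per_second = 144e12  # 144 trillion operations per second
power_watts = 100

# Useful compute per watt, the metric discussed earlier.
print(f"{ops_per_second / power_watts / 1e12:.2f} trillion ops/s per watt")  # 1.44

print(2 ** 8)  # an 8-bit integer can represent exactly 256 distinct values
```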
But there, the path to AGI might have
much more significant impact,
because it's understanding,
it'll understand the real world faster than LLMs will,
and therefore be able to integrate
with the humans in the real world faster.
They're both going to understand the world,
but I think Tesla's approach
is fundamentally more compute-efficient.
It had to be, there was no choice.
Like, our brain is very compute-efficient,
very energy-efficient.
So, think of like, what is our brain able to do?
There's only about 10 watts of higher brain function,
not counting stuff that's just used to control our body.
The thinking part of our brain is less than 10 watts.
And that 10, those 10 watts can still produce
a much better novel than a 10 megawatt GPU cluster.
So there's a six orders of magnitude difference there.
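The arithmetic behind that claim, checked:

```python
import math

brain_watts = 10      # higher brain function, per the estimate above
cluster_watts = 10e6  # a 10 megawatt GPU cluster

print(math.log10(cluster_watts / brain_watts))  # 6.0 -> six orders of magnitude
```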
I mean, the AI has thus far gotten to where it is
via brute force, just throwing massive amounts of compute
and massive amounts of power at it.
So, this is not where it will end up.
In general, with any given technology,
first try to make it work, and then you make it efficient.
So I think we'll find over time that these models
get smaller, are able to produce sensible output
with far less compute, far less power.
Tesla is arguably ahead of the game on that front
because we've just been forced to try to understand
the world with 100 watts of compute.
And there are a bunch of fundamental functions
that we kind of forgot to include,
so we have to run a bunch of things in emulation.
We fixed a bunch of those with hardware four,
and then hardware five will be even better.
But it does appear, at this point,
that the car will be able to drive better than a human,
even with hardware three and 100 watts of power.
And really, if we really optimize it,
it could be far less than 50 watts.
What have you learned about developing Optimus,
about applying, integrating this kind of real world AI
into the space of robotic manipulation,
just humanoid robotics?
What are some interesting tiny or big things
you've understood?
I was surprised at the fact that we had to develop
every part of the robot ourselves,
that there were no off-the-shelf motors,
electronics, sensors, like we had to develop everything.
We couldn't actually find a source of electric motors
for any amount of money.
So it's not even just efficient, inexpensive,
it's like anything, there's not a...
No.
The actuators, everything has to be designed from scratch.
We tried hard to find anything that was,
because you think of how many electric motors
are made in the world.
There's like tens of thousands, hundreds of thousands
of electric motor designs.
None of them were suitable for a humanoid robot,
literally none.
So we had to develop our own design,
design it specifically for what a humanoid robot needs.
How hard was it to design something
that can be mass manufactured,
could be relatively inexpensive?
Maybe if you compare it to Boston Dynamics Atlas,
it's a very expensive robot.
It is designed to be manufactured in the same way
they would make a car.
And I think ultimately we can make Optimus
for less than the cost of a car.
It should be, because if you look at the mass of the robot,
it's much smaller and the car has many actuators in it.
The car has more actuators than the robot.
But the actuators in a humanoid robot,
with the fingers, are kind of interesting.
So Optimus has really nice hands and fingers, you know?
Yeah.
And they could do some interesting manipulation.
So soft touch robotics.
I mean, one of the test goals I have is,
can it pick up a needle and a thread
and thread the needle just by looking?
How far away are we from that?
Just by looking, just by looking.
Maybe a year.
Although I go back to I'm optimistic on time.
The work that we're doing in the car
will translate to the robot.
The perception or also the control?
No, the controls are different,
but it's video in, controls out.
The car is a robot on four wheels.
The optimist is a robot with hands and legs.
So you can just-
They're very similar.
So the entire machinery of the learning process,
end to end, you just have a different set of controls.
Optimus will figure out
how to do things by watching videos.
As the saying goes,
be kind for everyone you meet is fighting a battle
you know nothing about.
Yeah, it's true.
What's something difficult you're going through
that people don't often see?
Trying to defeat Uber Lilith.
Yeah.
No, I mean, you know,
I mean, my mind is a storm and I don't think,
I don't think most people would want to be me.
They may think they'd want to be me,
but they don't know, they don't understand.
How are you doing?
I'm overall okay.
In the grand scheme of things, I can't complain.
Do you get lonely?
Sometimes, but I,
you know, my kids and friends keep me company.
So not existential.
There are many nights I sleep alone.
I don't have to, but I do.
Walter Isaacson, in his new biography of you,
wrote about your difficult childhood.
Will you ever find forgiveness in your heart
for everything that has happened to you
in that period of your life?
What is forgiveness?
I do not,
at least I don't think I harbor resentment.
So, nothing to forgive.
You know, forgiveness is difficult for people.
It seems like you don't harbor the resentment.
I mean, I try to think about, like,
what is gonna affect the future in a good way?
And holding onto grudges does not
affect the future in a good way.
Your father, a proud father,
what have you learned about life from your kids?
Those little biological organisms.
I mean, developing AI and watching, say,
little X grow is fascinating,
because there are far more parallels
than I would have expected.
I mean, I can see his biological neural net
making more and more sense to the world,
and I can see the digital neural net
making more and more sense to the world at the same time.
Do you see the beauty and magic in both?
Yes.
I mean, one of the things with kids is that,
you know, you kind of see the world anew in their eyes.
You know, to them, everything is new and fresh.
And then when you see them experience
the world as new and fresh, you do too.
Well, Elon, I just wanna say thank you
for your kindness to me and friendship over the years,
for seeing something in a silly kid like me,
as you've done for many others.
And thank you for having hope
for a positive future for humanity
and for working your ass off to make it happen.
Thank you, Elon.
Thanks, Lex.
Thank you for listening to this conversation with Elon Musk.
To support this podcast,
please check out our sponsors in the description.
And now, let me leave you with some words
that Walter Isaacson wrote about the central philosophy
of how Elon approaches difficult problems.
The only rules are the ones dictated
by the laws of physics.
Thank you for listening and hope to see you next time.