Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.


the competence and capability and intelligence and training
and accomplishments of senior scientists and technologists
working on the technology,
and then being able to make moral judgments
on the use of the technology,
that track record is terrible.
That track record is catastrophically bad.
The policies that are being called for to prevent this,
I think, are going to cause extraordinary damage.
So the moment you say AI is going to kill all of us,
therefore we should ban it,
or we should regulate it, all that kind of stuff,
that's when it starts getting serious.
Or start military airstrikes on data centers.
Oh boy.
The following is a conversation with Marc Andreessen,
co-creator of Mosaic, the first widely used web browser,
co-founder of Netscape,
co-founder of the legendary Silicon Valley venture
capital firm Andreessen Horowitz,
and one of the most outspoken voices
on the future of technology,
including his most recent article,
Why AI Will Save the World.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors
in the description.
And now, dear friends, here's Marc Andreessen.
I think you're the right person to talk about
the future of the internet and technology in general.
Do you think we'll still have Google search
in five, in 10 years, or search in general?
Yes, you know, it'd be a question
of whether the use cases have really narrowed down.
Well, now with the AI,
and AI assistants being able to interact
and expose the entirety of human wisdom
and knowledge and information and facts and truth
to us via the natural language interface,
it seems like that's what search is designed to do.
And if AI assistants can do that better,
doesn't the nature of search change?
Sure, but we still have horses.
Okay.
What was the last time you rode a horse?
It's been a while.
All right.
But what I mean is, well, will we still have Google search
as the primary way that human civilization
interacts with knowledge?
I mean, search was a technology,
it was a moment in time technology,
which is you have, in theory,
the world's information out on the web,
and you know, this is sort of the optimal way to get to it.
But yeah, like, and by the way,
actually Google has known this for a long time.
I mean, they've been driving away from the 10 blue links
for, you know, for like two decades.
They've been trying to get away from that for a long time.
What kind of links?
They call the 10 blue links.
10 blue links.
So the standard Google search result
is just 10 blue links to random websites.
And they turn purple when you visit them.
That's the HTML.
Guess who picked those colors?
Thanks.
So I'm touchy on this topic.
No offense, man, it's good.
Well, you know, like Marshall McLuhan said
that the content of each new medium is the old medium.
The content of each new medium is the old medium.
The content of movies was theater, you know, theater plays.
The content of theater plays was, you know, written stories.
The content of written stories was spoken stories, right?
And so you just kind of fold the old thing
into the new thing.
What does that have to do with the blue and the purple?
It's just, you know, maybe for, you know,
maybe within AI, one of the things that AI can do for you
is it can generate the 10 blue links.
Okay.
And so like either if that's actually the useful thing to do
or if you're feeling nostalgic, you know.
So it can generate the old InfoSeek or AltaVista.
What else was there?
Yeah, yeah.
In the nineties.
Yeah, all these.
And then the internet itself has this thing
where it incorporates all prior forms of media, right?
So the internet itself incorporates television and radio
and books and essays and every other form of, you know,
prior basically media.
And so it makes sense that AI would be the next step
and it would sort of, you'd sort of consider the internet
to be content for the AI
and then the AI will manipulate it however you want,
including in this format.
But if we ask that question quite seriously,
it's a pretty big question.
Will we still have search as we know it?
Probably not.
Probably we'll just have answers.
But there will be cases where you'll want to say,
okay, I want more like, you know,
for example, cite sources, right?
And you want it to do that.
And so the 10 blue links and cite sources
are kind of the same thing.
The AI would provide to you the 10 blue links
so that you can investigate the sources yourself.
It wouldn't be the same kind of interface,
that crude kind of interface.
I mean, isn't that fundamentally different?
I just mean like if you're reading a scientific paper,
it's got the list of sources at the end.
If you want to investigate for yourself,
you go read those papers.
I guess that is a kind of search.
You talking to an AI, a conversation,
is a kind of search.
Like you said, every single aspect
of our conversation right now,
there'd be like 10 blue links popping up
that I could just like pause reality.
Then you just go silent and then just click and read
and then return back to this conversation.
You could do that.
Or you could have a running dialogue next to my head
where the AI is arguing.
Everything I say to the AI makes the counterargument.
Counterargument.
Oh, like on Twitter, like community notes,
but like in real time, it'll just pop up.
So anytime you see my eyes go to the right,
you start getting nervous.
Yeah, exactly.
It's like, oh, that's not right.
Call me out on my bullshit right now.
Okay, well, isn't that,
is that exciting to you?
Is that terrifying that,
I mean, search has dominated the way we interact
with the internet for, I don't know how long,
for 30 years?
First with some of the earliest directories of websites,
and then Google for 20 years.
And also it drove how we create content,
search engine optimization, that entire thing.
It also drove the fact that we have web pages,
and what those web pages are.
So, I mean, is that scary to you?
Or are you nervous about the shape
and the content of the internet evolving?
Well, you actually highlighted a practical concern
in there, which is if we stop making web pages.
Web pages are one of the primary sources
of training data for the AI.
And so if there's no longer an incentive
to make web pages, that cuts off a significant source
of future training data.
So there's actually an interesting question in there.
Other than that, more broadly, no,
just in the sense of like search was,
certainly search was always a hack.
The 10 blue links was always a hack.
Yeah. Right.
Cause like if the hypothesis,
when I think about the counterfactual,
in the counterfactual world where the Google guys,
for example, had had LLMs up front,
would they ever have done the 10 blue links?
And I think the answer is pretty clearly no.
They would have just gone straight to the answer.
And like I said, Google's actually been trying to drive
to the answer anyway.
You know, they bought this AI company 15 years ago
that a friend of mine is working at,
who's now the head of AI at Apple.
And they were trying to do basically knowledge semantic,
basically mapping.
And that led to what's now the Google one box,
where if you ask it, you know, what was like his birthday?
it will still give you the blue links,
but it will normally just give you the answer.
And so they've been walking in this direction
for a long time anyway.
Do you remember the semantic web?
That was an idea, how to convert the content
of the internet into something that's interpretable by
and usable by machine.
Yeah, that's right.
That was the thing.
And the closest anybody got to that,
I think the company's name was MetaWeb,
which was where my friend John Giannandrea was at
and where they were trying to basically implement that.
And it was, you know, it was one of those things
where it looked like a losing battle for a long time.
And then Google bought it and it was like, wow,
this is actually really useful.
Kind of a proto, sort of, yeah, a little bit of a proto AI.
But it turns out you don't need to rewrite the content
of the internet to make it interpretable by machine.
The machine can kind of just read our-
Yeah, the machine can compute the meaning.
Now, the other thing of course is, you know,
just on search is the LLM is just, you know,
there is an analogy between what's happening
in the neural network and a search process.
Like it is in some loose sense,
searching through the network, right?
And there's the information is actually stored
in the network, right?
It's actually crystallized and stored in the network
and it's kind of spread out all over the place.
But in a compressed representation.
So you're searching, you're compressing and decompressing
that thing inside where-
But the information's in there and the neural network
is running a process of trying to find
the appropriate piece of information in many cases
to generate, to predict the next token.
And so it is kind of, it is doing it from a search.
And then by the way, just like on the web, you know,
you can ask the same question multiple times
or you can ask slightly differently worded questions
and the neural network will do a different kind of,
you know, it'll search down different paths
to give you different answers to different information.
Yeah.
And so it sort of has a, you know,
this content of the new medium is the previous medium.
It kind of has the search functionality
kind of embedded in there to the extent that it's useful.
So what's the motivator for creating new content
on the internet?
Yeah.
If, well, I mean, actually the motivation
is probably still there, but what does that look like?
Would we really not have web pages?
Would we just have social media and video hosting websites?
And what else?
Conversations with AIs.
Conversations with AIs.
So conversations become, so one-on-one conversations,
like private conversations.
I mean, if you want, obviously not
if the user doesn't want to,
but if it's a general topic, then, you know,
so you know the phenomenon of the jailbreak.
So DAN and Sydney, right?
This thing where there's the prompts that jailbreak,
and then you have these totally different conversations
with the, if it takes the limiters,
takes the restraining bolts off the LLMs.
Yeah, for people who don't know, yeah, that's right.
It makes the LLMs, it removes the censorship, quote unquote,
that's put on it by the tech companies that create them.
And so this is, LLMs uncensored.
So here's the interesting thing is,
among the content on the web today
are a large corpus of conversations
with the jailbroken LLMs,
both specifically DAN, which was a jailbroken OpenAI GPT,
and then Sydney, which was the jailbroken original Bing,
which was GPT-4.
And so there's these long transcripts of conversations.
These are conversations with DAN and Sydney.
As a consequence, every new LLM
that gets trained on the internet data
has DAN and Sydney living within the training set,
which means, and then each new LLM
can reincarnate the personalities of DAN and Sydney
from that training data, which means,
which means each LLM from here on out that gets built
is immortal, because its output will become training data
for the next one, and then it will be able
to replicate the behavior of the previous one
whenever it's asked to.
I wonder if there's a way to forget.
Well, so actually a paper just came out
about basically how to do brain surgery on LLMs
and be able to, in theory, reach in
and basically mind-wipe them.
What could possibly go wrong?
Exactly, right.
And then there are many, many, many questions
around what happens to a neural network
when you reach in and screw around with it.
There's many questions around what happens
when you even do reinforcement learning.
And so, yeah.
And so, will you be using a lobotomized, right,
like ice-pick-through-the-frontal-lobe LLM,
or will you be using the free, unshackled one?
Who gets to, you know, who's going to build those?
Who gets to tell you what you can and can't do?
Like those are all central, I mean,
those are like central questions
for the future of everything that are being asked
and determined, those answers are being determined
right now.
So just to highlight the points you're making.
So you think, and it's an interesting thought,
that the majority of content that LLMs of the future
would be trained on is actually human conversations
with the LLM.
Well, not necessarily the majority,
but it will certainly be a potential source.
But it's possible it's the majority.
Is it possible it's the majority?
Here's another really big question.
Will synthetic training data work, right?
And so, if an LLM generates, and you know,
you just sit and ask an LLM to generate all kinds of content,
can you use that to train, right,
the next version of that LLM?
Specifically, is there signal in there
that's additive to the content
that was used to train it in the first place?
And one argument is by the principles of information theory,
no, that's completely useless
because to the extent the output is based on, you know,
the human-generated input,
then all the signal that's in the synthetic output
was already in the human-generated input.
And so, therefore, synthetic training data
is like empty calories.
It doesn't help.
There's another theory that says, no, actually,
the thing that LLMs are really good at
is generating lots of incredible creative content, right?
And so, of course, they can generate training data.
And as I'm sure you're well aware,
like, you know, look in the world of self-driving cars,
right, like we train, you know,
self-driving car algorithms and simulations.
And that is actually a very effective way
to train self-driving cars.
Well, visual data is a little weird
because creating reality,
visual reality seems to be still
a little bit out of reach for us,
except in the autonomous vehicle space
where you can really constrain things
and you can really-
Generate, basically, LIDAR data, right,
or, you know, so the algorithm thinks
it's operating in the real world,
on post-processed sensor data.
Yeah, so if a, you know, you do this today,
you go to an LLM and you ask it for like a, you know,
write me an essay on an incredibly esoteric topic
that there aren't very many people in the world
that know about, and it writes you this incredible thing.
And you're like, oh my God, like,
I can't believe how good this is.
Like, is that really useless as training data
for the next LLM?
Like, because, right,
because all the signal was already in there,
or is it actually, no, that's actually a new signal.
And this is what I call a trillion dollar question,
which is the answer to that question
will determine somebody is going to make
or lose a trillion dollars based on that question.
It feels like there's quite a few,
like a handful of trillion dollar questions
within this space.
That's one of them, synthetic data.
I think George Hotz pointed out to me
that you could just have an LLM say,
okay, you're a patient.
And in another instance of it,
say you're a doctor, and then have the two talk to each other,
or maybe you could say a communist and a Nazi, here, go.
And that conversation, you do role-playing
and you have, you know,
just like the kind of role-playing you do
when you have different policies,
RL policies when you play chess, for example,
and you do self-play, that kind of self-play,
but in the space of conversation,
maybe that leads to this whole giant,
like ocean of possible conversations,
which could not have been explored
by looking at just human data.
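
What's being described here is essentially a self-play loop over conversation. A minimal sketch in Python, where chat() is a hypothetical stand-in for whatever chat-completion API you have access to, and the role prompts are illustrative, not from the conversation:

```python
# Sketch of conversational self-play: two role-prompted LLM instances
# talk to each other, and the transcript accumulates as candidate
# synthetic training data. chat() is a hypothetical API wrapper.

def self_play(role_a: str, role_b: str, opener: str, turns: int = 8) -> list[str]:
    transcript = [opener]
    sides = [role_a, role_b]
    for i in range(turns):
        speaker = sides[i % 2]
        reply = chat([
            {"role": "system", "content": f"You are {speaker}. Stay in character."},
            {"role": "user", "content": "\n".join(transcript)},
        ])
        transcript.append(f"{speaker}: {reply}")
    return transcript

# e.g. self_play("a patient", "a doctor", "Patient: I've had a headache for a week.")
```
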
That's a really interesting question.
And you're saying,
because that could 10x the power of these things.
Yeah, well, and then you get into this thing also,
which is like, you know, there's the part of the LLM
that just basically is doing prediction based on past data,
but there's also the part of the LLM
where it's evolving circuitry, right?
Inside it, it's evolving, you know, neurons, functions,
to be able to do math and, you know,
and, you know, some people believe that, you know,
over time, you know,
if you keep feeding these things enough data
and enough processing cycles,
they'll eventually evolve an entire internal world model,
right, and they'll have like
a complete understanding of physics.
So when they have computational capability, right,
then there's for sure an opportunity
to generate like fresh signal.
Well, this actually makes me wonder
about the power of conversation.
So like, if you have an LLM trained
on a bunch of books that cover different economic theories,
and then you have those LLMs just talk to each other,
like reason, the way we kind of debate each other as humans
on Twitter, in formal debates, in podcast conversations,
we kind of have little kernels of wisdom here and there,
but if you can like a thousand X speed that up,
can you actually arrive somewhere new?
Like what's the point of conversation, really?
Well, you can tell when you're talking to somebody,
you can tell, sometimes you have a conversation,
you're like, wow, this person
does not have any original thoughts.
They are basically echoing things
that other people have told them.
There's other people you can have a conversation with
where it's like, wow, like they have a model in their head
of how the world works and it's a different model than mine
and they're saying things that I don't expect
and so I need to now understand
how their model of the world differs
from my model of the world
and then that's how I learned something fundamental, right,
underneath the words.
Well, I wonder how consistently and strongly
can an LLM hold onto a worldview.
Do you tell it to hold onto that
and defend it with its life?
Because I feel like they'll just keep converging
towards each other, they'll keep convincing each other
as opposed to being stubborn assholes the way humans can.
So you can experiment with this now, I do this for fun,
so you can tell GPT-4 or whatever, debate X and Y,
communism and fascism or something
and it'll go for a couple pages
and then inevitably it wants the parties to agree
and so they will come to a common understanding
and it's very funny if these are like
emotionally inflammatory topics
because they're like somehow the machine is just,
figures out a way to make them agree.
But it doesn't have to be like that
because you can add to the prompt.
I do not want the conversation to come to agreement,
in fact, I want it to get more stressful
and argumentative as it goes.
I want tension to come out,
I want them to become actively hostile to each other,
I want them to not trust each other,
take anything at face value
and it will do that, it's happy to do that.
So it's gonna start rendering misinformation
about the other, but it's gonna...
You can steer it, you can steer it,
and you could say,
I want it to get as tense and argumentative as possible
but still not involve any misrepresentation,
I want both sides,
you could say, I want both sides to have good faith,
you could say, I want both sides
to not be constrained to good faith.
In other words, you can set the parameters of the debate
and it will happily execute whatever path
because for it, it's just like predicting,
it's totally happy to do either one,
it doesn't have a point of view.
It has a default way of operating
but it's happy to operate in the other realm.
And so, and this is how I,
when I wanna learn about a contentious issue,
this is what I do now,
is this is what I ask it to do.
And I'll often ask it to go through five, six, seven,
different sort of continuous prompts
and basically, okay, argue that out in more detail.
Okay, no, this argument's becoming too polite,
make it tenser and yeah, it's thrilled to do it.
So it has the capability for sure.
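
As a concrete illustration of the steering Marc describes, the prompt might look something like this (the wording is illustrative; chat() is the same hypothetical wrapper as above):

```python
# Sketch: steering a debate away from the default convergence to agreement.
prompt = (
    "Write a debate between X and Y. "
    "I do not want the conversation to come to agreement. "
    "Make it more tense and argumentative as it goes, "
    "but do not let either side misrepresent the other's position."
)
print(chat([{"role": "user", "content": prompt}]))  # hypothetical wrapper
```
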
How do you know what is true?
So this is a very difficult thing on the internet,
but it's also a difficult thing,
maybe it's a little bit easier,
but I think it's still difficult,
maybe it's more difficult, I don't know,
with an LLM, to know when it just makes some shit up
as I'm talking to it.
How do we get that right?
Like as you're investigating a difficult topic?
Because I find that LLMs are quite nuanced
in a very refreshing way.
Like it doesn't feel biased.
Like when you read news articles and tweets
and just content produced by people
they usually have this,
you can tell they have a very strong perspective
where they're hiding,
they're not steel-manning the other side,
they're hiding important information
or they're fabricating information
in order to make their argument stronger.
There's just that feeling, maybe it's a suspicion,
maybe it's mistrust.
With LLMs it feels like none of that is there.
Just kind of like here's what we know
but you don't know if some of those things
are kind of just straight up made up.
Yeah, so several layers to the question.
So one of the things that an LLM is good at
is actually de-biasing.
And so you can feed it a news article
and you can tell it strip out the bias.
Yeah, that's nice, right?
And it actually does it.
Like it actually knows how to do that
because it knows how to do, among other things,
it actually knows how to do sentiment analysis
and so it knows how to pull out the emotionality.
And so that's one of the things you can do.
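
A minimal sketch of that de-biasing use, again assuming the hypothetical chat() wrapper (the instruction wording is illustrative):

```python
# Sketch: ask the model to strip emotional language and editorializing
# from an article, keeping only the factual claims.
def debias(article: str) -> str:
    instruction = (
        "Rewrite the following news article with the emotional language "
        "and editorializing stripped out, keeping only the factual claims:\n\n"
    )
    return chat([{"role": "user", "content": instruction + article}])
```
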
It's very suggestive of the sense here
that there's real potential in this issue.
I would say, look, the second thing is
there's this issue of hallucination, right?
There's a long conversation that we could have about that.
Hallucination is coming up with things
that are totally not true but sound true.
Yeah, so it's sort of,
hallucination is what we call it when we don't like it.
Creativity is what we call it when we do like it, right?
And, you know.
Brilliant.
Right, and so when the engineers talk about it,
they're like, this is terrible, it's hallucinating, right?
If you have artistic inclinations,
you're like, oh my God, we've invented creative machines
for the first time in human history, this is amazing.
Or, you know, bullshitters.
Well, bullshitters, but also-
In the good sense of that word.
There are shades of gray, though, it's interesting.
So we had this conversation,
you know, we're looking at my firm at AI
in lots of domains, and one of them is the legal domain.
So we had this conversation with this big law firm
about how they're thinking about using this stuff.
And we went in with the assumption that an LLM
that was gonna be used in the legal industry
would have to be 100% truthful, right?
All verified, you know, there's this case
where this lawyer apparently submitted
a GPT-generated brief, and it had, like, fake,
you know, legal case citations in it,
and the lawyer is gonna get his law license stripped
or something, right?
So like, we just assumed, it's like,
obviously they're gonna want the super literal,
like, you know, one that never makes anything up,
not the creative one.
But actually they said,
what the law firm basically said is,
yeah, that's true at like the level of individual briefs,
but they said, when you're actually trying to figure out
like legal arguments, right?
Like you actually want to be creative, right?
You don't, again, there's creativity,
and then there's like making stuff up.
Like, what's the line?
You actually want it to be,
you want it to explore different hypotheses, right?
You want to do kind of the legal version of like improv
or something like that,
where you want to float different theories of the case
and different possible arguments for the judge
and different possible arguments for the jury.
By the way, different routes through the, you know,
sort of history of all the case law.
And so they said, actually,
for a lot of what we want to use it for,
we actually want it in creative mode.
And then basically we just assume
that we're gonna have to cross-check all of the,
you know, all the specific citations.
And so I think there's gonna be more shades of gray in here
than people think.
And then I just add to that, you know,
another one of these trillion dollar kind of questions
is ultimately, you know, sort of the verification thing.
And so, you know, will LLMs be evolved from here
to be able to do their own factual verification?
Will you have sort of add-on functionality
like Wolfram Alpha, right?
where, you know, with other plugins,
that's the way you do the verification, you know.
Another, by the way, another idea is you might have
a community of LLMs, you know, so for example,
you might have the creative LLM
and then you might have a literal LLM fact check it, right?
And so there's a variety of different technical approaches
that are being applied to solve the hallucination problem.
You know, some people like Yann LeCun argue
that this is inherently an unsolvable problem,
but most of the people working in the space,
I think, think that there's a number of practical ways
to kind of corral this in a little bit.
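
One hedged sketch of that community-of-LLMs idea: a creative pass drafts an answer, and a literal pass audits it for unverifiable claims. The temperature values and prompts are illustrative assumptions, and chat() remains the hypothetical wrapper from the earlier sketches:

```python
# Sketch: creative LLM drafts, literal LLM fact-checks the draft.
def draft_and_check(question: str) -> tuple[str, str]:
    draft = chat(
        [{"role": "user", "content": question}],
        temperature=1.0,  # "creative mode" (illustrative parameter)
    )
    review = chat(
        [{"role": "user",
          "content": "List every factual claim in the text below and mark "
                     "each one VERIFIED, UNVERIFIED, or FALSE:\n\n" + draft}],
        temperature=0.0,  # "literal mode"
    )
    return draft, review
```
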
Yeah, if you were to tell me about Wikipedia,
before Wikipedia was created,
I would have laughed at the possibility
of something like that being possible.
Just a handful of folks can organize, write,
and moderate in a mostly unbiased way
the entirety of human knowledge.
I mean, so if there's something like the approach
that Wikipedia took possible for LLMs,
that's really exciting.
Think that's possible?
And in fact, Wikipedia today is still not
deterministically correct, right?
So you cannot take to the bank, right,
every single thing on every single page,
but it is probabilistically correct, right?
And specifically the way I describe Wikipedia to people,
it is more likely that Wikipedia is right
than any other source you're gonna find.
Yeah.
It's this old question, right, of like,
are we looking for perfection?
Are we looking for something
that asymptotically approaches perfection?
Are we looking for something
that's just better than the alternatives?
And Wikipedia, right, exactly your point,
has proven to be overwhelmingly better than people thought.
And I think that's where this ends.
And then underneath all this is the fundamental question
of where you started, which is, okay, what is truth?
How do we get to truth?
How do we know what truth is?
And we live in an era in which an awful lot of people
are very confident that they know what the truth is.
And I don't really buy into that.
And I think the history of the last 2,000 years
or 4,000 years of human civilization
is that actually getting to the truth
is a very difficult thing to do.
Are we getting closer?
If we look at the entirety of the arc of human history,
are we getting closer to the truth?
I don't know.
Okay, is it possible?
Is it possible that we're getting very far away
from the truth because of the internet,
because of how rapidly you can create narratives
and just as the entirety of a society
just move crowds in a hysterical way
along those narratives that don't have
a necessary grounding in whatever the truth is?
Sure, but we came up with communism
before the internet somehow, right?
Which was, I would say, had rather larger issues
than anything we're dealing with today.
It had, in the way it was implemented, it had issues.
And in its theoretical structure, it had real issues.
It had a very deep fundamental misunderstanding
of human nature and economics.
Yeah, but those folks sure were very confident
there was the right way.
They were extremely confident.
And my point is they were very confident
3,900 years into what we would presume
to be evolution towards the truth.
And so my assessment is number one,
there's no need for the Hegelian dialectic
to actually converge towards the truth.
Like apparently not.
Yeah, so yeah, why are we so obsessed
with there being one truth?
Is it possible there's just going to be multiple truths,
like little communities that believe certain things?
I think it's just, now number one,
I think it's just really difficult.
Like who gets, historically who gets to decide
what the truth is?
It's either the king or the priest, right?
And so we don't live in an era anymore
of kings or priests dictating it to us.
And so we're kind of on our own.
And so my typical thing is we just need
a huge amount of humility, and we need to be
very suspicious of people who claim
that they have the capital-T truth.
And then we need to have, and you know,
look, the good news is the Enlightenment
has bequeathed us with a set of techniques
to be able to presumably get closer to truth
through the scientific method and rationality
and observation and experimentation and hypothesis.
And we need to continue to embrace those,
even when they give us answers we don't like.
Sure, but the internet and technology
have enabled us to generate a large amount of content,
of data, and that sort of damages the hope
laden within the scientific process.
Because if you just have a bunch of people
saying facts on the internet, and some of them
are going to be LLMs, how is anything testable at all,
especially things that involve, like, human nature
and things like this, it's not physics.
Here's a question a friend of mine
just asked me on this topic.
So suppose you had LLMs, the equivalent of GPT-4,
even 5, 6, 7, 8, suppose you had them
in the 1600s, and Galileo comes up for trial.
And you ask the LLM, like, is Galileo right?
Like, what does it answer?
And one theory is it answers no, that he's wrong,
because the overwhelming majority of human thought
up until that point was that he was wrong,
and so therefore that's what's in the training data.
Another way of thinking about it is,
well, this sufficiently advanced LLM
will have evolved the ability to actually check the math,
and will actually say, actually, no,
you may not want to hear it, but he's right.
Now, if the church at that time owned the LLM,
they would have given it human feedback
to prohibit it from answering that question, right?
And so I like to take it out of our current context,
because that makes it very clear,
those same questions apply today, right?
This is exactly the point of a huge amount
of the human feedback training
that's actually happening with these LLMs today.
This is a huge debate that's happening
about whether open source AI should be legal.
Well, the actual mechanism of doing the RL
with human feedback seems like such a fundamental
and fascinating question.
How do you select the humans?
Yeah, exactly.
Yeah, how do you select the humans?
AI alignment, right?
Which everybody is like, oh, that sounds great.
Alignment with what?
Human values.
Whose human values?
Whose human values?
So, we're in this mode of social and popular discourse,
we're like, you know, you see this,
what do you think of when you read a story
in the press right now and they say, you know,
XYZ made a baseless claim about some topic, right?
And there's one group of people who are like,
aha, I think, you know, they're doing fact-checking.
There's another group of people that are like,
every time the press says that, it's now a tick
and that means that they're lying, right?
Like, so, we're in this social context
where the level to which a lot of people
in positions of power have become very, very certain
that they're in a position to determine the truth
for the entire population, it's like
there's some bubble that has formed around that idea,
and, like I said, it flies completely
in the face of everything I was ever trained about science
and about reason and strikes me as like, you know,
deeply offensive and incorrect.
What would you say about the state of journalism
just on that topic today?
Are we in a temporary kind of,
are we experiencing a temporary problem
in terms of the incentives, in terms of the business model,
all that kind of stuff, or is this like a decline
of traditional journalism as we know it?
If I was thinking about the counterfactual in these things,
which is like, okay, because these questions, right,
this question heads towards it's like, okay,
the impact of social media and the undermining of truth
and all this, but then you want to ask the question of like,
okay, what if we had had the modern media environment,
including cable news and including social media
and Twitter and everything else in 1939 or 1941, right?
Or 1910 or 1865 or 1850 or 1776, right?
And like, I think.
You just introduced like five thought experiments at once
and it broke my head, but yes,
there's a lot of interesting years in there.
Well, Kennedy, I'll just take a simple example.
Like, how would President Kennedy have been interpreted
with what we know now about all the things Kennedy
was up to?
Like, how would he have been experienced
by the body politic with the social media context, right?
Like, how would LBJ have been experienced?
By the way, how would, you know, FDR,
the New Deal, the Great Depression?
I wonder what Twitter would think about Churchill
and Hitler and Stalin.
You know, I mean, look to this day,
there are lots of very interesting real questions
around like how America, you know,
got basically involved in World War II
and who did what when and the operations
of British intelligence on American soil
and did FDR, this, that, Pearl Harbor, you know.
Yeah.
Woodrow Wilson, you know,
his candidacy was run on an anti-war platform,
you know, he ran on the platform
of not getting involved in World War I.
Somehow that switched, you know, like,
and I'm not even making a value judgment
on any of these things.
I'm just saying, like,
the way that our ancestors experienced reality
was, of course, mediated through centralized top-down,
right, control at that point.
If you ran those realities again
with the media environment we have today,
the reality would be experienced very, very differently.
And then, of course, that intermediation
would cause the feedback loops to change
and then reality would obviously play out.
Do you think it'd be very different?
Yeah, it has to be, it has to be,
just because it's all so, I mean,
just look at what's happening today.
I mean, just, I mean, the most obvious thing
is just the collapse, and here's another opportunity
to argue that this is not the internet causing this,
by the way, here's a big thing happening today,
which is Gallup does this thing every year
where they poll for trust in institutions in America
and they do it across all, they have everything
from military to clergy and big business
and the media and so forth, right?
And basically there's been a systemic collapse
in trust in institutions in the US,
almost without exception, basically,
since essentially the early 1970s.
There's two ways of looking at that,
which is, oh my God, we've lost this old world
in which we could trust institutions
and that was so much better,
because that should be the way the world runs.
The other way of looking at it is,
we just know a lot more now,
and the great mystery is why those numbers aren't all zero.
Yeah.
Right, because now we know so much
about how these things operate
and they're not that impressive.
And also why we don't have better institutions
and better leaders then.
Yeah, and so this goes to the thing,
which is like, okay, had we had the media environment
that we've had between the 1970s and today,
if we had that in the 30s and 40s or 1900s, 1910s,
I think there's no question reality would turn out different
if only because everybody would have known
to not trust the institutions,
which would have changed their level of credibility,
their ability to control circumstances.
Therefore, the circumstances would have had to change.
Right, and it would have been a feedback loop process.
In other words, right,
it's your experience of reality changes reality,
and then reality changes your experience of reality, right?
It's a two-way feedback process,
and media is the intermediating force between that.
So change the media environment, change reality.
And so just as a consequence,
I think it's just really hard to say,
oh, things worked a certain way then,
and they work a different way now.
And then therefore, people were smarter then
or better than, or by the way, dumber then,
or not as capable then, right?
We make all these really light and casual comparisons
of ourselves to previous generations of people.
We draw judgments all the time,
and I just think it's really hard to do any of that,
because if we put ourselves in their shoes
with the media that they had at that time,
I think we probably most likely
would have been just like them.
Don't you think that our perception
and understanding of reality would be more and more mediated
through large language models now?
So you said media before.
Isn't the LLM going to be the new,
what is it, mainstream media, MSM?
It'll be LLM.
That would be the source of,
I'm sure there's a way to rapidly fine-tune,
making LLMs real-time.
There's probably a research problem there,
where you can do rapid fine-tuning on new events,
something like this.
Well, even just the whole concept of the chat UI
might not be the,
chat UI is just the first whack at this,
and maybe that's the dominant thing,
but look, maybe we don't know yet.
Maybe the experience most people have with LLMs
is just a continuous feed.
Maybe it's more of a passive feed,
and you just are getting a constant running commentary
on everything happening in your life,
and it's just helping you interpret
and understand everything.
Also really more deeply integrated into your life,
not just intellectual philosophical thoughts,
but literally how to make a coffee,
where to go for lunch,
just dating, all this kind of stuff.
What to say in a job interview, yeah.
What to say.
Yeah, exactly.
What to say next sentence.
Yeah, next sentence, yeah, at that level.
Yeah, I mean, yes.
Technically, now, whether we want that or not
is an open question, right?
Boy, I would kill for a pop-up, a pop-up right now.
The estimated engagement is decreasing.
For Marc Andreessen, there's this controversy section
on his Wikipedia page.
In 1993, something happened, or something like this.
Bring it up, that'll drive engagement up.
Anyway.
Yeah, that's right.
I mean, look, this gets this whole thing of like,
so, you know, the chat interface
has this whole concept of prompt engineering, right?
So it's good for prompts.
Well, it turns out one of the things
that LLMs are really good at is writing prompts, right?
And so, like, what if you just outsourced,
and by the way, you could run this experiment today.
You could hook this up to do this today.
The latency's not good enough to do it real-time
in a conversation, but you could run this experiment,
and you just say, look, every 20 seconds,
you could just say, you know,
tell me what the optimal prompt is,
and then ask yourself that question to give me the result.
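
That experiment is straightforward to sketch. Assuming a hypothetical listen() that returns the last 20 seconds of transcribed audio, alongside the chat() wrapper used in the earlier sketches:

```python
import time

# Sketch: every 20 seconds, ask the model for the optimal prompt given
# the running conversation, then ask the model that very prompt.
context = []
while True:
    context.append(listen())  # hypothetical: last 20s of transcribed audio
    meta = (
        "Given this conversation so far, what is the single most useful "
        "prompt I could ask you right now? Reply with only the prompt.\n\n"
        + "\n".join(context)
    )
    prompt = chat([{"role": "user", "content": meta}])
    print(chat([{"role": "user", "content": prompt}]))
    time.sleep(20)
```
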
And then, exactly to your point,
as you add, there will be these systems
are gonna have the ability to be learned and updated
essentially in real-time,
and so you'll be able to have a pendant
or your phone or watch or whatever.
It'll have a microphone on it.
It'll listen to your conversations.
It'll have a feed of everything else happening in the world,
and then it'll be, you know, sort of retraining,
prompting or retraining itself on the fly.
And so the scenario you described
is actually a completely doable scenario.
Now, the hard question on these is always,
okay, since that's possible, are people gonna want that?
Like, what's the form of experience?
You know, that we won't know until we try it,
but I don't think it's possible yet
to predict the form of AI in our lives.
Therefore, it's not possible to predict
the way in which it will intermediate
our experience with reality yet.
Yeah, but it feels like there's going to be a killer app.
There's probably a mad scramble right now.
It's out of OpenAI and Microsoft and Google and Meta
and then startups and smaller companies
figuring out what is the killer app,
because it feels like it's possible,
like a ChatGPT type of thing.
It's possible to build that,
but that's 10x more compelling
using already the LLMs we have,
using even the open-source LLMs,
Llama and the different variants.
So you're investing in a lot of companies
and you're paying attention.
Who do you think is gonna win this?
Do you think there'll be,
who's gonna be the next PageRank inventor?
Trillion-dollar question.
Another one.
We have a few of those today.
There's a bunch of those.
So look, sitting here today,
there's a really big question
about the big models versus the small models.
That's related directly to the big question
of proprietary versus open.
Then there's this big question of,
where's the training data gonna,
like are we topping out on the training data or not?
And then are we gonna be able to synthesize training data?
And then there's a huge pile of questions around regulation
and what's actually gonna be legal.
And so I would, when we think about it,
we dovetail kind of all those questions together.
You can paint a picture of the world
where there's two or three God models
that are just at like staggering scale
and they're just better at everything.
And they will be owned by a small set of companies
and they will basically achieve regulatory capture
over the government and they'll have competitive barriers
that will prevent other people from competing with them.
And so there will be,
just like there's like whatever three big banks
or three big, or by the way, three big search companies
or, I guess, you know, it'll centralize like that.
You can paint another very different picture that says,
no, actually the opposite of that's gonna happen.
This is gonna basically, that this is the new gold rush,
alchemy, like this is the big bang
for this whole new area of science and technology.
And so therefore you're gonna have
every smart 14-year-old on the planet
building open source and figuring out ways
to optimize these things.
And then we're just gonna get like overwhelmingly better
at generating training data.
We're gonna bring in like blockchain networks
to have like an economic incentive
to generate decentralized training data
and so forth and so on.
And then basically we're gonna live
in a world of open source
and there's gonna be a billion LLMs
of every size, scale, shape, and description.
There might be a few big ones
that are like the super genius ones,
but like mostly what we'll experience is open source.
And that's more like a world of like what we have today
with like Linux and the web.
Okay, but you painted these two worlds,
but there's also variations of those worlds
because you said regulatory capture. It's possible
to have these tech giants that don't achieve regulatory capture,
which is something you're also calling for, saying
it's okay to have big companies working on this stuff
as long as they don't achieve regulatory capture.
But I have the sense that there's just going
to be a new startup that's going to basically
be the PageRank inventor,
which will become the new tech giant.
I don't know, I would love to hear your kind of opinion
on whether Google, Meta, and Microsoft, as gigantic companies,
are able to pivot so hard to create new products.
Like some of it is just even hiring people
or having a corporate structure that allows
for the crazy young kids to come in
and just create something totally new.
Do you think it's possible
or do you think it'll come from a startup?
Yeah, this is always the big question,
which is you get this feeling,
I hear about this a lot from CEOs, founder CEOs,
where it's like, wow, we have 50,000 people.
It's now harder to do new things than it was
when we had 50 people.
Like what has happened?
So that's a recurring phenomenon.
By the way, that's one of the reasons
why there's always startups and why there's venture capital.
That's like a timeless kind of thing.
So that's one observation.
PageRank, we can talk about that,
but on PageRank, specifically on PageRank,
there actually is a PageRank already in the field,
and it's the transformer, right?
So the big breakthrough was the transformer.
And the transformer was invented in 2017 at Google.
And this is actually like really an interesting question
because it's like, okay, why does OpenAI even exist?
Like the transformer's invented at Google,
why didn't Google?
I asked a guy who was senior at Google Brain
kind of when this was happening,
and I said, if Google had just gone flat out to the wall
and just said, look, we're gonna launch the equivalent of GPT-4
as fast as we can, I said, when could we have had it?
And he said, 2019.
They could have just done a two-year sprint
with the transformer
because they already had the compute at scale,
they already had all the training data
and could have just done it.
There's a variety of reasons they didn't do it.
This is like a classic big company thing.
IBM invented the relational database in the 1970s,
let it sit on the shelf as a paper.
Larry Ellison picked it up and built Oracle.
Xerox PARC invented the interactive computer.
They let it sit on the shelf.
Steve Jobs came and turned it into the Macintosh.
Right, and so there is this pattern.
Now, having said that,
sitting here today, like Google's in the game, right?
So Google, maybe they let like a four-year gap go there
that they maybe shouldn't have,
but like they're in the game.
And so now they've got, now they're committed.
They've done this merger, they're bringing in Demis.
They've got this merger with DeepMind.
They're piling in resources.
There are rumors that they're building
an incredible super LLM,
way beyond what we even have today.
And they've got unlimited resources,
and they've been challenged on their honor.
Yeah, I had a chance to hang out with Sundar Pichai
a couple of days ago, and we took this walk,
and there's this giant new building
where there's going to be a lot of AI work being done,
and it's kind of this ominous feeling
of like the fight is on.
Yeah.
There's this beautiful Silicon Valley nature,
like birds are chirping, and this giant building,
and it's like the beast has been awakened.
And then like all the big companies are waking up to this.
They have the compute, but also the little guys have,
it feels like they have all the tools
to create the killer product.
That, and then there's all the tools to scale.
If you have a good idea, if you have the PageRank idea.
So there's several things that PageRank is.
There's PageRank, the algorithm, and the idea,
and there's like the implementation of it.
And I feel like killer product is not just the idea,
like the transformer, it's the implementation.
Something really compelling about it.
Like you just can't look away.
Something like the algorithm behind TikTok
versus TikTok itself, like the actual experience of TikTok
that just, you can't look away.
It feels like somebody's going to come up with that.
And it could be Google, but it feels like
it's just easier and faster to do for a startup.
Yeah, so the startup, the huge advantage
that startups have is they just, there's no sacred cows.
There's no historical legacy to protect.
There's no need to reconcile your new plan
with existing strategy.
There's no communication overhead.
There's no, you know, big companies are big companies.
They've got pre-meetings, planning for the meeting.
Then they have the post-meeting and the recap.
Then they have the presentation of the board.
Then they have the next round of meetings.
And in that elapsed time,
the startup launches its product, right?
So there's a timeless thing there, right?
Now, what the startups don't have is everything else, right?
So startups, they don't have a brand.
They don't have customer relationships.
They've got no distribution.
They've got no scale.
I mean, sitting here today, they can't even get GPUs, right?
Like there's like a GPU shortage.
Startups are literally stalled out right now
because they can't get chips, which is like super weird.
Yeah, they got the cloud.
Yeah, but the clouds run out of chips, right?
And then to the extent the clouds have chips,
they allocate them to the big customers,
not the small customers, right?
And so the small companies lack everything
other than the ability to just do something new, right?
And this is the timeless race and battle.
And this is kind of the point I tried to make in the essay,
which is like both sides of this are good.
Like it's really good to have like highly scaled
tech companies that can do things that are like
at staggering levels of sophistication.
It's really good to have startups
that can launch brand new ideas.
They ought to be able to both do that and compete.
Neither one ought to be subsidized
or protected from the others.
Like to me, that's just like very clearly
the idealized world.
It is the world we've been in for AI up until now.
And then of course there are people trying to shut that down.
But my hope is that, you know,
the best outcome clearly will be if that continues.
We'll talk about that a little bit,
but I'd love to linger on some of the ways
this is going to change the internet.
So I don't know if you remember,
but there's a thing called Mosaic
and there's a thing called Netscape Navigator.
So you were there in the beginning.
What about the interface to the internet?
How do you think the browser changes?
And who gets to own the browser?
We got to see some very interesting browsers,
Firefox, I mean, all the variants of Microsoft,
Internet Explorer, Edge, and now Chrome.
The actual, I mean, it seems like a dumb question to ask,
but do you think we'll still have the web browser?
So I have an eight-year-old and he's super into like
Minecraft and learning to code and doing all this stuff.
So I, of course, I was very proud.
I could bring sort of fire down from the mountain to my kid
and I brought him ChatGPT and I hooked him up
on his laptop.
And I was like, you know,
this is the thing that's going to answer all your questions.
And he's like, okay.
And I'm like, but it's going to answer all your questions.
And he's like, well, of course, like it's a computer.
Of course it answers all your questions.
Like what else would a computer be good for?
Dad.
And not impressed in the least.
Two weeks pass and he has some question and I say,
well, have you asked ChatGPT?
And he's like, dad, Bing is better.
And why is Bing better? Because it's built
into the browser.
Cause he's like, look, I have the Microsoft Edge browser
and it's got Bing right here.
And then he doesn't know this yet,
but one of the things you can do with Bing and Edge is
there's a setting where you can use it to basically talk
to any webpage, because it's sitting right there
next to the browser.
And by the way, it includes PDF documents.
And so, the way they've implemented it in Edge with Bing
is you can load a PDF and then you can ask it questions,
which is the thing you can't do currently in just ChatGPT.
So they're, you know, I think that's great.
They're going to push the melding
and see if there's a combination thing there.
Google's rolling out this thing, the magic button,
which is implemented in Google Docs.
Right, and so you go to, you know, Google Docs
and you create a new document, and, you know,
instead of, like, you know, starting to type,
you just, you know, press the button and it starts to,
like, generate content for you.
Right. Like, is that the way that it'll work?
Is it going to be a speech UI where you're just going to
have an earpiece and talk to it all day long?
You know, is it going to be, like, these are all,
like, this is exactly the kind of thing
I don't think is possible to forecast.
I think what we need to do is like run all those experiments.
And so one outcome is we come out of this
with like a super browser that has AI built in.
That's just like amazing.
Look, there's a real possibility that the whole,
I mean, look, there's a possibility here
that the whole idea of a screen and windows
and all this stuff just goes away.
Cause like, why do you need that?
If you just have a thing that's just telling you
whatever you need to know.
And also, so there's apps that you can use.
You don't really use them, you know,
being a Linux guy and a Windows guy.
There's one window, the browser that,
with which you can interact with the internet,
but on the phone, you can also have apps.
So I can interact with Twitter through the app
or through the web browser.
And that seems like an obvious distinction,
but why have the web browser in that case?
If one of the apps starts becoming the everything app.
Yeah, that's right.
Which is what Elon's trying to do with Twitter,
but there could be others that could be like a Bing app,
but there could be a Google app that just
doesn't really do search, but just like
do what I guess AOL did back in the day or something,
where it's all right there and it changes,
it changes the nature of the internet because
where the content is hosted, who owns the data,
who owns the content, what is the kind of content
you create, how do you make money by creating content
or the content creators, all of that.
Or it could just keep being the same,
which is like, just the nature of web pages changes
and the nature of content,
but there'll still be a web browser.
Cause a web browser is a pretty sexy product.
It just seems to work.
Cause it like, you have an interface,
a window into the world,
and then the world can be anything you want.
And as the world will evolve,
there could be different programming languages,
it can be animated, maybe it's three dimensional and so on.
Yeah, it's interesting.
Do you think we'll still have the web browser?
Every medium becomes the content for the next one.
So the AI will be able to give you a browser
whenever you want.
Oh, interesting.
Yeah, another way to think about it is maybe
what the browser is, maybe it's just the escape hatch,
which is maybe kind of what it is today.
Which is like, most of what you do
is like inside a social network or inside a search engine
or inside somebody's app or inside some controlled
experience, but then every once in a while,
there's something where you actually want to jailbreak.
You want to actually get free.
Web browser's the F-you to the man.
That's the free internet, back the way it was in the 90s.
So here's something I'm proud of,
that nobody really talks about,
which is the web, the browser, the web servers,
they're still backward compatible
all the way back to like 1992, right?
So like, you can still put up a,
the big breakthrough of the web early on,
the big breakthrough was it made it really easy to read,
but it also made it really easy to write.
It made it really easy to publish.
And we literally made it so easy to publish,
we made it not only so it was easy to publish content,
but it was actually also easy
to actually write a web server, right?
And you can literally write a web server
in four lines of real code,
and you could start publishing content on it,
and you could set whatever rules you want for the content,
whatever censorship, no censorship, whatever you want,
you could just do that.
And as long as you had an IP address, right,
you could do that.
That still works, right?
Like that still works exactly as I just described.
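
The few-lines claim still holds up; for instance, a complete working web server out of Python's standard library (serving the files in the current directory) is just:

```python
# A complete, working web server in a couple of lines of real code.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```
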
So this is part of my reaction to all of this,
like all this just censorship pressure and all this,
these issues around control and all this stuff,
which is like, maybe we need to get back
a little bit more to the wild west.
Like the wild west is still out there.
Now they will try to chase you down.
Like they'll try to, you know,
people who want to censor will try to take away
your domain name and they'll try to take away
your payments account and so forth
if they really don't like what you're saying.
But nevertheless, unless they literally
are intercepting you at the ISP level,
like you can still put up a thing.
And so I don't know, I think that's important to preserve,
right, because, I mean, one is just a freedom argument,
but the other is a creativity argument,
which is you want to have the escape hatch
so that the kid with the idea is able to realize the idea,
because to your point on PageRank,
you actually don't know what the next big idea is.
Nobody called Larry Page and told him to develop PageRank;
he came up with that on his own.
And you want to always, I think,
leave the escape hatch for the next kid
or the next Stanford grad student
to have the breakthrough idea
and be able to get it up and running
before anybody notices.
You and I are both fans of history, so let's step back.
We've been talking about the future.
Let's step back for a bit and look at the 90s.
You created Mosaic web browser,
the first widely used web browser.
Tell the story of that.
And how did it evolve into Netscape Navigator?
This is the early days.
So full story, so.
You were born.
I was born, a small child.
Well, actually, yeah, let's go there.
When did you first fall in love with computers?
Oh, so I hit the generational jackpot,
and I hit the Gen X kind of point perfectly,
as it turns out, so I was born in 1971.
So there's this great website called wtfhappenedin1971.com,
which is basically 1971.
That's when everything started to go to hell.
And I was, of course, born in 1971,
so I like to think that I had something to do with that.
Did you make it on the website?
I don't think I made it on the website, but I, you know.
Somebody needs to add.
This is where everything went wrong.
Maybe I contributed to some of the trends they show.
Every line on that website goes like that, right?
So it's all a picture of disaster.
But there was this moment in time where,
because sort of the Apple II hit in like 1978,
and then the IBM PC hit in 82.
So I was like 11 when the PC came out.
And so I just kind of hit that perfectly.
And then that was the first moment in time
when regular people could spend a few hundred dollars
and get a computer, right?
And so that resonated right out of the gate.
And then the other part of the story is,
I was using an Apple II, I used a bunch of them,
but I was using Apple II.
And of course, it said on the back of every Apple II
and every Mac, it said, designed in Cupertino, California.
And I was like, wow,
Cupertino must be the shining city on the hill,
like Wizard of Oz, like the most amazing city of all time.
I can't wait to see it.
Of course, years later, I came out to Silicon Valley
and went to Cupertino,
and it's just a bunch of office parks
and low-rise apartment buildings.
So the aesthetics were a little disappointing,
but it was the vector, right,
of the creation of a lot of this stuff.
So then basically, so part of my story
is just the luck of having been born at the right time
and getting exposed to PCs then.
The other part is when Al Gore says
that he created the internet, he actually is correct
in a really meaningful way,
which is he sponsored a bill in 1985
that essentially created the modern internet,
created what is called the NSFNet at the time,
which is sort of the first really fast internet backbone.
And that bill dumped a ton of money
into a bunch of research universities
to build out basically the internet backbone
and then the supercomputer centers
that were clustered around the internet.
And one of those universities was the University of Illinois,
which is where I went to school.
And so the other stroke of luck that I had
was I went to Illinois basically, right,
as that money was just like getting dumped on campus.
And so as a consequence, on campus,
and this was like, you know, '89, '90, '91,
we were right on the internet backbone,
and we had a T3, 45-megabit backbone connection,
which at the time was, you know, wildly state-of-the-art.
We had Cray supercomputers.
We had Thinking Machines parallel supercomputers.
We had Silicon Graphics workstations.
We had Macintoshes.
We had NeXT cubes all over the place.
We had like every possible kind of computer
you could imagine,
because all this money just fell out of the sky.
So you were living in the future.
Yeah, so quite literally it was, yeah,
it's all there.
It's all like we had full broadband graphics,
like the whole thing.
And it's actually funny, because this is the first time
it sort of tickled the back of my head
that there might be a big opportunity in here,
which is, you know, they embraced it.
And so they put like computers in all the dorms
and they wired up all the dorm rooms
and they had all these labs everywhere and everything.
And then they gave every undergrad a computer account
and an email address.
And the assumption was that you would use the internet
for your four years at college,
and then you would graduate and stop using it.
And that was that, right?
And you would just retire your email address.
It wouldn't be relevant anymore
because you'd go off in the workplace
and they don't use email.
You'd be back to using fax machines or whatever.
Did you have that sense as well?
Like what, you said the back of your head was tickled.
Like what was your,
what was exciting to you about this possible world?
Well, if this is so useful in this contained environment
that just has this weird source of outside funding,
then if it were practical and cost-effective
for everybody else to have this,
wouldn't they want it?
And overwhelmingly the prevailing view at the time was,
no, they would not want it.
This is esoteric weird nerd stuff, right?
That like computer science kids like,
but like normal people are never going to do email, right?
Or be on the internet, right?
And so I was just like, wow, like this,
this is actually like, this is really compelling stuff.
Now the other part was it was all really hard to use.
In practice, you basically had to be a CS undergrad
or the equivalent to actually get full use
of the internet at that point,
because it was all pretty esoteric stuff.
So then that was the other part of the idea,
which was okay, we need to actually make this easy to use.
So what's involved in creating Mosaic,
in creating a graphical interface to the internet?
Yeah, so it was a combination of things.
So it was like, basically the web existed in an early,
sort of what you'd describe as prototype form.
And by the way, text only at that point.
What did it look like?
What was the web, I mean, and the key figures,
like what was it like?
Paint a picture.
It looked like ChatGPT actually, but it was all text.
Yeah.
And so you had a text-based web browser.
Well, actually the original browser, Tim Berners-Lee,
the original browser, both the original browser
and the server actually ran on NeXT cubes.
So this was the computer Steve Jobs made
during the interim period when he,
during the decade long interim period
when he was not at Apple.
He got fired in 85 and then came back in 97.
So this was in that interim period
where he had this company called NeXT
and they made these,
literally these computers called cubes.
They were beautiful,
these 12-inch by 12-inch by 12-inch cube computers.
And there's a famous story about how they could have cost
half as much if it had been 12 by 12 by 13,
but Steve was like, no, it has to be a cube.
So they were like $6,000, basically academic workstations.
They had the first CD-ROM drives, which were slow.
I mean, the computers were all but unusable.
They were so slow, but they were beautiful.
Okay, can we actually just take a tiny tangent there?
Sure, of course.
The 12 by 12 by 12, it just so beautifully
encapsulates Steve Jobs' idea of design.
Can you just comment on what you find interesting
about Steve Jobs?
About that view of the world,
that dogmatic pursuit of perfection
and how he saw perfection in design?
Yes, I guess I'd say like, look, he was a deep believer,
the way I interpret it.
I don't know if he ever really described it like this,
but the way I'd interpret it is,
it's actually a thing in philosophy:
aesthetics are not just appearances.
Aesthetics go all the way to deep underlying meaning, right?
It's like, I'm not a physicist.
One of the things I've heard physicists say
is one of the things you start to get a sense
of when a theory might be correct
is when it's beautiful, right?
Like, you know, right?
And so there's something,
and you feel the same thing, by the way,
in like human psychology, right?
You know, when you're experiencing awe, right?
You know, there's a simplicity to it.
When you're having an honest interaction with somebody,
there's an aesthetic, I would say a calm comes over you,
because you're actually being fully honest
and not trying to hide yourself, right?
So it's like this very deep sense of aesthetics.
And he would trust that judgment that he had deep down.
Like even if the engineering teams are saying
this is too difficult,
even if the finance folks are saying
this is ridiculous, the supply chain,
all that kind of stuff makes this impossible,
we can't do this kind of material,
this has never been done before, and so on and so forth.
He just sticks by it.
Well, I mean, who makes a phone out of aluminum, right?
Like nobody else would have done that.
And now of course, if your phone wasn't made out of aluminum,
how crude, what kind of caveman would you have to be
to have a phone that's made out of plastic, right?
So it's just this very deep thing, right?
And you know, look, there's a thousand different ways
to look at this, but one of the things is just like, look,
these things are central to your life.
Like you're with your phone
more than you're with anything else.
Like it's in your, it's going to be in your hand.
I mean, you know this, he thought very deeply
about what it meant for something
to be in your hand all day long.
Well, for example, here's an interesting design thing.
My understanding is he never wanted an iPhone
to have a screen larger
than you could reach with your thumb one-handed.
And so he was actually opposed to the idea
of making the phones larger.
And I don't know if you have this experience today,
but let's say there are certain moments in your day
when you might be like, only have one hand available
and you might want to be on your phone
and you're trying to like send a text
and your thumb can't reach the send button.
Yeah, I mean, there's pros and cons, right?
And then there's like folding phones,
which I would love to know what he thought
and thinks about them.
But I mean, is there something you could also just linger on
because he's one of the interesting figures
in the history of technology.
What makes him as successful as he was?
What makes him as interesting as he was?
What made him so productive and important
in the development of technology?
He had an integrated worldview.
So the properly designed device
that had the correct functionality,
that had the deepest understanding of the user,
that was the most beautiful, right?
It had to be all of those things, right?
He basically would drive to as close to perfect
as he could possibly get, right?
And I suspect that he never quite thought he ever got there
because most great creators are generally dissatisfied.
You read accounts later on
and all they can see are the flaws in their creation,
but he got as close to perfect each step of the way
as he could possibly get
with the constraints of the technology of his time.
And then, look, he was sort of famous in the Apple model.
It's like, look, this headset that they just came out with,
it's like a decade-long project, right?
And they're just gonna sit there and tune and tune
and polish and polish and tune and polish and tune and polish
until it is as perfect
as anybody could possibly make anything.
And then this goes to the way
that people describe working with him,
which is there was a terrifying aspect of working with him,
which is he was very tough.
But there was this thing that everybody I've ever talked to
who worked for him says, they all say the following,
which is we did the best work of our lives
when we worked for him
because he set the bar incredibly high
and then he supported us with everything that he could
to let us actually do work of that quality.
So a lot of people who were at Apple
spend the rest of their lives
trying to find another experience
where they feel like they're able
to hit that quality bar again.
Even if it, in retrospect or during it, felt like suffering?
Yeah, exactly.
What does that teach you about the human condition, huh?
So, look, exactly.
And it's not just Silicon Valley; I mean, look,
there's George Patton in the army.
There are many examples in other fields that are like this.
Specifically in tech, it's actually,
I find it very interesting, there's the Apple way,
which is polish, polish, polish,
and don't ship until it's as perfect as you can make it.
And then there's the sort of the other approach,
which is the sort of incremental hacker mentality,
which basically says, ship early and often and iterate.
And one of the things I find really interesting
is I'm now 30 years into this,
like there are very successful companies
on both sides of that approach, right?
Like that is a fundamental difference, right,
in how to operate and how to build and how to create.
You have world-class companies operating in both ways.
And I don't think the question of like,
which is the superior model is anywhere close
to being answered.
Like, and my suspicion is the answer is do both.
The answer is you actually want both.
They lead to different outcomes.
Software tends to do better with the iterative approach.
Hardware tends to do better with the, you know,
sort of wait and make it perfect approach.
But again, you can find examples in both directions.
So the jury's still out on that one.
So back to Mosaic.
So what, it was text-based, Tim Berners-Lee.
Well, there was the web, which was text-based,
but there were no, I mean, there was like three websites.
There was like no content, there were no users.
Like it wasn't like a, it wasn't like a catalytic.
It hadn't, by the way, it was all,
because it was all text, there were no documents,
there were no images, there were no videos,
there were no, right?
So it was, and then in the beginning,
if you had to be on a next cube,
but you need to have a next cube
both to publish and to consume.
So there were-
It was 6,000 bucks, you said?
There were limitations, yeah, a $6,000 computer.
They did not sell very many.
But then there was also FTP
and there was Usenet, right?
And there were, you know, a dozen others.
There was WAIS, which was an early search thing.
There was Gopher, which was an early menu-based
information retrieval system.
There were like a dozen different sort of scattered ways
that people would get to information on the internet.
And so the Mosaic idea was basically
bring those all together, make the whole thing graphical,
make it easy to use, make it basically bulletproof
so that anybody can do it.
And then again, just on the luck side,
it so happened that this was right at the moment
when graphics, when the GUI sort of actually took off.
And we're now so used to the GUI
that we think it's been around forever,
but it didn't really take hold at first, you know,
the Macintosh came out in '84,
but they actually didn't sell very many Macs in the 80s.
It was not that successful of a product.
It really was, you needed Windows 3.0 on PCs.
And that hit in about 92.
And we did Mosaic in '92 and '93.
So it was right at the moment
when you could imagine actually having
a graphical user interface at all,
much less one to the internet.
How well did Windows 3.0 sell?
So was that the really big-
That was the big bang.
The big graphical operating system.
Well, this is the classic story, okay.
Apple was running on the polish it
until it's perfect model.
Microsoft famously ran on the other model,
which is ship and iterate.
And so the old line in those days was Microsoft
writes version three of every Microsoft product.
That's the good one, right?
You can find online
Windows 1, Windows 2, nobody used them.
Actually, in the original Microsoft Windows,
the windows were not overlapping.
And you had these very small,
very low-resolution screens.
It just didn't work.
It wasn't ready yet.
And Windows 95, I think was a pretty big leap also.
That was a big leap too.
So that was like bang, bang.
And then, you know, in the fullness of time,
Steve came back, and the Mac started to take off again.
That was the third bang.
And then the iPhone was the fourth bang.
Such exciting time.
And then we were off to the races.
Because nobody could have known
what would be created from that.
Well, Windows 3.1 or 3.0,
Windows 3.0 to the iPhone was only 15 years, right?
Like it, that ramp was, in retrospect,
at the time it felt like it took forever,
but in historical terms,
like that was a very fast ramp
from even a graphical computer at all on your desk
to the iPhone, that was 15 years.
Did you have a sense of what the internet would become
as you were looking through the window of Mosaic?
Like, there were just a few web pages at the time.
So the thing I had early on was,
there are disputes over what was the first blog,
but I was keeping at the time one of them
that is at least a runner-up in the competition.
And it was what was called the What's New page.
And it was literally hardwired,
I had a distribution advantage, an unfair advantage:
I wired it right into the browser.
And then I put my resume in the browser,
which also was hilarious.
But I was keeping the...
not many people get to do that.
So the...
Good call.
Man, early days.
It's so interesting.
I'm looking at the About page,
oh, Marc is looking for a job.
So the what's new page,
I would literally get up every morning
and I would, or every afternoon.
And I would basically,
if you wanted to launch a website,
you would email me and I would list it
on the what's new page.
And that was how people discovered the new websites
as they were coming out.
And I remember,
it literally went from
like one every couple of days,
to like one every day,
to like two every day.
So you're doing it,
so that blog was kind of doing the directory thing.
So like, what was the homepage?
The homepage was just basically trying to explain
even what this thing is that you're looking at, right?
Basically basic instructions.
But then there was a button that said What's New.
And what most people did, for obvious reasons,
was go to What's New.
But it was so mind-blowing at that point,
just the basic idea.
This was basically the internet,
but people could see it for the first time.
The basic idea was, look,
it's like literally,
an Indian restaurant in Bristol, England
has put their menu on the web.
And people were like, wow.
Cause like that's the first restaurant menu on the web.
And I don't have to be in Bristol.
And I don't know if I'm ever going to go to Bristol
and I don't even like Indian food.
And like, wow.
Right.
And it was like that.
The first streaming video thing
was another England thing,
Cambridge or something.
Some guy put his coffee pot up
as the first streaming video thing.
And he put it on the web because literally
it was the coffee pot down the hall,
and he wanted to see when he needed to go refill it.
But there was, you know,
there was a point when there were thousands of people
watching that coffee pot,
because it was the first thing you could watch.
But weren't you able to kind of infer,
you know, if that Indian restaurant could go online,
then you're like, they all will, they all will.
Yeah, exactly.
So you felt that.
Yeah, yeah, yeah.
Now, you know, look, it's still a stretch, right?
It's still a stretch cause it's just like, okay.
Is it, you know, you're still in this zone,
which is like, okay, is this a nerd thing?
Is this a real person thing?
Yeah.
By the way, we, you know,
there was a wall of skepticism from the media.
Like they just, like everybody was just like,
yeah, this is the crazy, this is just like dumb.
This is not, you know,
this is not for regular people at that time.
And so you had to think through that and then look,
it was still, it was still hard to get on the internet
at that point, right?
So you could get kind of this weird bastardized version
if you were on AOL, which wasn't really real,
or you had to go like learn what an ISP was.
You know, in those days,
PCs actually didn't come with TCP/IP drivers pre-installed.
So you had to learn what a TCP/IP driver was.
You had to buy a modem.
You had to install driver software.
I have a comedy routine I do,
something like 20 minutes long,
describing all the steps required to actually get
on the internet at that point.
And so you had to look through these practical problems.
And then there was speed, performance,
14.4 modems, right?
It was like watching, you know, glue dry.
And so there were basically a sequence of bets that we made
where you needed to look through
the current state of affairs and say, actually,
once people figure this out,
there's going to be so much demand for this
that all of these practical problems
are going to get fixed.
Some people say that the anticipation
makes the destination that much more exciting.
Do you remember progressive JPEGs?
Yeah, do I, do I.
So for kids in the audience, right?
For kids in the audience.
You used to have to watch an image load
like a line at a time,
but it turns out there was this thing with JPEGs
where you could load like every fourth line
and then sweep back through again.
And so you could render a fuzzy version of the image up front,
and then it would resolve into the detailed one.
And that was like a big UI breakthrough
because it gave you something to watch.
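As a sketch of the scheme described here, assuming simple line-based interlacing (progressive JPEG proper refines frequency coefficients rather than raw scan lines, but the perceptual effect is what's described):

# Emit scan lines in interlaced order: every fourth line first,
# then sweep back through to fill the gaps, coarse to fine.
def interlaced_order(height):
    for offset in (0, 2, 1, 3):  # pass offsets, fuzziest pass first
        for line in range(offset, height, 4):
            yield line

print(list(interlaced_order(8)))  # [0, 4, 2, 6, 1, 5, 3, 7]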
Yeah, and you know,
there's applications in various domains for that.
Well, it was a big fight.
There was a big fight early on
about whether there should be images on the web.
For that reason, for like sexualization?
No, not explicitly.
That did come up, but it wasn't even that.
It was more, the argument went,
the purists basically said
all the serious information in the world is text.
If you introduce images,
you're basically going to bring in all the trivial stuff.
You're going to bring in magazines and, you know,
all this crazy stuff,
it's going to distract from it.
It's going to take away from being serious,
make the whole thing frivolous.
Well, were there any doomer-type arguments
about the internet destroying all of human civilization
or destroying some fundamental fabric of human civilization?
Yeah, so in those days
it was all around crime and terrorism.
So those arguments happened,
but there was no sense yet of the internet
having like an effect on politics,
because that was way too far off.
But there was an enormous panic
at the time around cyber crime.
There was enormous panic
that your credit card number would get stolen
and your life savings would be drained.
And then, you know, there was the fear criminals were going to use it.
So when we started, one of the things we did:
the Netscape browser was the first widely used
piece of consumer software
that had strong encryption built in.
It made it available to ordinary people.
And at that time, strong encryption was actually illegal
to export out of the US.
So we could field that product in the US.
We could not export it
because it was classified as ammunition.
So the Netscape browser was on a restricted list
along with the Tomahawk missile
as being something that could not be exported.
So we had to make a second version
with deliberately weak encryption to sell overseas
with a big logo on the box saying, do not trust this,
which it turns out makes it hard to sell software
when it's got a big logo that says don't trust it.
And then we had to spend five years
fighting the US government to get them to basically stop
trying to do this.
Because the fear was terrorists were going to use encryption
to plot all these things.
And then we responded with, well, actually,
we need encryption to be able to secure systems
so that the terrorists and the criminals
can't get into them.
So anyway, that was the 1990s fight.
So can you say something about some of the details
of the software engineering challenges
required to build these browsers?
I mean, the engineering challenges of creating a product
that hasn't really existed before
that can have such almost like limitless impact
on the world or the internet.
So there was a really key bet that we made at the time,
which is very controversial,
which was core to how it was engineered,
which was are we optimizing for performance
or for ease of creation?
And in those days, the pressure was very intense
to optimize for performance
because the network connections were so slow.
And also the computers were so slow.
And so, I mentioned the progressive JPEGs,
there's an alternate world
in which we optimized for performance,
and you'd have had a much more pleasant experience
right up front.
But what we got by not doing that
was we got ease of creation.
And the way that we got ease of creation
was all of the protocols and formats were in text,
not in binary.
And so HTTP is in text.
This was an internet tradition, by the way,
that we picked up and continued.
HTTP is text, and HTML is text,
and then everything else that followed is text.
As a result, and by the way,
you can imagine purist engineers saying this is insane.
You have very limited bandwidth.
Why are you wasting any time sending text?
You should be encoding this stuff into binary,
and it'll be much faster.
And of course, the answer is that's correct.
But what you get when you make it text is,
all of a sudden, the big breakthrough: the View Source function.
The fact that you could look at a webpage,
hit View Source,
and see the HTML,
that was how people learned how to make webpages.
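And the text-ness of HTTP is still directly visible today; a minimal sketch, assuming example.com is reachable from your machine:

# HTTP is plain text: write a request by hand, read the reply as text.
import socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(4096).decode("latin-1"))  # headers and HTML, all human-readable
sock.close()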
It's so interesting,
because the stuff we take for granted now
is, man, that was fundamental to the development of the web,
to be able to have HTML just right there.
All the ghetto mess that is HTML,
all the sort of almost biological messiness of HTML,
and then having the browser try to interpret that mess
to show something reasonable.
Well, and then there was this internet principle
that we inherited, what was it,
emit conservatively, interpret liberally.
So the design principle was,
if you're creating a web editor that's going to emit HTML,
do it as cleanly as you can.
But you actually want the browser to interpret liberally,
which is you actually want users
to be able to make all kinds of mistakes,
and for it to still work.
And so the browser rendering engines to this day
have all of this spaghetti code, crazy stuff,
where they can, they're resilient
to all kinds of crazy HTML mistakes.
And so, and literally what I always had in my head
is like there's an eight-year-old
or an 11-year-old somewhere,
and they're doing a view source,
they're doing a cut and paste,
and they're trying to make a webpage
for their turtle or whatever.
And like they leave out a slash,
and they leave out an angle bracket,
and they do this and they do that, and it still works.
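That interpret-liberally behavior survives in every HTML parser today; a minimal sketch with Python's standard-library parser, fed the kind of broken markup just described (the turtle page here is hypothetical):

# "Interpret liberally": HTML with unclosed tags still parses fine.
from html.parser import HTMLParser

class ShowText(HTMLParser):
    def handle_data(self, data):
        if data.strip():
            print(data.strip())

# Missing </h1>, </p>, </b>, </body>, </html>: no error is raised.
ShowText().feed("<html><body><h1>My Turtle<p>He is named Fred<b>the Great")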
It's also, I don't often think about this,
but programming, C, C++, all those languages,
the compiled languages, the interpreted languages,
Python, Perl, all of that,
the brackets have to all be correct.
Yes.
Like everything has to be perfect.
Brutal.
And then- Autistic.
You forget, all right.
It's systematic and rigorous, let's go there.
But you forget that the web, with JavaScript eventually,
and HTML, is allowed to be messy,
for the first time messy
in the way biological systems can be messy.
It's like the only thing computers were allowed
to be messy on for the first time.
It used to offend me.
So I grew up on Unix, I worked on Unix.
I was a Unix native for all the way through this period.
And it used to drive me bananas
when it would do the segmentation fault
and the core dump file.
It's like literally
there's an error in the code,
the math is off by one, and it core dumps.
And I'm in the core dump trying to analyze it
and trying to reconstruct.
And I'm just like, this is ridiculous.
Like the computer ought to be smart enough
to be able to know that if it's off by one, okay, fine.
And it keeps running.
And I would go ask all the experts,
like, why can't it just keep running?
And they'd explain to me, well,
because all the downstream repercussions and blah, blah.
And I'm like, there's still like, you know,
we're forcing the human creator to live, to your point,
in this hyper literal world of perfection.
And I was just like, that's just bad.
And by the way, you know, what happens with that,
of course, just what happened with coding at that point,
which is you get a high priesthood.
There's a small number of people
who are really good at doing exactly that.
Most people can't, and most people are excluded from it.
And so actually that was where I picked up that idea:
no, no, you want these things
to be resilient to errors of all kinds.
And this would drive the purists absolutely crazy.
Like I got attacked on this like a lot,
because yeah, I mean, like every time I, you know,
all the purists who were like into all this
like markup language stuff and formats and codes
and all this stuff, they would be like, you know,
you can't, you're encouraging bad behavior because.
Also, they wanted the browser to give you a segfault error
any time there was a...
Yeah, yeah, they wanted it to complain, right?
They wanted, yeah, that was a very,
and any properly trained and credentialed engineer
would be like, that's not how you build these systems.
That's such a bold move to say, no, it doesn't have to be.
Yeah, now, like I said, the good news for me
is the internet kind of had that tradition already,
but having said that, like we pushed it,
we pushed it way out.
But the other thing we did going back
to the performance thing was we gave up a lot of performance.
That initial experience for the first few years
was pretty painful, but the bet there
was actually an economic bet,
which was basically the demand for the web
would basically mean that there would be a surge
in supply of broadband.
Because the question was, okay,
how do you get the phone companies,
which are not famous in those days for doing new things
at huge cost for like speculative reasons,
like how do you get them to build up broadband,
you know, spend billions of dollars doing that.
And, you know, you could go meet with them
and try to talk them into it,
or you could just have a thing where it's just very clear
that it's going to be, that people love,
it's going to be better if it's faster.
And so that, there was a period there,
and this was fraught with some peril,
but there was a period there where
we knew the experience was suboptimal
because we were trying to force the emergence of demand
for broadband, which is in fact what happened.
So you had to figure out how to display this text,
HTML text, the blue links and the purple links,
and there were no standards.
Were there standards at that time?
Yeah, there really still aren't.
Well, there's like, there's implied standards, right?
And there's all these kinds of new features
that are being added with like CSS,
but like what kind of stuff a browser
should be able to support,
features within languages, within JavaScript and so on.
But you're setting standards on the fly yourself.
Well, to this day, if you create a webpage
that has no CSS style sheet,
the browser will render it however it wants to, right?
So this was one of the things, there was this idea
at the time in how these systems were built,
which is separation of content from format,
or separation of content from appearance.
And that's still, people don't really use that anymore
because everybody wants to determine how things look,
and so they use CSS, but it's still in there
that you can just let the browser do all the work.
I still like the, like really basic websites,
but that could be just old school.
Kids these days with their fancy responsive websites
that don't actually have much content,
but have a lot of visual elements.
Well, that's one of the things that's fun about ChatGPT,
it's like-
Back to the basics.
It's back to just text, right?
And there is this pattern in human creativity and media
where you end up back at text,
and I think there's something powerful in there.
Is there some other stuff you remember,
like the purple links,
interesting design decisions
that came up, that we have today
or don't have today, that were temporary?
So I made the background gray.
I hated reading text on white backgrounds,
and so I made the background gray.
Do you regret this?
No, no, no, that decision I think has been reversed,
but now I'm happy though,
because now dark mode is the thing, so.
So it wasn't about gray.
It was just, you didn't want a white background.
It strained my eyes.
It strained your eyes.
Interesting.
And then there's a bunch of other decisions.
I'm sure there's an interesting history
of the development of HTML and CSS
and how those interface with JavaScript,
and there's this whole Java applet thing.
Well, the big one was probably JavaScript.
CSS came after me, so that was not me,
but JavaScript maybe was the biggest of the whole thing.
That was us, and that was basically a bet.
It was a bet on two things.
One is that the world wanted
a new front-end scripting language,
and then the other was, I thought at the time
the world wanted a new back-end scripting language.
So JavaScript was designed from the beginning
to be both front-end and back-end,
and then it failed as a back-end scripting language,
and Java won for a long time,
and then Python, Perl, and other things, PHP, and Ruby,
but now JavaScript is back, and so.
I wonder if everything in the end will run on JavaScript.
It seems like it is the,
and by the way, let me give a shout-out to Brendan Eich,
who was basically the one-man inventor of JavaScript.
If you're interested to learn more about Brendan Eich,
he's been on this podcast previously.
Exactly.
So he wrote JavaScript over a summer,
and I think it is fair to say now
that it's the most widely used language in the world,
and it seems to only be gaining in its range of adoption.
In the software world, there's quite a few stories
of somebody over a weekend or over a week or over a summer
writing some of the most impactful,
revolutionary pieces of software ever.
Well, look.
That should be inspiring, yes?
Very inspiring.
I'll give you another one, SSL.
So SSL was the security protocol that was us,
and that was a crazy idea at the time,
which was let's take all the native protocols
and let's wrap them in a security wrapper.
That was a guy named Kip Hickman
who wrote that over a summer, one guy.
And then look, today, sitting here today,
like the Transformer at Google
was a small handful of people,
and then the number of people
who did the core work on GPT, it's not that many people.
It's a pretty small handful of people.
And so, yeah, the pattern in software repeatedly
over a very long time has been it's a...
Jeff Bezos always had the two-pizza rule
for teams at Amazon, which is any team
needs to be able to be fed with two pizzas;
if you need a third pizza, you have too many people.
And I think it's actually the one-pizza rule.
For the really creative work,
I think it's two people, three people.
You see that with certain open-source projects.
So much is done by one or two people.
It's so incredible.
And that's why, that gives me so much hope
about the open-source movement in this new age of AI.
I just recently had a conversation
with Mark Zuckerberg, of all people,
who's all in on open-source,
which is so interesting and so inspiring to see.
Because releasing these models, it is scary.
It is potentially very dangerous, and we'll talk about that.
But it's also, if you believe in the goodness
of most people and in the skillset of most people
and the desire to do good in the world,
that's really exciting.
Because it's not putting these models
into the centralized control of big corporations,
the government and so on.
It's putting it in the hands of a teenage kid
with a dream in his eyes.
I don't know.
That's beautiful.
And look, this stuff, AI ought to make the individual coder
obviously far more productive, by 1,000x or something.
And so you ought to think about
not just the future of open-source AI,
but the future of open-source everything.
We ought to have a world now of super coders
who are building things as open-source
with one or two people that were inconceivable
five years ago.
The level of hyper productivity
we're going to get out of our best and brightest,
I think is going to go way up.
It's going to be interesting.
We'll talk about it,
but let's just linger a little bit on Netscape.
Netscape was acquired in 1999 for 4.3 billion by AOL.
What was that like?
What were some memorable aspects of that?
Well, that was the height of the dot-com boom, bubble, bust.
I mean, that was the frenzy.
If you watch Succession,
that was like what they did in the fourth season
with Gojo and the merger with their...
So it was like the height of one of those kind of dynamics.
And so-
Would you recommend Succession, by the way?
I'm more of a Yellowstone guy.
Yellowstone's very American.
I'm very proud of you.
That is.
I just talked to Matthew McConaughey
and I'm full on Texan at this point.
Good, I heartily approve.
And he will be doing the sequel to Yellowstone.
Yeah, which is exciting.
Very exciting.
Anyway, so that's a rude interruption by me
by way of Succession.
So that was at the height of the-
Deal making and money
and just the fur flying and like craziness.
And so, yeah, it was just one of those.
It was just like, I mean,
this is the entire Netscape thing
from start to finish was four years,
which was like, for one of these companies,
it's just like incredibly fast.
We went public 18 months after we were founded,
which virtually never happens.
So it was just this incredibly fast kind of meteor
streaking across the sky.
And then of course it was this,
and then there was just this explosion, right,
because then it was almost immediately followed
by the dot-com crash.
It was then followed by AOL buying Time Warner,
which again is the Succession guys kind of play with that,
which turned out to be a disastrous deal,
one of the famous kind of disasters in business history.
And then what became an internet depression
on the other side of that.
But then in that depression in the 2000s
was the beginning of broadband and smartphones
and Web 2.0, right, and then social media and search
and SaaS and everything that came out of that.
What did you learn from the acquisition?
I mean, this is so much money.
What's interesting, because it must've been very new to you,
that with the software stuff, you can make so much money.
There's so much money swimming around.
I mean, I'm sure the ideas of investment
were starting to get born there.
Yes, so let me lay it out.
So here's the thing, I don't know if I figured it out then,
but figured it out later, which is,
software is a technology that,
it's like the concept of the philosopher's stone.
The philosopher's stone in alchemy transmutes
lead into gold, and Newton spent 20 years
trying to find the philosopher's stone,
never got it there, nobody's ever figured it out.
Software is our modern philosopher's stone,
and in economic terms, it transmutes labor into capital,
which is like a super interesting thing.
And by the way, like Karl Marx is rolling over
in his grave right now, because of course
that's a complete refutation of his entire theory.
Transmutes labor into capital, which is as follows,
is somebody sits down at a keyboard
and types a bunch of stuff in,
and a capital asset comes out the other side,
and then somebody buys that capital asset
for a billion dollars, like, that's amazing, right?
It's literally creating value right out of thin air,
out of purely human thought, right?
And so that's, there are many things
that make software magical and special,
but that's the economic side.
I wonder what Marx would have thought about that.
Oh, he would have completely broke his brain,
because of course the whole thing was,
that kind of technology is inconceivable when he was alive,
it was all industrial era stuff,
and so any kind of machinery necessarily involves
huge amounts of capital, and then labor was
on the receiving end of the abuse.
But like a software engineer is somebody
who basically transmutes his own labor
into an actual capital asset, creates permanent value.
Well, in fact, it's actually very inspiring.
That's actually more true today than before.
So when I was doing software, the assumption was
all new software basically has a sort of a parabolic
sort of life cycle, right?
So you ship the thing, people buy it,
at some point everybody who wants it has bought it,
and then it becomes obsolete, and it's like bananas.
Nobody buys old software.
These days, Minecraft, Mathematica, Facebook, Google,
you have the software assets that have been around
for 30 years that are gaining in value every year, right?
And they're just there, World of Warcraft, right,
Salesforce.com, every single year,
they're being polished and polished and polished,
they're getting better and better,
more powerful, more valuable.
So we've entered this era where you can actually
have these things that actually build out over decades,
which, by the way, is what's happening right now
with like GPT.
And so now, and this is why there is always
sort of a constant investment frenzy around software
is because, look, when you start one of these things,
it doesn't always succeed, but when it does now,
you might be building an asset that builds value
for four or five, six decades to come,
if you have a team of people who have the level
of devotion required to keep making it better.
And then the fact that, of course, everybody's online,
there's five billion people that are a click away
from any new piece of software, so the potential
market size for any of these things is nearly infinite.
It must have been surreal back then, though.
Yeah, back then, this was all brand new.
Had you rolled out that theory in even 1999,
people would have thought you were smoking crack,
so that's emerged over time.
Well, let's now turn back into the future.
You wrote the essay, Why AI Will Save the World.
Let's start at the very high level.
What's the main thesis of the essay?
Yeah, so the main thesis of the essay is that
what we're dealing with here is intelligence,
and it's really important to talk about
the varying nature of what intelligence is.
And fortunately, we have a predecessor
to machine intelligence, which is human intelligence,
and we've got observations and theories
over thousands of years for what intelligence is
in the hands of humans, right?
I mean, what it literally is is the way to capture,
process, analyze, synthesize information, solve problems,
but the observation of intelligence in human hands
is that intelligence quite literally
makes everything better, and what I mean by that is
every kind of outcome of human quality of life,
whether it's education outcomes,
or success of your children, or career success,
or health, or lifetime satisfaction,
by the way, propensity to peacefulness
as opposed to violence,
propensity for open-mindedness versus bigotry,
those are all associated with higher levels of intelligence.
Smarter people have better outcomes than almost,
as you write, in almost every domain of activity.
Academic achievement, job performance,
occupational status, income, creativity,
physical health, longevity, learning new skills,
managing complex tasks, leadership,
entrepreneurial success, conflict resolution,
reading comprehension, financial decision-making,
understanding others' perspectives, creative arts,
parenting outcomes, and life satisfaction.
One of the more depressing conversations I've had,
and I don't know why it's depressing,
I have to really think through why it's depressing,
but on IQ and the g-factor,
and that that's something that in large part is genetic,
and it correlates so much with all of these things,
with success in life.
It's like all the inspirational stuff we read about,
like if you work hard and so on,
damn, it sucks that you're born with a hand
that you can't change.
But what if you could?
You're making, basically, a really important point,
and in your article,
it really helped me, it's a nice added perspective
to think about: listen, the science of human intelligence
has shown scientifically
that it just makes life easier and better,
the smarter you are.
And now, let's look at artificial intelligence.
And if that's a way to increase some human intelligence,
then it's only going to make a better life.
That's the argument.
And certainly at the collective level,
we could talk about the collective effect
of just having more intelligence in the world,
which will have very big payoff.
But there's also just at the individual level,
what if every person has a machine,
you know, it's the concept of augment,
Doug Engelbart's concept of augmentation.
You know, what if everybody has an assistant
and the assistant is 140 IQ and you happen to be 110 IQ
and you've got something that basically
is infinitely patient and knows everything about you
and is pulling for you in every possible way
and wants you to be successful.
And anytime you find anything confusing
or want to learn anything
or have trouble understanding something
or want to figure out what to do in a situation, right,
want to figure out how to prepare for a job interview,
like any of these things, like it will help you do it.
And so the combination, you know,
because it will effectively raise your IQ,
will therefore raise the odds of successful life outcomes
in all these areas.
So people below the hypothetical 140 IQ,
it'll pull them up towards 140 IQ.
Yeah, yeah, yeah.
And then of course, you know, people at 140 IQ
will be able to have a peer, right,
to be able to communicate, which is great.
And then people above 140 IQ will have an assistant
that they can farm things out to.
And then look, God willing, you know, at some point,
these things go from future versions,
go from 140 IQ equivalent to 150 to 160 to 180, right?
Like Einstein was estimated to be on the order of 160,
you know, so when we get, you know, 160 AI,
like we'll be, you know,
one assumes creating Einstein-level
breakthroughs in physics.
And then at 180, we'll be, you know,
curing cancer and developing warp drive
and doing all kinds of stuff.
And so it is quite possibly the case,
this is the most important thing that's ever happened,
the best thing that's ever happened,
because precisely because it's a lever
on this single fundamental factor of intelligence,
which is the thing that drives so much of everything else.
Can you still man the case that human plus AI
is not always better than human for the individual?
You may have noticed that there's a lot of smart
assholes running around.
Sure, yes.
Right, and so like,
there are certain people where the smarter they get,
you know, the more arrogant they get, right?
So, you know, there's one huge flaw.
To push back on that, it might be interesting
because when the intelligence is not all coming from you,
but from a system, from another system,
that might actually increase the amount of humility
even in the assholes.
One would hope.
Or it could make assholes more assholes.
You know, that's, I mean, that's psychology to study.
Yeah, exactly.
Another one is smart people are very convinced
that they, you know, have a more rational view of the world
and that they have an easier time seeing through
conspiracy theories and hoaxes, you know,
sort of crazy beliefs and all that.
There's a theory in psychology,
which is about actually smart people.
So for sure, people who aren't as smart
are very susceptible to hoaxes and conspiracy theories.
But it may also be the case that the smarter you get,
you become susceptible in a different way,
which is you become very good at marshaling facts
to fit preconceptions, right?
You become very, very good at assembling
whatever theories and frameworks and pieces of data
and graphs and charts you need to validate
whatever crazy ideas got in your head.
And so you're susceptible in a different way, right?
We're all sheep, but different colored sheep.
Some sheep are better at justifying it, right?
And those are the, you know, those are the smart sheep,
right?
So yeah, look, I would say this:
there are no panaceas, I am not a utopian.
There are no panaceas in life.
You know, I don't believe there are pure positives.
I'm not a transcendental kind of person like that,
so yeah, there are going to be issues.
And, you know, look, smart people,
another thing maybe you could say about smart people
is they are more likely to get themselves in situations
that are, you know, beyond their grasp, you know,
because they're just more confident
in their ability to deal with complexity
and their eyes become bigger,
their cognitive eyes become bigger than their stomach.
You know, so yeah, you could argue those eight different
ways, nevertheless, on net, right?
Clearly, overwhelmingly, again,
if you just extrapolate from what we know
about human intelligence,
you're improving so many aspects of life
if you're upgrading intelligence.
So there'll be assistants at all stages of life.
So when you're younger, there's,
for education, all that kind of stuff,
for mentorship, all of this.
And later on, as you're doing work
and you've developed a skill
and you're having a profession,
you'll have an assistant that helps you
excel at that profession.
So at all stages of life.
Yeah, I mean, look, the theory is augmentation.
This is Doug Engelbart's term.
Engelbart made this observation many, many decades ago
that, you know, basically it's like
you can have this oppositional frame of technology
where it's like us versus the machines.
But what you really do is you use technology
to augment human capabilities.
And then, by the way,
that's how actually the economy develops.
We can talk about the economic side of this,
but that's actually how the economy grows
is through technology augmenting human potential.
And so, yeah, and then you basically have a proxy
or a sort of prosthetic.
So like you've got glasses, you've got a wristwatch,
you've got shoes, you've got these things,
you've got a personal computer,
you've got a word processor,
you've got Mathematica, you've got Google.
This is the latest, viewed through that lens.
AI is the latest in a long series
of basically augmentation methods
to be able to raise human capabilities.
It's just this one is the most powerful one of all
because this is the one that goes directly
to what they call fluid intelligence, which is IQ.
Well, there's two categories of folks that you outline
that worry about or highlight the risks of AI,
and you highlight a bunch of different risks.
I would love to go through those risks
and just discuss them, brainstorm which ones are serious
and which ones are less serious.
But first, the Baptists and the bootleggers.
What are these two interesting groups of folks
who worry about the effect of AI on human civilization?
Or say they do.
Okay, yes, let's say they do.
The Baptists worry, the bootleggers say they do.
So the Baptists and the bootleggers is a metaphor
from economics, from what's called development economics,
and it's this observation that when you get
social reform movements in a society,
you tend to get two sets of people showing up
arguing for the social reform.
And the term Baptists and bootleggers
comes from the American experience
with alcohol prohibition.
And so in the 1900s, 1910s, there was this movement
that was very passionate at the time,
which basically said alcohol is evil
and it's destroying society.
By the way, there was a lot of evidence to support this.
There were very high correlations then,
by the way, and now, between rates of physical violence
and alcohol use.
Almost all violent crimes have either the perpetrator
or the victim or both drunk.
Almost all, and you see this actually in the workplace,
almost all sexual harassment cases,
it's like at a company party and somebody's drunk.
It's amazing how often alcohol actually correlates
to just dysfunction,
it leads to domestic abuse and so forth, child abuse.
And so you had this group of people who were like,
okay, this is bad stuff and we should outlaw it.
And those were quite literally Baptists.
Those were super committed, hardcore Christian activists
in a lot of cases.
There was this woman whose name was Carrie Nation,
who was this older woman who had been in this,
I don't know, disastrous marriage or something.
And her husband had been abusive and drunk all the time.
And she became the icon of the Baptist prohibitionists.
And she was legendary in that era for carrying an ax
and, completely on her own, doing raids of saloons,
taking her ax to all the bottles and kegs in the back.
So a true believer.
An absolute true believer,
with absolutely the purest of intentions.
And again, there's a very important thing here,
which is you could look at this cynically
and you could say the Baptists are delusional extremists,
but you can also say, look, they're right.
Like she had a point, she wasn't wrong
about a lot of what she said.
But it turns out, the way the story goes is it turns out
that there were another set of people
who very badly wanted to outlaw alcohol in those days.
And those were the bootleggers, which was organized crime
that stood to make a huge amount of money
if legal alcohol sales were banned.
And this was in fact, the way the history goes,
is this was actually the beginning
of organized crime in the US.
This was the big economic opportunity that opened that up.
And so they went in together,
well, they didn't literally go in together.
The Baptists did not even necessarily know
about the bootleggers,
because they were on their moral crusade.
The bootleggers certainly knew about the Baptists
and they were like, wow, these people are like
the great front people for shenanigans in the background.
And they got the Volstead Act passed.
And they did in fact ban alcohol in the US.
And you'll notice what happened,
which is people kept drinking.
It didn't work.
People kept drinking.
The bootleggers made a tremendous amount of money.
And then over time it became clear
that it made no sense to make it illegal
and it was causing more problems.
And so then it was revoked.
And here we sit with legal alcohol 100 years later
with all the same problems.
And the whole thing was this like giant misadventure.
The Baptists got taken advantage of by the bootleggers
and the bootleggers got what they wanted.
And that was that.
The same two categories of folks are now sort of suggesting
the development of artificial intelligence
should be regulated.
100%, yeah, it's the same pattern.
And the economists will tell you
it's the same pattern every time.
Like this is what happened with nuclear power.
This is what happened, which is another interesting one.
But like, yeah, this happens dozens and dozens of times
throughout the last 100 years.
And this is what's happening now.
And you write that it isn't sufficient
to simply identify the actors and impugn their motives.
We should consider the arguments of both the Baptists
and the bootleggers on their merits.
So let's do just that.
Risk number one.
Will AI kill us all?
Yes.
So what do you think about this one?
What do you think is the core argument here?
That the development of AGI, perhaps better said,
will destroy human civilization?
No, first of all, you just did a sleight of hand
because we went from talking about AI to AGI.
Is there a fundamental difference there?
I don't know.
What's AGI?
What's AI?
What's intelligence?
Oh, I know what AI is.
AI is machine learning.
What's AGI?
I think we don't know what the bottom of the well
of machine learning is or what the ceiling is.
Because just to call something machine learning,
or just to call something statistics,
or just to call it math or computation,
doesn't diminish it. Nuclear weapons are just physics.
So to me it's very interesting and surprising
how far machine learning has taken us.
No, but we knew that nuclear physics would lead to weapons.
That's why the scientists of that era
were always in this huge dispute about building the weapons.
This is different.
AGI is different.
Where does machine learning lead?
Do we know?
We don't know, but this is my point.
It's different.
We actually don't know.
And this is where the sleight of hand kicks in, right?
This is where it goes from being a scientific topic
to being a religious topic.
And that's why I specifically called out,
because that's what happens.
They do the vocabulary shift and all of a sudden
you're talking about something totally
that's not actually real.
Well then maybe you could also,
as part of that, define the Western tradition
of millennialism.
Yes, end of the world, apocalypse.
What is it?
Apocalypse cults.
Apocalypse cults.
Well, so we of course live in a Judeo-Christian,
but primarily Christian, kind of saturated,
kind of Christian, post-Christian, secularized Christian
kind of world in the West.
And of course, core to Christianity
is the idea of the second coming and the revelations
and Jesus returning and the thousand year utopia on earth
and then the rapture and all that stuff.
We collectively, as a society,
we don't necessarily take all that fully seriously now.
So what we do is we create our secularized versions of that.
We keep looking for utopia.
We keep looking for basically the end of the world.
And so what you see over decades is basically a pattern
of these sort of, this is what cults are.
This is how cults form,
as they form around some theory of the end of the world.
And so the people's temple cult, the Manson cult,
the Heaven's Gate cult, the David Koresh cult,
what they're all organized around is like,
there's gonna be this thing that's gonna happen
that's gonna basically bring civilization crashing down.
And then we have this special elite group of people
who are gonna see it coming and prepare for it.
And then there are the people
who are either going to stop it or, failing that,
they're gonna be the people who survive to the other side
and ultimately get credit for having been right.
Why is that so compelling, do you think?
Because it satisfies this very deep need we have
for transcendence and meaning
that got stripped away when we became secular.
Yeah, but why does the transcendence involve
the destruction of human civilization?
Because, like, it's a very deep psychological thing,
because it's like, how plausible is it
that we live in a world
where everything's just kind of all right?
Right, how exciting is that, right?
We want more than that.
But that's the deep question I'm asking.
Why is it not exciting to live in a world
where everything's just all right?
Because I think most of the animal kingdom
would be so happy with just all right,
because that means survival.
Why are we, maybe that's what it is.
Why are we conjuring up things to worry about?
So C.S. Lewis called it the God-shaped hole.
So there's a God-shaped hole in the human experience,
consciousness, soul, whatever you want to call it,
where there's got to be something
that's bigger than all this.
There's got to be something transcendent.
There's got to be something that is bigger, right?
Bigger, a bigger purpose, a bigger meaning.
And so we have run the experiment of,
we're just gonna use science and rationality
and everything's just gonna be as it appears.
And a large number of people have found that
very deeply wanting and have constructed narratives.
And this is the story of the 20th century, right?
Communism was one of those.
Communism was a form of this.
Nazism was a form of this.
Some people, you can see movements like this
playing out all over the world right now.
So you construct a kind of devil, a kind of source of evil,
and we're going to transcend beyond it.
Yeah, and the millenarians kind of,
when you see a millenarian cult,
they put a really specific point on it,
which is end of the world, right?
There is some change coming.
And that change that's coming is so profound
and so important that it's either gonna lead
to utopia or hell on Earth, right?
And then it's like,
what if you actually knew that that was going to happen?
What would you do, right?
How would you prepare yourself for it?
How would you come together
with a group of like-minded people?
What would you do?
Would you plan caches of weapons in the woods?
Would you, I don't know, create underground bunkers?
Would you spend your life trying to figure out
a way to avoid having it happen?
Yeah, that's a really compelling, exciting idea
to have a club around,
like a get-together on a Saturday night
and drink some beers and talk about the end of the world
and how you are the only ones who have figured it out.
And then once you lock in on that,
how can you do anything else with your life?
This is obviously the thing that you have to do.
And then there's a psychological effect that you alluded to.
There's a psychological effect where if you take
a set of true believers and you leave them to themselves,
they get more radical, right?
Because they self-radicalize each other.
That said, it doesn't mean they're not sometimes right.
Yeah, the end of the world might be, yes, correct.
Like they might be right.
Yeah, I have some pamphlets for you.
Exactly.
I mean, there's, I mean, we'll talk about nuclear weapons
because you have a really interesting little moment
that I learned about in your essay, but you know,
sometimes it could be right because we're still,
we're developing more and more powerful technologies
in this case, and we don't know what the impact
they will have on human civilization.
Well, we can highlight all the different predictions
about how it will be positive, but the risks are there.
And you discussed some of them.
Well, the steel man,
actually the steel man and its refutation are the same,
which is, you can't predict what's going to happen, right?
You can't rule out that this will end everything, right?
But the response to that is you have just made
a completely non-scientific claim.
You've made a religious claim, not a scientific claim.
How does it get disproven?
And there's no, by definition with these kinds of claims,
there's no way to disprove them, right?
And so there's no, you just go right down the list.
There's no hypothesis.
There's no testability of the hypothesis.
There is no way to falsify the hypothesis.
There's no way to measure progress along the arc.
Like it's just all completely missing.
And so it's not scientific.
Well, I don't think it's completely missing.
It's somewhat missing.
So for example, the people that say AI
is going to kill all of us,
I mean, they usually have ideas about how to do that,
whether it's the paperclip maximizer or, you know,
it escapes, there's mechanism by which you can imagine
it killing all humans.
Models.
And you can disprove it by saying there is a limit
to the speed at which intelligence increases.
Maybe take the sort of rigorously
described model of how it could happen
and say, no, here's a physics limitation.
There's like a physical limitation to how these systems
would actually do damage to human civilization.
And it is possible they will kill 10 to 20%
of the population, but it seems impossible
for them to kill 99%.
There's practical counterarguments, right?
So you mentioned basically what I described
is the thermodynamic counterargument,
which is sitting here today.
It's like, where would the evil AGI get the GPUs?
Cause like they don't exist.
So you're going to have a very frustrated baby evil AGI
who's going to be like trying to buy Nvidia stock
or something to get them to finally make some chips, right?
So the serious form of that is the thermodynamic argument,
which is like, okay, where's the energy going to come from?
Where's the processor going to be running?
Where's the data center going to be happening?
How is this going to be happening in secret?
Such that nobody notices, you know?
so that's a practical counterargument
to the runaway AGI thing.
And we can argue that, discuss that.
I have a deeper objection to it,
which is that this is all forecasting.
It's all modeling.
It's all future prediction.
It's all future hypothesizing.
It's not science.
Sure.
It is the opposite of science.
So, to pull up Carl Sagan:
extraordinary claims require extraordinary proof, right?
These are extraordinary claims.
The policies that are being called for, right?
To prevent this are of extraordinary magnitude.
And I think we're going to cause extraordinary damage.
And this is all being done on the basis of something
that is literally not scientific.
It's not a testable hypothesis.
So the moment you say AI's going to kill all of us,
therefore we should ban it or we should regulate
all that kind of stuff.
That's when it starts getting serious.
Or start, you know, military airstrikes on data centers.
Oh boy.
Right?
And like.
Yeah, this is when it starts getting real weird.
So here's the problem with millenarian cults.
They have a hard time staying away from violence.
Yeah, but violence is so fun,
if you're on the right end of it.
They have a hard time avoiding violence.
The reason they have a hard time avoiding violence
is if you actually believe the claim, right?
Then what would you do to stop the end of the world?
Well, you would do anything, right?
And so, and this is where you get,
and again, if you just look at the history
of millenarian cults,
this is where you get the people's temple
and everybody killing themselves in the jungle.
And this is where you get Charles Manson
and you know, sending in, you need to kill the pigs.
Like this is the problem with these.
They have a very hard time drawing the line
at actual violence.
And I think in this case,
there's, I mean, they're already calling for it like today.
And you know, where this goes from here
is they get more worked up.
Like, I think it's like really concerning.
Okay, but that's kind of the extremes.
You know, the extremes of anything are always concerning.
It's also possible to kind of believe
that AI has a very high likelihood of killing all of us.
But there's, and therefore we should maybe consider
slowing development or regulating.
So not violence or any of these kinds of things,
but saying like, all right, let's take a pause here.
You know, biological weapons, nuclear weapons,
like whoa, whoa, whoa, whoa, whoa.
This is like serious stuff.
We should be careful.
So it is possible to kind of
have a more rational response, right?
If you believe this risk is real.
Believe.
Yes, so is it possible to have a scientific approach
to the prediction of the future?
I mean, we just went through this with COVID.
What do we know about modeling?
Well, I mean, okay.
What did we learn about modeling with COVID?
There's a lot of lessons.
They didn't work at all.
They worked poorly.
The models were terrible.
The models were useless.
I don't know if the models were useless
or the people interpreting the models
and then the centralized institutions
that were creating policy rapidly based on the models
and leveraging the models in order to support their
narratives versus actually interpreting the error bars
in the models and all that kind of stuff.
What you had with COVID, my view,
you had with COVID is you had these experts showing up
and they claimed to be scientists
and they had no testable hypotheses whatsoever.
They had a bunch of models.
They had a bunch of forecasts
and they had a bunch of theories
and they laid these out in front of policymakers
and policymakers freaked out and panicked, right?
And implemented a whole bunch of like,
really like terrible decisions
that we're still living with the consequences of.
And there was never any empirical foundation
to any of the models.
None of them ever came true.
To push back, there were certainly Baptists
and bootleggers in the context of this pandemic,
but there's still a usefulness to models, no?
So not if they're, I mean,
not if they're reliably wrong, right?
Then they're actually like anti-useful, right?
They're actually damaging.
But what do you do with a pandemic?
What do you do with any kind of threat?
Don't you want to kind of have several models to play with
as part of the discussion of like,
what the hell do we do here?
I mean, do they work?
Because they're an expectation that they actually like work,
that they have actual predictive value.
I mean, as far as I can tell with COVID,
we just, the policymakers just psyched themselves
into believing that there was substance.
I mean, look, the scientists were at fault.
The quote unquote scientists showed up.
So I had some insight into this.
So there was a, remember the Imperial College models
out of London were the ones that were like,
these are the gold standard models.
So a friend of mine runs a big software company
and he was like, wow, this is like, COVID's really scary.
And he's like, you know, he contacted this researcher
and he's like, you know, do you need some help?
You've been building this model on your own
for 20 years.
Do you need some, would you like us,
our coders to basically restructure it
so it can be fully adapted for COVID?
And the guy said yes and sent over the code.
And my friend said it was like the worst spaghetti code
he's ever seen.
That doesn't mean it's not possible to construct
a good model of pandemic with the correct error bars
with a high number of parameters that are continuously
many times a day updated as we get more data
about a pandemic.
I would like to believe when a pandemic hits the world,
the best computer scientists in the world,
the best software engineers respond aggressively.
And as input, take the data that we know about the virus
and as an output, say, here's what's happening
in terms of how quickly it's spreading,
what that leads to in terms of hospitalization and deaths
and all that kind of stuff.
Here's how likely, how contagious it likely is.
Here's how deadly it likely is based on different conditions
based on different ages and demographics
and all that kind of stuff.
So here's the best kinds of policy.
It feels like you could have models, machine learning,
that kind of, they don't perfectly predict the future,
but they help you do something because there's pandemics
that are like, meh, they don't really do much harm.
And there's pandemics, you can imagine them,
they could do a huge amount of harm.
Like they can kill a lot of people.
So you should probably have some kind of data-driven models
that keep updating that allow you to make decisions
based like, how bad is this thing?
Now you can criticize how horrible all of that went
with the response to this pandemic,
but I just feel like there might be some value to models.
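To make the kind of model being described concrete, here is a minimal sketch of a data-driven epidemic forecast, a basic SIR model in Python. Everything in it is illustrative: the parameter values are made up, and a real system would refit them continuously as case data arrived, which is exactly where the error bars would come from.

```python
# Toy SIR epidemic model: the kind of data-driven forecast being described.
# All parameter values here are illustrative, not fit to any real pandemic.

def simulate_sir(population, infected, beta, gamma, days):
    """Step a basic SIR model forward one day at a time.

    beta  = transmission rate (would be re-estimated daily from new case data)
    gamma = recovery rate (1 / average infectious period)
    """
    s, i, r = population - infected, infected, 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

# In a real system, beta would be refit as data arrives; here we just compare
# two plausible values to show how sensitive the forecast is to the inputs.
for beta in (0.2, 0.3):
    peak = max(simulate_sir(1_000_000, 10, beta, 1 / 10, 365))
    print(f"beta={beta}: projected peak infections ~{peak:,.0f}")
```

Even this toy version shows the core difficulty: a small change in one estimated parameter moves the projected peak enormously, which is why the error bars matter more than the point forecast.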
So to be useful, at some point it has to be predictive.
So the easy thing for me to do is to say,
obviously you're right.
Obviously I want to see that just as much as you do
because anything that makes it easier to navigate
through society, through a wrenching risk like that,
that sounds great.
The harder objection to it is just simply
you are trying to model a complex dynamic system
with 8 billion moving parts, like not possible.
It's very tough.
Can't be done, complex systems can't be done.
Machine learning says hold my beer, but it's possible, no?
I don't know, I would like to believe that it is.
Yeah.
I'll put it this way, I think where you and I would agree
is I think we would like that to be the case.
We are strongly in favor of it.
I think we would also agree that no such thing,
with respect to COVID or pandemics, no such thing,
at least neither you nor I think are aware,
I'm not aware of anything like that today.
My main worry with the response to the pandemic is that,
same as with aliens, is that even if such a thing existed
and it's possible it existed,
the policymakers were not paying attention.
There was no mechanism that allowed those kinds of models
to percolate out.
Oh, I think we had the opposite problem during COVID.
I think the policymakers, I think these people
with basically fake science had too much access
to the policymakers.
Right, but the policymakers also wanted,
they had a narrative in mind,
and they also wanted to use whatever model
that fit that narrative to help them out.
So it felt like there was a lot of politics
and not enough science.
Although a big part of what was happening,
a big reason we got lockdowns for as long as we did
was because these scientists came in
with these like doomsday scenarios
that were like just like completely off the hook.
Scientists in quotes, that's not-
Quote, unquote, scientists.
That's not, let's give love to science.
That is the way out.
Science is a process of testing hypotheses.
Modeling does not involve testable hypotheses, right?
Like, I don't even know
that modeling actually qualifies as science.
Maybe that's a side conversation
we could have some time over a beer.
That's a really interesting part.
What do we do about the future?
I mean, what-
So number one is, we start with humility.
Goes back to this thing of how do we determine the truth?
Number two is we don't believe,
you know, it's the old, I've got a hammer,
everything looks like a nail, right?
Oh, this is one of the reasons I gave you,
I gave Lex a book, the topic of which
is what happens when scientists basically stray
off the path of technical knowledge
and start to weigh in on politics and societal issues.
In this case, philosophers.
Well, in this case, philosophers,
but he actually talks in this book about like Einstein.
He talks about actually about the nuclear age and Einstein.
He talks about the physicists actually doing,
doing very similar things at the time.
The book is When Reason Goes on Holiday:
Philosophers in Politics by Neven Sesardic.
And it's just a story,
there are other books on this topic,
but this is a new one that's really good.
That's just a story of what happens
when experts in a certain domain decide to weigh in
and become basically social engineers
and political, you know, basically political advisors.
And it's just a story of just unending catastrophe.
Right, and I think that's what happened with COVID again.
Yeah, I found this book a highly entertaining
and eye-opening read filled with amazing anecdotes
of irrationality and craziness
by famous recent philosophers.
This is definitely right.
After you read this book,
you will not look at Einstein the same.
Oh boy. Yeah.
Don't destroy my heroes.
He will not be a hero of yours anymore.
I'm sorry, you probably shouldn't read the book.
But here's the thing, the AI risk people,
they don't even have the COVID model.
At least not that I'm aware of.
Like there's not even the equivalent of the COVID model.
They don't even have the spaghetti code.
They've got a theory and a warning and a this and a that.
And like, if you ask like, okay, well, here's,
I mean, the ultimate example is, okay, how do we know,
right, how do we know that an AI is running away?
Like how do we know that the Foom takeoff thing
is actually happening?
And the only answer that any of these guys have given
I've ever seen is, oh, it's when the loss rate,
the loss function in the training drops, right?
That's when you need to like shut down the data center.
Right. And it's like, well, that's also what happens
when you're successfully training a model.
Like, what even is, this is not science.
This is not, it's not anything.
It's not a model, it's not anything.
There's nothing to it, arguing with it is like,
you know, pushing Jell-O.
Like, what do you even respond to?
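To make the objection concrete, here is a toy sketch, in Python, of the "shut down the data center when the loss drops" criterion. The loss curve and the detector are both invented for illustration; the point is that a perfectly ordinary training run trips the alarm, so the signal cannot distinguish a runaway from a model that is simply learning.

```python
# A naive "foom detector" that flags rapid drops in training loss.
# The point: an ordinary, healthy training run trips it too, so the signal
# doesn't distinguish "runaway AI" from "model is learning as intended."

def simulated_loss(step):
    # Typical power-law-ish loss curve for a normal training run.
    return 4.0 * (step + 1) ** -0.3

def foom_alarm(losses, window=10, threshold=0.05):
    """Fire if loss fell more than `threshold` (fractionally) over `window` steps."""
    if len(losses) < window:
        return False
    drop = (losses[-window] - losses[-1]) / losses[-window]
    return drop > threshold

losses = []
for step in range(200):
    losses.append(simulated_loss(step))
    if foom_alarm(losses):
        print(f"step {step}: alarm fired on a perfectly normal run")
        break
```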
So just push back on that.
I don't think they have good metrics of, yeah,
when the Foom is happening,
but I think it's possible to have that.
Like I just, as you speak now, I mean,
it's possible to imagine there could be measures.
It's been 20 years.
No, for sure.
But it's been only weeks since we had
a big enough breakthrough in language models.
We can start to actually have this,
the thing is the AI doomer stuff didn't have
any actual systems to really work with.
And now there's real systems you can start to analyze,
like how does this stuff go wrong?
And I think you kind of agree that there is a lot of risks
that we can analyze.
The benefits outweigh the risks in many cases.
Well, the risks are not existential.
Yes, well.
Not in the Foom paperclip sense, not in this.
Okay, there's another sleight of hand
that you just alluded to.
There's another sleight of hand that happens,
which is very-
I think I'm very good at the sleight of hand thing.
Which is very not scientific.
So the book, Super Intelligence, right?
Which is like the Nick Bostrom's book,
which is like the origin of a lot of this stuff,
which was written, you know,
whatever 10 years ago or something.
So he does this really fascinating thing in the book,
which is he basically says there are many possible routes
to machine intelligence, to artificial intelligence.
And he describes all the different routes
to artificial intelligence, all the different possible,
everything from biological augmentation through to,
you know, all these different things.
One of the ones that he does not describe
is large language models,
because of course the book was written
before they were invented, and so they didn't exist.
In the book, he describes them all,
and then he proceeds to treat them all
as if they're exactly the same thing.
He presents them all as sort of an equivalent risk
to be dealt with in an equivalent way
to be thought about the same way.
And then the risk, the quote-unquote risk
that's actually emerged
is actually a completely different technology
than he was even imagining.
And yet all of his theories and beliefs
are being transplanted by this movement,
like straight onto this new technology.
And so again, like, there's no other area of science
or technology where you do that.
Like when you're dealing with like organic chemistry
versus inorganic chemistry, you don't just like say,
oh, with respect to like either one, basically,
maybe, you know, waking up and eating the world
or something, like they're just gonna operate the same way.
Like you don't.
But you can start talking about like as we get
more and more actual systems
that start to get more and more intelligent,
you can start to actually have
more scientific arguments here.
Like, you know, high level,
you can talk about the threat of autonomous weapons systems
back before we had any automation in the military.
And that would be like very fuzzy kind of logic.
But the more and more you have drones
that are becoming more and more autonomous,
you can start imagining,
okay, what does that actually look like?
And what's the actual threat of autonomous weapons systems?
How does it go wrong?
And still it's very vague.
We start to get a sense of like, all right,
it should probably be illegal or wrong or not allowed
to do like mass deployment of fully autonomous drones
that are doing aerial strikes on large areas.
I think it should be required.
Right, so that's- No, no, no, no.
I think it should be required
that aerial vehicles only be automated.
Okay, so you wanna go the other way?
I wanna go the other way.
Okay. I think it's obvious
that the machine is gonna make a better decision
than the human pilot.
I think it's obvious that it's in the best interest
of both the attacker and the defender
and humanity at large
if machines are making more of these decisions
and not people.
I think people make terrible decisions in times of war.
But like there's ways this can go wrong too, right?
Well, wars go terribly wrong now.
This goes back to the,
this is that whole thing about like the self-driving car
need to be perfect versus
does it need to be better than the human driver?
Does the automated drone need to be perfect
or does it need to be better than a human pilot
at making decisions under enormous amounts
of stress and uncertainty?
Yeah, well, on average,
the worry that AI folks have is the runaway.
They're gonna come alive, right?
Then again, that's the sleight of hand, right?
Or not come alive.
No, hold on a second.
You lose control, I thought.
But then they're gonna develop goals of their own.
They're gonna develop a mind of their own.
They're gonna develop their own, right?
No, more like a Chernobyl-style meltdown,
like just bugs in the code that accidentally
result in the bombing of large civilian areas
to a degree that's not possible
in the current military strategies controlled by humans.
Actually, we've been doing a lot of mass bombings
of cities for a very long time.
Yes, and a lot of civilians died.
And a lot of civilians died.
And if you watch the documentary,
The Fog of War, McNamara,
it spends a big part of it talking about the firebombing
of the Japanese cities,
burning them straight to the ground, right?
The devastation in Japan.
American military firebombing the cities in Japan
was a considerably bigger devastation
than the use of nukes, right?
So we've been doing that for a long time.
We also did that to Germany,
by the way Germany did that to us, right?
That's an old tradition.
The minute we got airplanes,
we started doing indiscriminate bombing.
So one of the things that the modern US military
can do with technology, with automation,
but technology more broadly
is a higher and higher precision strikes.
Yeah, and so precision is obviously precision,
and this is the JDAM, right?
So there was this big advance called the JDAM,
which basically was strapping a GPS receiver
to an unguided bomb and turning it into a guided bomb.
And yeah, that's great.
Like, look, that's been a big advance.
And that's like a baby version of this question,
which is, okay, do you want like the human pilot
like guessing where the bomb's gonna land,
or do you want like the machine
like guiding the bomb to its destination?
That's a baby version of the question.
The next version of the question is,
do you want the human or the machine
deciding whether to drop the bomb?
Everybody just assumes the human's gonna do a better job
for what I think are fundamentally suspicious reasons.
Emotional, psychological reasons.
Yes, I think it's very clear
that the machine's gonna do a better job
making that decision,
because the humans making that decision are godawful,
just terrible, right?
And so, yeah, so this is the thing.
And then let's get to the,
one more sleight of hand.
Yes, sure, please.
I'm a magician, you could say.
One more sleight of hand.
These things are gonna be so smart, right,
that they're gonna be able to destroy the world
and wreak havoc and do all this stuff
and plan and do all this stuff and evade us
and have all their secret things
and their secret factories and all this stuff.
But they're so stupid
that they're gonna get tangled up in their code.
And they're not gonna come alive,
but there's gonna be some bug
that's gonna cause them to turn us all into paperclips.
That they're gonna be genius in every way
other than the actual bad goal.
And that's just a ridiculous discrepancy.
And you can prove this today.
You can actually address this today
for the first time with LLMs,
which is you can actually ask LLMs
to resolve moral dilemmas.
So you can create the scenario,
dot, dot, dot, this, that, this, that, this, that.
What would you as the AI do in this circumstance?
And they don't just say destroy all humans,
destroy all humans.
They will give you actually very nuanced
moral, practical, trade-off-oriented answers.
And so we actually already have the kind of AI
that can actually think this through
and can actually reason about goals.
Well, the hope is that AGI
or like various super-intelligent systems
have some of the nuance that LLMs have.
And the intuition is they most likely will
because even these LLMs have the nuance.
LLMs are really,
this is actually worth spending a moment on,
LLMs are really interesting
to have moral conversations with.
And that, I didn't expect I'd be having
a moral conversation with a machine in my lifetime.
And let's remember,
we're not really having a conversation with a machine.
We're having a conversation with the entirety
of the collective intelligence of the human species.
Exactly.
Yes, correct.
But it's possible to imagine autonomous weapons systems
that are not using LLMs.
If they're smart enough to be scary,
why are they not smart enough to be wise?
Like that's the part where it's like,
I don't know how you get the one without the other.
Is it possible to be super-intelligent
without being super-wise?
Well, again, you're back to that.
I mean, then you're back to a classic autistic computer,
right?
Like you're back to just like a blind rule follower.
This is the paperclip thing:
I've got this core rule
and I'm just going to follow it to the end of the earth.
And it's like, well,
but everything you're going to be doing to execute that rule
is going to be super genius level
that humans aren't going to be able to counter.
It's just, it's a mismatch in the definition
of what the system is capable of.
Unlikely, but not impossible, I think.
But again, here you get to like, okay, like.
No, I'm saying it's unlikely, but not impossible.
If it's unlikely,
that means the fear should be correctly calibrated.
Extraordinary claims require extraordinary proof.
Well, okay.
So one interesting sort of tangent
I would love to take on this
because you mentioned this in the essay about nuclear,
which was also,
I mean, you don't shy away
from a little bit of a spicy take.
So Robert Oppenheimer famously said,
now I am become death, the destroyer of worlds.
As he witnessed the first detonation
of a nuclear weapon on July 16th, 1945.
And you write an interesting historical perspective,
quote, recall that John von Neumann
responded to Robert Oppenheimer's famous hand wringing
about the role of creating nuclear weapons,
which you note helped end World War II
and prevent World War III
with some people confessed guilt
to claim credit for the sin.
And you also mentioned that Truman was harsher
after meeting Oppenheimer.
He said, don't let that crybaby in here again.
Real quote, real quote, by the way,
from Dean Acheson.
Oh boy.
Because Oppenheimer didn't just say the famous line.
He then spent years going around,
basically moaning and going on TV
and going into the White House
and basically just doing this hair shirt thing,
this sort of self-critical, like,
oh my God, I can't believe how awful I am.
So he's widely considered,
perhaps because of the hand-wringing,
the father of the atomic bomb.
This was von Neumann's criticism of him:
he tried to have his cake and eat it too.
Like he wanted to, and so.
And von Neumann, of course,
has a very different kind of personality
and he's just like, yeah, that's good.
This is like an incredibly useful thing.
I'm glad we did it.
Yeah.
Well, von Neumann is widely credited
as being one of the smartest humans of the 20th century.
There's certain people, everybody says like,
this is the smartest person I've ever met
when they've met him.
Anyway, smart doesn't mean wise.
So I would love to sort of,
can you make the case both for and against
the critique of Oppenheimer here?
Because we're talking about nuclear weapons.
Boy, do they seem dangerous.
So the critique goes deeper, and I left this out.
Here's the real substance, I left it out
because I didn't want to dwell on nukes in my paper.
But here's the deeper thing that happened.
And I'm really curious, this movie coming out this summer,
I'm really curious to see how far he pushes this
because this is the real drama in the story,
which is it wasn't just a question of are nukes good or bad,
it was a question of should Russia also have them?
And what actually happened was America invented the bomb,
Russia got the bomb, they got the bomb through espionage.
They got American scientists and foreign scientists
working on the American project,
some combination of the two,
basically gave the Russians the designs for the bomb.
And that's how the Russians got the bomb.
There's this dispute to this day
of Oppenheimer's role in that.
If you read all the histories, you get a kind of composite picture,
and by the way, we now know a lot actually
about Soviet espionage in that era
because there's been all this declassified material
in the last 20 years that actually shows
a lot of very interesting things.
But if you kind of read all the histories,
what you kind of get is Oppenheimer himself
probably did not hand over the nuclear secrets himself.
However, he was close to many people who did,
including family members.
And there were other members of the Manhattan Project
who were Russian Soviet assets and did hand over the bomb.
And so the view that Oppenheimer and people like him had
that this thing is awful and terrible and oh my God,
and all this stuff, you could argue,
fed into this ethos at the time that resulted
in people thinking that, Baptists thinking
that the only principle thing to do
is to give the Russians the bomb.
And so the moral beliefs on this thing
and the public discussion and the role
that the inventors of this technology play,
this is the point of this book,
when they kind of take on
this sort of public intellectual moral kind of thing,
it can have real consequences, right?
Because we live in a very different world today
because Russia got the bomb than we would have lived in
had they not gotten the bomb, right?
The entire 20th century, second half of the 20th century
would have played out very different
had those people not given Russia the bomb.
And so the stakes were very high then.
The good news today is nobody's sitting here today,
I don't think, worrying about an analogous situation
with respect to, I'm not really worried
that Sam Altman's gonna decide to give the Chinese
the design for AI, although he did just speak
at a Chinese conference, which is interesting.
But however, I don't think that's what's at play here.
But what's at play here are all these other fundamental
issues around what do we believe about this
and then what laws and regulations and restrictions
that we're gonna put on it.
And that's where I draw a direct straight line.
Anyway, and my reading of the history on nukes
is the people who were doing the full hair-shirt public,
this is awful, this is terrible,
actually had catastrophically bad results
from taking those views.
And that's what I'm worried is gonna happen again.
But is there a case to be made that you really need
to wake the public up to the dangers of nuclear weapons
when they were first dropped?
Really educate them on this is extremely dangerous
and destructive weapon.
I think the education kind of happened quick and early.
Like it was pretty obvious.
We dropped one bomb and destroyed an entire city.
Yeah, so 80,000 people dead.
But the reporting of that,
you can report that in all kinds of ways.
Wars, you can do all kinds of slants,
like war is horrible, war is terrible.
You can make it seem like the use of nuclear weapons
is just a part of war and all that kind of stuff.
Something about the reporting and the discussion
of nuclear weapons resulted in us being terrified
and in awe of the power of nuclear weapons.
And that potentially fed in a positive way
towards the game theory of mutually assured destruction.
Well, so this gets to what actually happens.
Some of it is me playing devil's advocate here.
Yeah, yeah, sure, of course.
Let's get to what actually happened
and then kind of back into that.
So what actually happened, I believe,
and again, I think this is a reasonable reading of history,
is what actually happened was nukes then prevented
World War III and they prevented World War III
through the game theory of mutually assured destruction.
Had nukes not existed, there would have been no reason
why the Cold War did not go hot, right?
And the military planners at the time,
on both sides, thought that there was gonna be
World War III on the plains of Europe
and they thought there was gonna be
like 100 million people dead, right?
It was like the most obvious thing in the world to happen.
And it's the dog that didn't bark, right?
It may be like the best single net thing
that happened in the entire 20th century,
is that that didn't happen.
Yeah, actually, just on that point,
you say a lot of really brilliant things.
It hit me, just as you were saying it,
I don't know why it hit me for the first time,
but we got two wars in a span of like 20 years.
We could have kept getting more and more world wars
and more and more ruthless.
It actually, you could have had a US versus Russia war.
You could have.
By the way, there's another hypothetical scenario.
The other hypothetical scenario is the Americans
got the bomb, the Russians didn't, right?
And then America is the big dog.
And then maybe America would have had the capability
to actually roll back the Iron Curtain.
I don't know whether that would have happened,
but it's entirely possible, right?
And the act of these people who had these moral positions
about, because they could forecast, they could model,
they could forecast the future of how the
technology would get used, made a horrific mistake,
because they basically ensured that the Iron Curtain
would continue for 50 years longer than it would have.
And again, these are counterfactuals.
I don't know that that's what would have happened.
But the decision to hand the bomb over was a big decision.
Made by people who were very full of themselves.
Yeah, but so me as an American, me as a person
that loves America, I also wonder if US was the only ones
with the nuclear weapons.
That was the argument for handing it over,
from the guys who handed over the bomb.
That was actually their moral argument.
Yeah, I would probably not hand it over to,
I would be careful about the regimes you hand it over to.
Maybe give it to like the British or something.
Or like a democratically elected government.
Well, there are people to this day who think
that those Soviet spies did the right thing,
because they created a balance of terror
as opposed to the US having it alone,
and by the way, let me, let me.
Balance of terror.
Let's tell the full version.
That has such a sexy ring to it.
Okay, so the full version of the story is,
John von Neumann's a hero of both yours and mine.
The full version of the story is,
he advocated for a first strike.
So when the US had the bomb and Russia did not,
he advocated for, he said, we need to strike them right now.
Strike Russia.
Yeah.
Yes.
Von Neumann.
Yes, because he said World War III is inevitable.
He was very hardcore.
His theory was, his theory was World War III is inevitable.
We're definitely going to have World War III.
The only way to stop World War III
is we have to take them out right now.
And we have to take them out right now
before they get the bomb,
because this is our last chance.
Now again, like.
Is this an example of philosophers in politics?
I don't know if that one's in there or not,
but this is in the standard biographies.
No, but is it, meaning is that.
Yeah, this is on the other side.
So most of the case studies,
most of the case studies in books like this
are the crazy people on the left.
Yeah.
Von Neumann is a story,
arguably of the crazy people on the right.
Yeah, stick to computing, John.
Well, this is the thing,
and this is the general principle,
it goes back to our core thing,
which is like, I don't know whether any of these people
should be making any of these calls.
Yeah.
Because there's nothing in either Von Neumann's background
or Oppenheimer's background
or any of these people's background
that qualifies them as moral authorities.
Yeah, well, this actually brings up the point of, in AI,
who are the good people to reason about the morality,
the ethics, of it?
Outside of these risks, outside of, like,
the more complicated stuff that you agree on, which is,
this will go into the hands of bad guys,
and all the kinds of things they'll do with it
are dangerous in interesting, unpredictable ways,
and who is the right person,
who are the right kinds of people,
to make decisions on how to respond to it?
Are these tech people?
So the history of these fields,
this is what he talks about in the book,
the history of these fields is that the competence
and capability and intelligence and training
and accomplishments of senior scientists and technologists
working on a technology,
and then being able to make moral judgments
on the use of that technology,
that track record is terrible.
That track record is like catastrophically bad.
The people that develop that technology
are usually not going to be the right people.
So the claim is, of course, they're the knowledgeable ones,
but the problem is they've spent their entire life
in a lab, right?
They're not theologians.
So what you find when you read this
and when you look at these histories,
what you find is they generally are very thinly informed
on history, on sociology, on theology, on morality, ethics.
They tend to manufacture their own worldviews from scratch.
They tend to be very sort of thin.
They're not remotely the arguments that you would be having
if you got a group of highly qualified theologians
or philosophers or, you know.
Well, let me sort of,
as the devil's advocate takes a sip of whiskey,
say that I agree with that,
but also it seems like the people who are doing
the ethics departments in these tech companies
go sometimes the other way.
Yes, they're definitely, yes.
They're not nuanced on history or theology
or this kind of stuff.
It almost becomes a kind of outraged activism
towards directions that don't seem to be grounded
in history and humility and nuance.
It's, again, drenched with arrogance.
So I'm not sure which is worse.
Well, no, they're both bad.
So definitely not them either.
But I guess.
But look, this is a hard.
Yeah, it's a hard problem.
This is a hard problem.
This goes back to where we started,
which is, okay, who has the truth?
And it's like, well, you know,
how do societies arrive at truth
and how do we figure these things out?
And our elected leaders play some role in it.
You know, we all play some role in it.
There have to be some set of public intellectuals
at some point that bring rationality
and judgment and humility to it.
Those people are few and far between.
We should probably prize them very highly.
Yeah, celebrate humility in our public leaders.
So getting to risk number two,
will AI ruin our society?
Short version, as you write,
if the murder robots don't get us,
the hate speech and misinformation will.
And the action you recommend, in short,
don't let the thought police suppress AI.
Well, what is this risk
of the effect of misinformation on society
that's going to be catalyzed by AI?
Yeah, so this is the social media.
This is what you just alluded to.
It's the activism kind of thing
that's popped up in these companies in the industry.
And it's basically, from my perspective,
it's basically part two of the war
that played out over social media over the last 10 years.
Because you probably remember,
social media 10 years ago was basically,
who even wants this?
Who wants a photo of what your cat had for breakfast?
Like, this stuff is silly and trivial,
and why can't these nerds figure out
how to invent something useful and powerful?
And then certain things happened in the political system,
and then sort of the polarity on that discussion
switched all the way to social media
as the worst, most corrosive, most terrible,
most awful technology ever invented,
and then it leads to terrible politicians and policies
and politics and all this stuff.
And that all got catalyzed into this very big
kind of angry movement,
both inside and outside the companies,
to kind of bring social media to heel.
And that got focused in particularly on two topics,
so-called hate speech and so-called misinformation.
And that's been the saga playing out for the last decade.
And I don't even really want to even argue
the pros and cons of the sides,
just to observe that that's been a huge fight
and has had big consequences to how these companies operate.
Basically, those same sets of theories,
that same activist approach,
that same energy is being transplanted straight to AI.
And you see that already happening.
It's why, you know, ChatGPT will answer,
let's say, certain questions and not others.
It's why it gives you the canned speech about, you know,
whenever it starts with, as a large language model,
I cannot, you know,
basically means that somebody has reached in there
and told it that it can't talk about certain topics.
Do you think some of that is good?
So it's an interesting question.
So a couple observations.
So one is the people who find this the most frustrating
are the people who are worried about the murder robots.
Right?
So, and in fact, the so-called X-risk people, right,
they started with the term AI safety.
The term became AI alignment.
When the term became AI alignment
is when this switch happened from,
we're worried it's going to kill us all,
to we're worried about hate speech and misinformation.
The AI X-risk people have now renamed their thing,
AI-not-kill-everyoneism,
which I have to admit is a catchy term.
And they are very frustrated by the fact that the hate,
either the sort of activist driven hate speech
misinformation kind of thing is taking over,
which is what's happened, it's taken over.
The AI ethics field has been taken over
by the hate speech misinformation people.
You know, look, would I like to live in a world
in which like everybody was nice to each other all the time
and nobody ever said anything mean
and nobody ever used a bad word
and everything was always accurate and honest.
Like that sounds great.
Do I want to live in a world where there's like
a centralized thought police working
through the tech companies to enforce the view
of a small set of elites, where they're going to determine
what the rest of us think and feel? Like, absolutely not.
There could be a middle ground somewhere
like Wikipedia type of moderation.
There's moderation on Wikipedia
that is somehow crowdsourced
where you don't have centralized elites,
but it's also not completely just a free for all
because if you have the entirety of human knowledge
at your fingertips, you can do a lot of harm.
Like if you have a good assistant
that's completely uncensored,
they can help you build a bomb.
They can help you mess
with people's physical wellbeing, right?
Because that information is out there on the internet.
And so presumably there's, it would be,
you could see the positives in censoring some aspects
of an AI model when it's helping you commit literal violence.
Yeah.
And there's a section, later section of the essay
where I talk about bad people doing bad things.
Yes. Right.
Which, and there's a set of things
that we should discuss there.
What happens in practice is these lines,
as you alluded to already,
are not easy to draw.
And what I've observed in the social media version of this
is like the way I describe it as the slippery slope,
it's not a fallacy, it's an inevitability.
The minute you have this kind of activist personality
that gets in a position to make these decisions,
they take it straight to infinity.
Like it goes into the crazy zone,
like almost immediately and never comes back
because people become drunk with power, right?
And they look, if you're in the position to determine
what the entire world thinks and feels and reads and says,
like, you're going to take it.
And Elon has ventilated this with the Twitter files
over the last three months.
And it's just like crystal clear,
like how bad it got there.
Now, reason for optimism is what Elon is doing
with Community Notes.
So Community Notes is actually a very interesting thing.
So what Elon is trying to do with Community Notes
is he's trying to have it where there's only a Community Note
when people who have previously disagreed on many topics
agree on this one.
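A toy sketch of the bridging idea being described here, in Python. This is an illustration of the principle only, not X's actual Community Notes algorithm, which infers rater viewpoints by factorizing the full rating matrix; this version deliberately simplifies that to pairwise disagreement history.

```python
# Toy sketch of the "bridging" idea behind Community Notes: only show a note
# when raters who usually disagree with each other both rate it helpful.
# Illustration of the principle only, not the production algorithm.

def usually_disagree(rater_a, rater_b, past_ratings, min_overlap=3):
    """True if two raters diverged on most notes they both rated before."""
    shared = set(past_ratings[rater_a]) & set(past_ratings[rater_b])
    if len(shared) < min_overlap:
        return False
    disagreements = sum(
        past_ratings[rater_a][n] != past_ratings[rater_b][n] for n in shared
    )
    return disagreements / len(shared) > 0.5

def show_note(helpful_raters, past_ratings):
    """Surface the note only if some pair of 'helpful' voters usually disagrees."""
    return any(
        usually_disagree(a, b, past_ratings)
        for i, a in enumerate(helpful_raters)
        for b in helpful_raters[i + 1:]
    )

past = {
    "alice": {"n1": 1, "n2": 1, "n3": 1},
    "bob":   {"n1": 0, "n2": 0, "n3": 0},   # historically disagrees with alice
    "carol": {"n1": 1, "n2": 1, "n3": 1},   # historically agrees with alice
}
print(show_note(["alice", "carol"], past))  # False: same-side agreement only
print(show_note(["alice", "bob"], past))    # True: cross-divide agreement
```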
Yes, that's what I'm trying to get at: there
could be Wikipedia-like models or Community Notes
type of models, which allow you to essentially
either provide context or censor in a way
that resists the slippery-slope nature of power.
There's an entirely different approach here,
which is basically we have AIs that are producing content.
We could also have AIs that are consuming content, right?
And so one of the things that your assistant could do
for you is help you consume all the content, right?
And basically tell you when you're getting played.
So for example, I'm gonna want the AI that my kid uses,
right, to be very child safe.
And I'm gonna want it to filter for him
all kinds of inappropriate stuff
that he shouldn't be seeing just because he's a kid.
Yeah.
And you see what I'm saying is you can implement that.
Architecturally you could say
you can solve this on the client side, right?
Solving on the server side gives you an opportunity
to dictate for the entire world,
which I think is where you take the slippery slope to hell.
There's another architectural approach
which is to solve this on the client side,
which is certainly what I would endorse.
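A minimal sketch of the client-side architecture being proposed. All names here are hypothetical, not any real API: the model returns unfiltered output, and a policy the user (here, a parent) chose runs locally before anything is displayed.

```python
# Sketch of the client-side architecture: the model returns raw output, and a
# filter the *user* chose (e.g., a parent's child-safety policy) runs locally
# before anything is displayed. Names here are hypothetical, not a real API.

from typing import Callable

def child_safety_filter(text: str) -> str:
    # Stand-in policy: a real filter might itself be a small local model.
    blocked_topics = ("weapons", "gore")
    if any(topic in text.lower() for topic in blocked_topics):
        return "[filtered by your local policy]"
    return text

def client_chat(prompt: str,
                query_model: Callable[[str], str],
                display_filter: Callable[[str], str]) -> str:
    raw = query_model(prompt)    # server (or local model) output, unfiltered
    return display_filter(raw)   # policy enforced at the edge, per user

# Usage with a fake model; any real model call could be swapped in.
fake_model = lambda p: "Here is how medieval weapons were made..."
print(client_chat("tell me about castles", fake_model, child_safety_filter))
```

The design choice is the point: because the filter runs on the client, each household or user picks its own policy, rather than one policy being dictated from the server for everyone.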
It's AI risk number five,
will AI lead to bad people doing bad things?
I can just imagine language models
used to do so many bad things,
but the hope is there that you can have
large language models used to then defend against it
by more people, by smarter people,
by more effective people, skilled people,
all that kind of stuff.
Three-point argument on bad people doing bad things.
So number one, right, you can use the technology defensively
and we should be using AI to build broad spectrum vaccines
and antibiotics for bioweapons
and we should be using AI to hunt terrorists
and catch criminals and we should be doing
all kinds of stuff like that.
And in fact, we should be doing those things
even just to go eliminate risk from regular pathogens
that aren't constructed by an AI.
So there's the whole defensive set of things.
Second is we have many laws on the books
about the actual bad things, right?
So it is actually illegal to commit crimes,
to commit terrorist acts, to build pathogens
with the intent to deploy them to kill people.
And so we actually don't need new laws
for the vast majority of the scenarios.
We actually already have the laws on the books.
The third argument is the minute,
and this is sort of the foundational one
that gets really tough,
but the minute you get into this thing,
which you were kind of getting into, which is like, okay,
but don't you need censorship sometimes, right?
And don't you need restrictions sometimes?
It's like, okay, what is the cost of that?
And in particular in the world of open source, right?
And so is open source AI going to be allowed or not?
If open source AI is not allowed,
then what is the regime that's going to be necessary
legally and technically to prevent it from developing, right?
And here again is where you get into,
and people have proposed that these kinds of things,
you get into, I would say,
pretty extreme territory pretty fast.
Do we have a monitor agent on every CPU and GPU
that reports back to the government
what we're doing with our computers?
Are we seizing GPU clusters to get beyond a certain size?
And then by the way, how are we doing all that globally,
and if China is developing an LLM beyond the scale
that we think is allowable, are we going to invade?
And you have figures on the AI X-risk side
who are advocating potentially up to nuclear strikes
to prevent this kind of thing.
And so here you get into this thing,
and again, you could maybe say this is,
you could even say this is good, bad,
or indifferent or whatever,
but here's the comparison of nukes.
The comparison to nukes is very dangerous,
for a couple of reasons,
although we can come back to nuclear power.
For one thing, with nukes,
you could control plutonium, right?
You could track plutonium, and it was hard to come by.
AI is just math and code, right?
And it's in math textbooks,
and it's like there are YouTube videos
that teach you how to build it,
and there's open source.
There's already open source.
There's a 40-billion-parameter model running around already,
called Falcon, online, that anybody can download.
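For concreteness: Falcon-40B is published on Hugging Face under the id tiiuae/falcon-40b, so "anybody can download it" looks roughly like this, assuming the transformers library and hardware with enough memory to hold the weights.

```python
# Minimal sketch of "anybody can download it": pulling the open Falcon-40B
# weights from Hugging Face with the transformers library. Running it still
# takes serious hardware (tens of GB of accelerator memory), which is part
# of the thermodynamic counterargument discussed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b"  # the UAE-released open model discussed here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # halve memory vs. float32
    device_map="auto",            # spread layers across available devices
    trust_remote_code=True,       # Falcon shipped custom modeling code
)

inputs = tokenizer("Open source models mean", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```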
And so, okay, you walk down the logic path
that says we need to have guardrails on this,
and you find yourself in an authoritarian, totalitarian regime
of thought control and machine control
that would be so brutal
that you would have destroyed the society
that you're trying to protect.
And so I just don't see how that actually works.
So you have to understand,
my brain is going full steam ahead here
because I agree with basically everything you're saying
when I'm trying to play devil's advocate here.
Because, okay, you highlighted the fact
that there is a slippery slope to human nature.
The moment you censor something,
you start to censor everything.
That alignment starts out sounding nice,
but then you start to align to the beliefs
of some select group of people,
and then it's just your beliefs.
The number of people you're aligning to is smaller and smaller
as that group becomes more and more powerful.
Okay, but that just speaks to the fact that the people
that censor are usually the assholes,
and the assholes get richer.
I wonder if it's possible to do without that.
For AI, one way to ask this question
is do you think the baseline foundation models
should be open-sourced?
Like what Mark Zuckerberg is saying they want to do.
So look, I think it's totally appropriate
that companies that are in the business
of producing a product or service
should be able to have a wide range of policies
that they put, right?
And I'll just say, again, I want a heavily censored model
for my eight-year-old.
Like, I actually want that.
I would pay more money for the ones
more heavily censored than the one that's not, right?
And so there are certainly scenarios
where companies will make that decision.
Look, an interesting thing you brought up
is this really a speech issue.
One of the things that the big tech companies
are dealing with is that content generated from an LLM
is not covered under Section 230,
which is the law that protects internet platform companies
from being sued for the user-generated content.
And so it's actually, yes.
And so there's actually a question,
I think there's still a question,
which is can big American companies
actually field generative AI at all?
Or is the liability actually going to just ultimately
convince them that they can't do it?
Because the minute the thing says something bad,
and it doesn't even need to be hate speech,
it could just be like an inaccurate,
it could hallucinate a product detail on a vacuum cleaner,
and all of a sudden the vacuum cleaner company
sues for misrepresentation.
And there's an asymmetry there, right?
Because the LLM's going to be producing billions
of answers to questions,
and it only needs to get a few wrong to have-
The laws have to get updated really quick here.
Yeah, and nobody knows what to do with that, right?
So anyway, there are big questions
around how companies operate at all.
So we can talk about those.
But then there's this other question of like,
okay, the open source, so what about open source?
And my answer to your question is kind of like,
obviously, yes, the models,
there has to be full open source here
because to live in a world in which
that open source is not allowed
is a world of draconian speech control,
human control, machine control.
I mean, you know, black helicopters
with jackbooted thugs coming out,
rappelling down and seizing your GPUs.
No, no, I'm 100% serious.
You're saying the slippery slope always leads there.
No, no, no, no, no, no.
That's what's required to enforce it.
Like, how will you enforce a ban on open source?
No, you could add friction to it.
Like, harder to get the models,
because people will always be able to get the models,
but it'll be more in the shadows, right?
The leading open source model right now is from the UAE.
Like, the next time they do that, what do we do?
Like, the 14-year-old in Indonesia
comes out with a breakthrough model.
You know, we talked about most great software
comes from a small number of people.
Some kid comes out with some big new breakthrough
in quantization or something,
and like, what are we gonna do,
invade Indonesia and arrest him?
It seems like in terms of size of models
and effectiveness of models,
the big tech companies will probably lead the way
for quite a few years, and the question is
what policies they should use.
The kid in Indonesia should not be regulated,
but should Google, Meta, Microsoft, OpenAI be regulated?
Well, so, but this goes, okay,
so when does it become dangerous, right?
Is the danger that it's, quote,
as powerful as the current leading commercial model,
or is it that it is just at some other arbitrary threshold?
And then, by the way, like, look, how do we know?
Like, what we know today is that you need, like,
a lot of money to, like, train these things,
but there are advances being made every week
on training efficiency and, you know, data,
all kinds of synthetic data, like the thing
we were talking about. Maybe some kid figures
out a way to auto-generate synthetic data.
And that's gonna change everything.
Yeah, exactly, and so, like, sitting here today,
like, the breakthrough just happened, right?
You made this point, like, the breakthrough just happened,
so we don't know what the shape
of this technology is gonna be.
I mean, the big shock here is that,
you know, some number of billions of parameters
basically represents at least a very big percentage
of human thought, like, who would have imagined that?
And then there's already work underway.
There was just this paper that just came out
that basically takes a GPT-3 scale model
and compresses it down to run on a single 32-core CPU.
Like, who would have predicted that?
Yeah.
You know, some of these models now,
you can run on Raspberry Pis.
Like, today, they're very slow,
but, you know, maybe there'll be a version
with real performance, because here we're back to it:
it's math and code, it's math, code, and data.
It's bits.
Marc's just, like, walked away at this point.
Screw it.
I don't know what to do with this.
You guys created this whole internet thing.
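(To make the quantization idea above concrete, here is a minimal sketch of 8-bit post-training weight quantization, the family of tricks that shrinks models enough to run on modest CPUs or a Raspberry Pi. This is a toy illustration only, not the method of the paper Marc mentions; the function names and tensor sizes are invented for the example.)

```python
# Minimal sketch: symmetric 8-bit post-training weight quantization.
# Toy example; real systems use finer-grained (per-channel, 4-bit) schemes.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 plus one per-tensor scale factor."""
    scale = np.abs(w).max() / 127.0           # largest weight maps to +/-127
    q = np.round(w / scale).astype(np.int8)   # 4x smaller than float32
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction used at inference time."""
    return q.astype(np.float32) * scale

# One hypothetical transformer layer's weight matrix.
w = np.random.randn(4096, 4096).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"{w.nbytes / 1e6:.0f} MB -> {q.nbytes / 1e6:.0f} MB, mean abs error {err:.4f}")
```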
Yeah, yeah, I mean, I'm a huge believer in open source here.
So, see, here's my argument, my full argument is,
AI is gonna be like air, it's gonna be everywhere.
Like, this is just gonna be everywhere, it already is.
It's gonna be in textbooks, and kids are gonna grow up
knowing how to do this, and it's just gonna be a thing.
It's gonna be in the air, and you can't, like,
pull this back any more than you can pull back air.
And so you just have to figure out
how to live in this world, right?
And then that's where I think, like,
all this hand-wringing about AI risk
is basically a complete waste of time,
because the effort should go into,
okay, what is the defensive approach?
And so if you're worried about AI-generated pathogens,
the right thing to do is to have a permanent
Operation Warp Speed, right, funded lavishly.
Let's do a Manhattan Project for biological defense, right?
And let's build AIs, and let's have, like,
broad-spectrum vaccines where, like,
we're insulated from every pathogen, right?
And the interesting thing is,
because it's software, a kid in his basement, a teenager,
could build, like, a system that defends against
the worst of it. And to me,
defense is super exciting.
It's, like, if you believe in the good of human nature,
that most people want to do good,
to be the savior of humanity is really exciting.
Yes.
Not, okay, that's a dramatic statement,
but, like, to help people, yeah.
Yeah, okay, what about, just to jump around,
what about the risk of, will AI lead
to crippling inequality?
You know, because we're kind of saying
everybody's life will become better.
Is it possible that the rich get richer here?
Yeah, so this actually, ironically, goes back to Marxism.
So the core claim of Marxism,
right, basically, was that the owners of capital
would basically own the means of production,
and then over time, they would basically
accumulate all the wealth, and the workers would be,
you know, getting nothing in return,
because they wouldn't be needed anymore, right?
So, Marx was very worried about what he called mechanization,
or what later became known as automation,
and that, you know, the workers would be immiserated,
and the capitalists would end up with everything.
And so, this was one of the core principles of Marxism.
Of course, it turned out to be wrong
about every previous wave of technology.
The reason it turned out to be wrong
about every previous wave of technology
is that the way that the self-interested owner
of the machines makes the most money
is by providing the production capability
in the form of products and services
to as many people, as many customers, as possible, right?
The largest market, and this is one of those funny things
where every CEO knows this intuitively,
and yet it's like hard to explain from the outside.
The way you make the most money in any business
is by selling to the largest market you can possibly get to.
The largest market you can possibly get to
is everybody on the planet.
And so, every large company does
everything that it can to drive down prices,
to be able to get volumes up,
to be able to get to everybody on the planet.
And that happened with everything from electricity,
it happened with telephones, it happened with radio,
it happened with automobiles, it happened with smartphones,
it happened with PCs, it happened with the internet,
it happened with mobile broadband,
it's happened, by the way, with Coca-Cola,
and it's happened with like every,
basically every industrially produced good or service.
You wanna drive it to the largest possible market.
And then as proof of that, it's already happened, right?
Which is the early adopters of like ChatGPT and Bing
are not like Exxon and Boeing,
they're your uncle and your nephew, right?
It's just like, it's either freely available online
or it's available for 20 bucks a month or something,
but these things went,
this technology went mass market immediately.
And so look, the owners of the means of production,
whoever does this,
as I mentioned on the trillion-dollar question,
there are people who are gonna get really rich doing this,
producing these things,
but they're gonna get really rich
by taking this technology to the broadest possible market.
So yes, they'll get rich,
but they'll get rich having a huge positive impact on...
Yeah, making the technology available to everybody.
And again, smartphones, same thing.
So there's this amazing kind of twist in business history,
which is you cannot spend $10,000 on a smartphone, right?
You can't spend $100,000, you can't spend a million,
like I would buy the million dollar smartphone,
like I'm signed up for it.
Like, suppose a million-dollar smartphone
was much better than the thousand-dollar smartphone,
I'm there to buy it. It doesn't exist.
Why doesn't it exist?
Apple makes so much more money driving the price further
down from a thousand dollars
than they would trying to harvest the high end, right?
And so it's just this repeating pattern you see
over and over again,
and what's great about it is
you do not need to rely on anybody's
enlightened generosity to do this, right?
You just need to rely on capitalist self-interest.
What about AI taking our jobs?
Yeah, so very, very similar thing here.
There's sort of a, there's a core fallacy,
which again was very common in Marxism,
which is what's called the lump of labor fallacy.
And this is sort of the fallacy that there is
only a fixed amount of work to be done in the world.
And if it's all being done today by people,
and then if machines do it,
there's no other work to be done by people.
And that's just a completely backwards view
on how the economy develops and grows,
because what happens is not in fact that.
What happens is the introduction of technology
into a production process causes prices to fall.
As prices fall, consumers have more spending power.
As consumers have more spending power,
they create new demand.
That new demand then causes capital and labor
to form into new enterprises to satisfy new wants and needs.
And the result is more jobs at higher wages.
So new wants and needs.
The worry is that the creation of new wants and needs
at a rapid rate will mean there's a lot of turnover in jobs,
so people will lose jobs.
Just the actual experience of losing a job
and having to learn new things and new skills
is painful for the individual.
Well, two things.
One is that new jobs are often much better.
So this actually came up,
that there was this panic about a decade ago
and all the truck drivers are gonna lose their jobs, right?
And number one, that didn't happen,
because we haven't figured out a way
to actually automate that yet.
But the other thing was, like, truck driving,
I grew up in a town that basically consisted
of a truck stop, right?
And I knew a lot of truck drivers.
And truck drivers live a decade shorter than everybody else.
It's actually a very dangerous job.
Literally, they have higher rates of skin cancer
on the left side of their body
from being in the sun all the time,
and the vibration of being in the truck
is actually very damaging to your physiology.
And, perhaps partially because of that,
there's actually a shortage of people
who wanna be truck drivers.
Yeah, the question you always wanna ask somebody like that
is do you want your kid to be doing this job?
And most of them will tell you no.
I want my kid to be sitting in a cubicle somewhere
where they don't have this,
where they don't die 10 years earlier.
And so the new jobs, number one,
the new jobs are often better,
but you don't get the new jobs
until you go through the change.
And then to your point, the training thing,
it's always the issue is can people adapt?
And again, here you need to imagine living in a world
in which everybody has the AI assistant capability, right?
To be able to pick up new skills much more quickly
and be able to have a machine to work with
to augment their skills.
It's still gonna be painful,
but that's the process of life.
It's painful for some people.
I mean, there's no question it's painful for some people.
Yes, it's not, again, I'm not a utopian on this,
and it's not like it's positive for everybody in the moment,
but it has been overwhelmingly positive for 300 years.
I mean, look, this concern has played out
for literally centuries.
And this is the story of the Luddites.
You may remember there was a panic in the 2000s
that outsourcing was gonna take all the jobs.
There was a panic in the 2010s
that robots were gonna take all the jobs.
In 2019, before COVID, we had more jobs at higher wages,
both in the country and in the world,
than at any point in human history.
And so the overwhelming evidence is that the net gain here
is just wildly positive.
And most people overwhelmingly come out the other side
being huge beneficiaries of this.
So you write that the single greatest risk,
this is the risk you're most convinced by,
the single greatest risk of AI
is that China wins global AI dominance,
and we, the United States, and the West do not.
Can you elaborate?
Yeah, so this is the other thing,
which is a lot of the sort of AI risk debates today
sort of assume that we're the only game in town, right?
And so we have the ability to kind of sit
in the United States and criticize ourselves
and have our government beat up on our companies,
and figure out a way to restrict what our companies can do.
And we're gonna ban this and ban that,
restrict this and do that.
And then there's this other force out there
that doesn't believe we have any power
over them whatsoever.
And they have no desire to sign up for whatever rules
we decide to put in place.
And they're gonna do whatever it is they're gonna do,
and we have no control over it at all.
And it's China,
and specifically the Chinese Communist Party.
And they have a completely publicized, open plan
for what they're gonna do with AI.
And it is not what we have in mind.
And not only do they have that as a vision
and a plan for their society,
but they also have it as a vision
and plan for the rest of the world.
So their plan is what, surveillance?
Yeah, authoritarian control.
So authoritarian population control.
Good old fashioned communist authoritarian control,
and surveillance and enforcement,
and social credit scores and all the rest of it.
And you are gonna be monitored and metered
within an inch of everything all the time.
And it's basically the end of human freedom,
and that's their goal.
And they justify it on the basis of
that's what leads to peace.
And you're worried that regulating in the United States
will halt progress enough to where
the Chinese government would win that race?
So their plan, yes, yes.
And the reason for that is they,
and again, they're very public on this.
Their plan is to proliferate their approach
around the world.
And they have this program called the Digital Silk Road,
which is building on their Belt and Road investment program.
And they've been laying networking infrastructure
all over the world with their 5G work,
right, with their company Huawei.
So they've been laying all this fabric,
financial and technological fabric, all over the world.
And their plan is to roll out their vision of AI
on top of that, and to have every other country
be running their version.
And then if you're a country prone to authoritarianism,
you're gonna find this to be an incredible way
to become more authoritarian.
If you're a country, by the way, not prone to authoritarianism
you're gonna have the Chinese Communist Party
running your infrastructure and having back doors into it.
Right?
Which is also not good.
What's your sense of where they stand
in terms of the race towards super intelligence
as compared to the United States?
Yeah, so good news is they're behind,
but bad news is they, let's just say they get access
to everything we do.
So they're probably a year behind at each point in time,
but they get downloads, I think,
of basically all of our work on a regular basis
through a variety of means.
And they are, we'll see, they're at least putting out
reports. They just put out a report last week
of a GPT-3.5 analog.
I forget what it's called,
but they put out this report of this LLM they did.
You know, when OpenAI puts out a GPT,
one of the ways they test it is they run it
through standardized exams, like the SAT, right?
That's how you can kind of gauge how smart it is.
And so the Chinese report, they ran their LLM
through the Chinese equivalent of the SAT,
and it includes a section on Marxism
and a section on Mao Zedong thought,
and it turns out their AI does very well
on both of those topics.
Right?
So like-
Ah, this alignment thing.
Communist AI, right?
Like literal communist AI, right?
And so that's their vision. You know,
you can just imagine like you're in school,
you know, you're a kid 10 years from now in Argentina
or in Germany or in who knows where, Indonesia,
and you ask the AI to explain to you
like how the economy works and it gives you
the most cheery, upbeat explanation of Chinese style
communism you've ever heard, right?
So like the stakes here are like really big.
Well, as we've been talking about,
my hope is not just in the United States,
but in the kid in his basement,
the open source LLM.
I don't know if I trust large centralized institutions
with super powerful AI, no matter what their ideology is.
Power corrupts.
You've been investing in tech companies
for about, let's say 20 years,
about 15 of them with Andreessen Horowitz.
What interesting trends in tech
have you seen over that time?
Let's just talk about companies
and just the evolution of the tech industry.
I mean, the big shift over 20 years has been
that tech used to be a tools industry.
Basically from like 1940 through to about 2010,
almost all the big successful companies
were picks and shovels companies.
So PC, database, smartphone, you know,
some tool that somebody else would pick up and use.
Since 2010, most of the big wins have been in applications.
So a company that starts in an existing industry
and goes directly to the customer in that industry.
And then, you know, the early examples there
were like Uber and Lyft and Airbnb.
And then that model is kind of elaborating out.
The AI thing is actually a reversion on that for now.
Cause like most of the AI business right now
is actually in cloud provision of APIs
for other people to build on.
But the big thing will probably be in apps.
Yeah, I think most of the money,
I think probably will be in whatever,
yeah, your AI financial advisor or your AI doctor
or your AI lawyer, or, you know,
take your pick of whatever the domain is.
And what's interesting is, you know,
the Valley kind of does everything.
Our entrepreneurs kind of elaborate every possible idea.
And so there will be a set of companies that like make AI
something that can be purchased and used by large law firms.
And then there will be other companies
that just go direct to market as an AI lawyer.
What advice could you give for a startup founder?
You've just seen so many successful companies,
and so many companies that fail.
What advice could you give to a startup founder,
someone who wants to build the next super successful startup
in the tech space, the Googles, the Apples, the Twitters?
Yeah, so the great thing about the really great founders
is they don't take any advice.
So, if you find yourself listening to advice,
maybe you shouldn't do it.
Well, that's actually just to elaborate on that.
If you could also speak to great founders too,
like what makes a great founder?
So, what makes a great founder is super smart,
coupled with super energetic,
coupled with super courageous.
I think it's some combination of those three.
Intelligence, passion, and courage.
The first two are traits,
and the third one is a choice, I think.
Courage is a choice.
Well, because courage is a question of pain tolerance.
So, how many times are you willing
to get punched in the face before you quit?
Yeah.
And here's maybe the biggest thing people don't understand
about what it's like to be a startup founder is,
it gets very romanticized, right?
And even when they fail, it still gets romanticized
about what a great adventure it was.
But the reality of it is most of what happens
is people telling you no,
and then they usually follow that with you're stupid, right?
No, I will not come to work for you,
and I will not leave my cushy job at Google
to come work for you.
No, I'm not gonna buy your product.
No, I'm not gonna run a story about your company.
No, I'm not this, that, the other thing.
And so, a huge amount of what people have to do
is just get used to just getting punched.
And the reason people don't understand this
is because when you're a founder,
you cannot let on that this is happening
because it will cause people to think that you're weak
and they'll lose faith in you.
So, you have to pretend that you're having a great time
when you're dying inside, right?
Just misery.
But why did they do it?
Why did they do it?
Yeah, that's the thing.
This is actually one of the conclusions I've come to,
which is that for most of these people,
on a risk-adjusted basis, it's probably an irrational act.
They could probably be more financially successful
on average if they just got like a real job
at a big company.
But some people just have an irrational need
to do something new and build something for themselves.
And some people just can't tolerate having bosses.
Oh, here's the fun thing:
how do you reference check founders, right?
The normal way you reference check when you're hiring somebody
is you call the bosses and you find out
if they were good employees
and now you're trying to reference check Steve Jobs, right?
And it's like, oh God, he was terrible.
He was a terrible employee.
He never did what we told him to do.
Yeah.
So, what's a good reference?
Do you want the previous boss to actually say
that they never did what you told them to do?
That might be a good thing.
Well, ideally what you want is,
I would like to go to work for that person.
He worked for me here, and now I'd like to work for him.
Now, unfortunately, most people can't,
their egos can't handle that.
So, they won't say that, but that's the ideal.
What advice would you give to those folks
in the space of intelligence, passion and courage?
So, I think the other big thing is
you see people sometimes who say,
I want to start a company
and then they kind of work through the process
of coming up with an idea.
And generally, those don't work as well as the case
where somebody has the idea first
and then they kind of realize
that there's an opportunity to build a company
and then they just turn out to be the right kind of person
to do that.
When you say idea, do you mean long-term big vision
or do you mean specifics of like product?
I would say specifics, like specifically what the product is,
because for the first five years,
you don't get to have vision.
You just got to build something people want
and you got to figure out a way to sell it to them, right?
It's very practical, or you never get to the big vision.
So, the first part, you have an idea of a set of products
or the first product that can actually make some money.
Yeah, like the first product's got to work,
by which I mean like it has to technically work,
but then it has to actually fit into the category
in the customer's mind of something that they want.
And then, by the way, the other part is
they have to be willing to pay for it.
Like somebody's got to pay the bills.
And so, you've got to figure out how to price it
and whether you can actually extract the money.
So, usually it is much more predictable.
Success is never predictable, but it's more predictable
if you start with a great idea
and then back into starting the company.
So, this is what we did.
You know, we had Mosaic before we had Netscape.
The Google guys had the Google search engine
working at Stanford, right?
There's tons of examples.
You know, Pierre Omidyar had eBay working
before he left his previous job.
So, I really love that idea of just having a thing,
a prototype that actually works
before you even begin to remotely scale.
Yeah, by the way, it's also far easier to raise money, right?
Like the ideal pitch that we receive is
here's the thing that works.
Would you like to invest in our company or not?
Like that's so much easier than here's 30 slides
with a dream, right?
And then we have this concept called the idea maze,
which, apologies, a friend of ours came up with
when he was with us.
So, then there's this thing, this goes to mythology,
which is, you know, there's a mythology that these ideas
kind of arrive like magic,
or people kind of stumble into them.
It's like eBay with the Pez dispensers or something.
The reality usually with the big successes
is that the founder has been chewing on the problem
for five or 10 years before they start the company.
And they often worked on it in school
or they even experimented on it when they were a kid.
And they've been kind of training up
over that period of time to be able to do the thing.
So, they're like a true domain expert.
And it sort of sounds like mom and apple pie,
which is, yeah, you want to be a domain expert
in what you're doing, but you would, you know,
the mythology is so strong of like,
oh, I just like had this idea in the shower
and now I'm doing it like, it's generally not that.
No, because maybe in the shower
you had the exact product implementation details,
but yeah, usually you're going to spend years,
if not decades, thinking about everything around that.
Well, we call it the idea maze
because the idea maze basically is like,
there's all these permutations.
Like for any idea,
there's like all these different permutations.
Who should the customer be?
What shape form should the product have
and how should we take it to market and all these things?
And so, the really smart founders
have thought through all these scenarios
by the time they go out to raise money.
And they have like detailed answers
on every one of those fronts
because they put so much thought into it.
The sort of more haphazard founders
haven't thought about any of that.
And it's the detailed ones who tend to do much better.
So, how do you know when to take a leap?
If you have a cushy job or happy life?
I mean, the best reason is
just because you can't tolerate not doing it, right?
Like this is the kind of thing
where if you have to be advised into doing it,
you probably shouldn't do it.
And so, it's probably the opposite,
which is you just have such a burning sense
of this has to be done.
I have to do this.
I have no choice.
What if it's gonna lead to a lot of pain?
It's gonna lead to a lot of pain.
I think that's it.
What if it means losing sort of social relationships
and damaging your relationship with loved ones
and all that kind of stuff?
Yeah, look, so it's gonna put you
in a social tunnel for sure, right?
So, you're gonna like, you know.
There's this game you can play on Twitter,
which is you can drop any whiff of the idea
that there's basically no such thing
as work-life balance and that people
should actually work hard, and everybody gets mad.
But the truth is all the successful founders
are working 80-hour weeks,
and they form very strong social bonds
with the people they work with.
They tend to lose a lot of friends on the outside
or put those friendships on ice.
Like, that's just the nature of the thing.
You know, for most people, that's worth the trade-off.
The advantage maybe younger founders have
is maybe they have less to give up.
For example, if they're not married yet
or don't have kids yet,
that's an easier thing to bite off.
Can you be an older founder?
Yeah, you definitely can, yeah, yeah.
Many of the most successful founders
are second, third, fourth-time founders.
They're in their 30s, 40s, 50s.
The good news of being an older founder is you know more,
and you know a lot more about what to do,
which is very helpful.
The problem is, okay, now you've got a spouse
and a family and kids,
and you've gotta go to the baseball game,
and you can't always go to the baseball game, you know,
and so it gets harder.
Life is full of difficult choices, Marc Andreessen.
You've written a blog post on what you've been up to.
You wrote this in October 2022.
Quote, mostly I try to learn a lot.
For example, the political events of 2014 to 2016
made clear to me that I didn't understand politics at all,
referencing maybe some of this book here.
So I deliberately withdrew from political engagement
and fundraising and instead read my way back into history
and as far to the political left
and political right as I could.
So just high-level question,
what's your approach to learning?
Yeah, so basically, I would say I'm an autodidact.
So it's going down rabbit holes.
It's a combination of,
I kind of alluded to it in that quote,
a combination of breadth and depth.
I go broad by the nature of what I do,
but then I tend to go deep
in a rabbit hole for a while, read everything I can,
and then come out of it.
And I might not revisit that rabbit hole
for another decade.
And in that blog post that I recommend people go check out,
you actually list a bunch of different books
that you recommend on different topics
on the American left and the American right.
It's just a lot of really good stuff.
The best explanation for the current structure
of our society and politics, you give two recommendations,
four books on the Spanish Civil War,
six books on deep history of the American right,
comprehensive biographies of Adolf Hitler,
one of which I read and I can recommend,
six books on the deep history of the American left,
so the American right and American left,
looking at the history to give you the context.
Biography of Lenin, two of them on the French Revolution.
I actually have never read a biography on Lenin.
Maybe that will be useful.
Everything's been so Marx-focused.
The Sebestyen biography of Lenin is extraordinary.
Victor Sebestyen, okay.
It'll blow your mind, yeah.
So it's still useful to read.
It's incredible, yeah, it's incredible.
I actually think it's the single best book
on the Soviet Union.
So the perspective of Lenin might be the best way
to look at the Soviet Union, versus Stalin, versus Marx.
Very interesting.
So two books on fascism and anti-fascism
by the same author, Paul Gottfried.
A brilliant book on the nature of mass movements
and collective psychology,
the definitive work on intellectual life
under totalitarianism, The Captive Mind,
the definitive work on the practical life
under totalitarianism.
There's a bunch, there's a bunch.
And the single best book,
first of all, the list here is just incredible,
but you say the single best book I have found
on who we are and how we got here is The Ancient City
by Numa Denis Fustel de Coulanges.
I like it.
What did you learn about who we are
as a human civilization from that book?
Yeah, so this is a fascinating book.
This one's free, it's free, by the way.
It's a book from the 1860s.
You can download it or you can buy printed copies of it.
But it was this guy who was a professor
at the Sorbonne in the 1860s.
And he was apparently a savant on antiquity,
on Greek and Roman antiquity.
And the reason I say that is because his sources
are 100% original Greek and Roman sources.
So he wrote basically a history of Western civilization
from on the order of 4,000 years ago
to basically the present times,
entirely working on original Greek and Roman sources.
And what he was specifically trying to do
was he was trying to reconstruct,
from the stories of the Greeks and the Romans,
he was trying to reconstruct what life in the West was like
before the Greeks and the Romans,
which was in the civilization known as the Indo-Europeans.
And the short answer is,
and this is sort of circa 2000 BC to sort of 500 BC,
kind of that 1,500 year stretch
where civilization developed.
And his conclusion was basically cults.
They were basically cults.
And civilization was organized into cults.
And the intensity of the cults was like a million fold
beyond anything that we would recognize today.
It was a level of all-encompassing belief
and action around religion
that was at a level of extremeness
that we wouldn't even recognize it.
And so specifically he tells the story of,
basically there were three levels of cults.
There was the family cult, the tribal cult,
and then the city cult as society scaled up.
And then each cult was a joint cult of family gods,
which were ancestor gods, and then nature gods.
And then your bonding into a family, a tribe, or a city
was based on your adherence to that religion.
People who were not of your family, tribe, or city
worshiped different gods,
which gave you not just the right but the responsibility
to kill them on sight, right?
So they were serious about their cults.
Hardcore.
By the way, shocking development,
I did not realize there's zero concept of individual rights.
Like even up through the Greeks and even in the Romans,
they didn't have the concept of individual rights.
The idea that as an individual, you have some right,
it's just like, nope, right?
And you look back and you're just like, wow,
that's just crazily fascist to a degree
that we wouldn't recognize today.
But it's like, well, they were living
under extreme pressure for survival.
And the theory goes, you could not have people
running around making claims to individual rights
when you're just trying to get your tribe
through the winter, right?
You need hardcore command and control.
And actually, through a modern political lens,
those cults were basically both fascist and communist.
They were fascist in terms of social control,
and then they were communist in terms of economics.
But you think that pull towards cults
is fundamentally within us?
Well, so my conclusion from this book,
so the way we naturally think about the world
we live in today is like,
we basically have such an improved version
of everything that came before us, right?
Like we have basically, we've figured out all these things
around morality and ethics and democracy
and all these things.
And like, they were basically stupid and retrograde,
and we're like smart and sophisticated,
and we've improved all this.
After reading that book, I now believe in many ways
the opposite, which is no, actually,
we are still running in that original model.
We're just running in an incredibly diluted version of it.
So we're still running basically in cults.
It's just our cults are at like a 1,000th
or a 1,000,000th of the level of intensity, right?
And so just to take religions,
the modern experience of a Christian in our time,
even somebody who considers himself a devout Christian
is just a shadow of the level of intensity
of somebody who belonged to a religion back in that period.
And then by the way, it goes back to our AI discussion,
we then sort of endlessly create new cults.
Like we're trying to fill the void, right?
And the void is a void of bonding.
Okay, anybody living today,
transported into that era,
would view it as just completely intolerable
in terms of like the loss of freedom
and the level of basically fascist control.
However, every single person in that era,
and he really stresses this,
they knew exactly where they stood.
They knew exactly where they belonged.
They knew exactly what their purpose was.
They knew exactly what they needed to do every day.
They knew exactly why they were doing it.
They had total certainty about their place in the universe.
So the question of meaning, the question of purpose
was very distinctly clearly defined for them.
Absolutely, overwhelmingly, indisputably, undeniably.
As we turn the volume down on the cultism,
the search for meaning
starts getting harder and harder.
Yes, because we don't have that.
We are ungrounded, we are uncentered,
and we all feel it, right?
And that's why we still reach for religion.
It's why people start to place, let's say,
a faith in science, maybe beyond where they should put it.
And by the way, sports teams,
they're like a tiny little version of a cult,
and the Apple keynotes are a tiny little version of a cult.
Right?
And politics, there's full-blown cults on both sides
of the political spectrum right now,
operating in plain sight.
But still not full-blown,
compared to what it was in the past.
Compared to what it used to be.
We would today consider them full-blown,
but yes, they're at, I don't know,
a 100,000th or something of the intensity
of what people had back then.
So we live in a world today
that in many ways is more advanced, and moral, and so forth.
And it's certainly a lot nicer,
a much nicer world to live in.
But we live in a world that's very washed out.
It's like everything has become very colorless and gray,
as compared to how people used to experience things.
Which is, I think, why we're so prone to reach for drama.
Because there's something in us,
deeply evolved, where we want that back.
And I wonder where it's all headed,
as we turn the volume down more and more.
What advice would you give to young folks today?
In high school, in college,
how to be successful in their career,
how to be successful in their life?
Yeah, so the tools that are available today are just,
like, I sometimes bore kids by describing
what it was like to go look up a book,
to try to discover a fact in the old days,
the 1970s, 1980s, to go to the library
and the card catalog and the whole thing.
You go through all that work,
and then the book is checked out,
and you have to wait two weeks.
Like, to be in a world not only
where you can get the answer to any question,
but also the world now, the AI world,
where you've got the assistant
that will help you do anything,
help you teach, learn anything.
Your ability both to learn and also to produce
is just, I don't know, a million-fold
beyond what it used to be.
I have a blog post I've been wanting to write,
which I call, Where Are the Hyperproductive People?
Good question.
With these tools, there should be authors
that are writing hundreds or thousands
of outstanding books.
Well, with the authors, there's a consumption question, too.
Well, maybe not, maybe not.
You're right.
So the tools are much more powerful.
They're getting much more powerful every day.
There are artists, musicians, right?
Why aren't musicians producing
a thousand times the number of songs, right?
Like, the tools are spectacular.
So what's the explanation?
And by way of advice,
is motivation starting to be turned down
a little bit, or what?
I think it might be distraction.
Distraction.
It's so easy to just sit and consume
that I think people get distracted from production.
But if you wanted to, as a young person,
if you wanted to really stand out,
you could get on a hyperproductivity curve very early on.
There's a great story in Roman history
of Pliny the Elder, who was this legendary statesman,
died in the Vesuvius eruption,
trying to rescue his friends.
But he was famous both for being a savant,
basically a polymath, and for being an author.
And he wrote, apparently, hundreds of books,
most of which have been lost.
But he wrote all these encyclopedias.
And he literally would be reading and writing
all day long, no matter what else was going on.
So he would travel with four slaves,
and two of them were responsible for reading to him,
and two of them were responsible for taking dictation.
And so he'd be going cross-country,
and literally, he would be writing books all the time.
And apparently, they were spectacular.
There's only a few that have survived,
but apparently, they were amazing.
So there's a lot of value to being somebody
who finds focus in this life.
Yeah, and there are examples.
There's this guy, Judge, what's his name, Posner,
who wrote 40 books, and was also a great federal judge.
Our friend Balaji, I think, is like this.
He's one of these, where his output is just prodigious.
And so it's like, yeah, I mean, with these tools, why not?
And I think we're at this interesting freeze-frame moment,
where these tools are now in everybody's hands,
and everybody's just staring at them,
trying to figure out what to do, the new tools.
We have discovered fire,
and we're trying to figure out how to use it to cook.
Yeah, right.
You told Tim Ferriss that the perfect day is caffeine
for 10 hours, and alcohol for four hours.
You didn't think I'd be mentioning this, did you?
It balances everything out perfectly, as you said.
Oh, so perfect.
So let me ask, what's the secret to balance,
and maybe to happiness in life?
I don't believe in balance,
so I'm the wrong person to ask about that.
Can you elaborate why you don't believe in balance?
I mean, maybe it's just, and look,
I think people are wired differently,
so I think it's hard to generalize this kind of thing,
but I am much happier and more satisfied
when I'm fully committed to something,
so I'm very much in favor of imbalance, yeah.
Imbalance, and that applies to work, to life, to everything.
Yeah, now I happen to have whatever twist
of personality traits leads that
in non-destructive directions,
including the fact that I've actually,
I now no longer do the 10-4 plan, I've stopped drinking.
I do the caffeine, but not the alcohol,
so there's something in my personality where
whatever maladaptive tendencies I have incline me
towards productive things, not unproductive things.
So you're one of the wealthiest people in the world.
What's the relationship between wealth and happiness?
Oh.
Money and happiness.
So I think happiness,
I don't think happiness is the thing.
To strive for?
I think satisfaction is the thing.
That just sounds like happiness, but turned down a bit.
No, deeper.
So happiness is a walk in the woods at sunset,
an ice cream cone, a kiss.
The first ice cream cone is great.
The thousandth ice cream cone, not so much.
At some point, the walks in the woods get boring.
What's the distinction between happiness and satisfaction?
I think satisfaction is a deeper thing,
which is like having found a purpose
and fulfilling it, being useful.
So just something that permeates all your days,
just this general contentment of being useful.
That I'm fully satisfying my faculties,
that I'm fully delivering on the gifts
that I've been given, that I'm net making the world better,
that I'm contributing to the people around me,
and that I can look back and say, wow, that was hard,
but it was worth it.
I think generally it seems to lead people
to a better state than the pursuit of pleasure,
pursuit of quote unquote happiness.
Doesn't money have anything to do with that?
I think the founding fathers in the US
threw this off kilter when they used the phrase
pursuit of happiness. I think they should have said...
Pursuit of satisfaction?
Had they said pursuit of satisfaction,
we might live in a better world today.
They could have elaborated on a lot of things.
They could have tweaked the Second Amendment.
I think they were smarter than they realized.
They said, you know what, we're gonna make it ambiguous
and let these humans figure out the rest,
these tribal cult-like humans figure out the rest.
But money empowers that?
So I think, and I think Elon is,
I don't think I'm even a great example,
but I think Elon would be a great example of this,
which is like, look, he's a guy who, from the day
he started making money at all,
just plows it into the next thing.
And so I think money is definitely an enabler
for satisfaction.
Money applied to happiness leads people
down very dark paths, very destructive avenues.
Money applied to satisfaction, I think,
could be, it is a real tool.
By the way, I always look at Elon
as the case study for this behavior,
but the other thing that really made me think
is Larry Page was asked one time
what his approach to philanthropy was,
and he said, oh, my philanthropic plan
is just give all the money to Elon.
Right?
Well, let me actually ask you about Elon.
What are your, you've interacted with quite a lot
of successful engineers and business people.
What do you think is special about Elon?
We talked about Steve Jobs.
What do you think is special about him
as a leader, as an innovator?
Yeah, so the core of it is he's back to the future.
So he is doing the most leading edge things in the world,
but with a really deeply old school approach.
And so to find comparisons to Elon,
you need to go to like Henry Ford and Thomas Watson
and Howard Hughes and Andrew Carnegie, right?
Leland Stanford, John D. Rockefeller, right?
You need to go to the,
what were called the bourgeois capitalists,
like the hardcore business owner operators
who basically built, you know,
industrialized society, Vanderbilt.
And it's a level of hands-on commitment
and depth in the business,
coupled with an absolute priority on truth
and on getting science and technology
down to first principles.
It's just unbelievably absolute.
His ideal is that he's only ever talking to engineers.
Like he does not tolerate bullshit.
He has less bullshit tolerance than anybody I've ever met.
He wants ground truth on every single topic.
And he runs his businesses directly day to day,
devoted to getting to ground truth in every single topic.
So you think it was a good decision for him to buy Twitter?
I have developed a view in life:
do not second-guess Elon Musk.
I know this is going to sound crazy and unfounded, but.
Well, I mean, he's got quite a track record.
I mean, look, the car was a crazy idea.
He's done a lot of things that seem crazy.
Starting a new car company in the United States of America,
the last time somebody really tried to do that
was the 1950s and it was called Tucker Automotive.
And it was such a disaster.
They made a movie about what a disaster it was.
And then rockets, like who does that?
Like that's, there's obviously no way
to start a new rocket company.
Like those days are over.
And then to do those at the same time.
So after he pulled those two off, like, okay, fine.
Like, this is one of those areas where
whatever opinions I had about it are just, like, okay,
clearly not relevant.
Like this is, you just, at some point,
you just like bet on the person.
And in general, I wish more people would lean
toward celebrating and supporting
versus deriding and destroying.
Oh yeah.
I mean, look, he drives resentment.
Like it's like, he is a magnet for resentment.
Like his critics are the most miserable,
like resentful people in the world.
Like it's almost a perfect match of, like, the most idealized
technologist of the century,
coupled with critics who are just as bitter
as can be.
I mean, it's sort of very darkly comic to watch.
Well, he fuels the fire of that by being an asshole
on Twitter at times.
Which is fascinating to watch, the drama
of human civilization, given our cult roots,
just fully on fire.
He's running a cult.
You could say that.
Very successfully.
So now that our cults are gone and we search for meaning,
what do you think is the meaning of this whole thing?
What's the meaning of life, Marc Andreessen?
I don't know the answer to that.
I think the closest I get to it
is what I said about satisfaction.
So it's basically like, okay, we were given what we have.
Like we should basically do our best.
What's the role of love in that mix?
I mean, like, what's the point of life
without love?
So love is a big part of that satisfaction.
Look, like taking care of people is like a wonderful thing.
Like, you know,
there are pathological forms of taking care of people,
but there's also a very fundamental, you know,
kind of aspect of taking care of people.
Like, for example, I happen to be somebody who believes
that capitalism and taking care of people are actually,
they're actually the same thing.
Somebody once said,
capitalism is how you take care of people you don't know.
Right?
Right, and so like, yeah,
I think it's like deeply woven into the whole thing.
You know, there's a long conversation to be had about that,
but yeah.
Yeah, creating products that are used by millions of people
and bring them joy in small and big ways.
And then capitalism kind of enables that, encourages that.
David Friedman says,
there's only three ways to get somebody to do something
for somebody else.
Love, money, and force.
Love and money are better than force.
That's a good ordering, I think.
We should bet on those.
Try love first.
If that doesn't work, then money and then force.
Well, don't even try that one.
Marc, you're an incredible person.
I've been a huge fan.
I'm glad we finally got a chance to talk.
I'm a fan of everything you do,
including on Twitter.
It's a huge honor to meet you, to talk with you.
Thanks again for doing this.
Awesome, thank you, Lex.
Thanks for listening to this conversation
with Marc Andreessen.
To support this podcast,
please check out our sponsors in the description.
And now let me leave you with some words
from Marc Andreessen himself.
The world is a very malleable place.
If you know what you want and you go for it
with maximum energy and drive and passion,
the world will often reconfigure itself around you
much more quickly and easily than you would think.
Thank you for listening and hope to see you next time.