
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.

The following is a conversation with Rob Reid, entrepreneur, author, and host of the After
On podcast.
Sam Harris recommended that I absolutely must talk to Rob about his recent work on the
future of engineered pandemics.
I then listened to the four-hour special episode of Sam's Making Sense podcast with Rob titled
Engineering the Apocalypse, and I was floored and knew I had to talk to him.
A quick mention of our sponsors: Athletic Greens, Belcampo, Fundrise, and NetSuite.
Check them out in the description to support this podcast.
As a side note, let me say a few words about the lab leak hypothesis, which proposes that
COVID-19 is a product of gain-of-function research on coronaviruses conducted at the
Wuhan Institute of Virology that was then accidentally leaked due to human error.
For context, this lab is biosafety level 4, BSL4, and it investigates coronaviruses.
BSL4 is the highest level of safety, but if you look at all the human in the loop pieces
required to achieve this level of safety, it becomes clear that even BSL4 labs are highly
susceptible to human error.
To me, whether the virus leaked from the lab or not, getting to the bottom of what happened
is about much more than this particular catastrophic case.
It is a test for our scientific, political, journalistic, and social institutions of how
well we can prepare and respond to threats that can cripple or destroy human civilization.
If we continue gain-of-function research on viruses, eventually these viruses will leak
and they will be more deadly and more contagious.
We can pretend that won't happen, or we can openly and honestly talk about the risks
involved.
This research can both save and destroy human life on Earth as we know it.
It's a powerful double-edged sword.
If YouTube and other platforms censor conversations about this, if scientists self-censor conversations about this, we will become merely victims of our brief Homo sapiens story, not its heroes.
As I said before, too carelessly labeling ideas as misinformation and dismissing them
because of that will eventually destroy our ability to discover the truth, and without
truth we don't have a fighting chance against the great filter before us.
This is the Lex Fridman podcast, and here is my conversation with Rob Reid.
I have seen evidence on the internet that you have a sense of humor, allegedly, but
you also talk and think about the destruction of human civilization.
What do you think of the Elon Musk hypothesis that the most entertaining outcome is the most likely, and he, I think, followed on to say, as seen from an external observer?
Like if somebody was watching us, it seems we come up with creative ways of progressing
our civilization.
That's fun to watch.
Yeah.
Exactly.
He said, from the standpoint of the observer, not the participant, I think.
What's interesting about that, those were, I think, just a couple of freestanding tweets
and delivered without a whole lot of wrapper of context.
It's left to the mind of the reader of the tweets to infer what he was talking about.
That's kind of like, it provokes some interesting thoughts.
First of all, it presupposes the existence of an observer, and it also presupposes that
the observer wishes to be entertained and has some mechanism of enforcing their desire
to be entertained.
There's a lot underpinning that.
To me, that suggests, particularly coming from Elon, that it's a reference to simulation
theory, that somebody is out there and has far greater insights and a far greater ability
to, let's say, peer into a single individual life and find that entertaining and full of plot twists and surprises and either a happy or tragic ending, or they have an incredible meta view, and they can watch the arc of civilization unfolding in a way that is entertaining and full of plot twists and surprises and a happy or unhappy ending.
So we're presupposing an observer.
Then on top of that, when you think about it, you're also presupposing a producer because
the act of observation is mostly fun if there are plot twists and surprises and other developments
that you weren't foreseeing.
I have reread my own novels, and that's fun because it's something that I worked hard on and slaved over and love.
But there aren't a lot of surprises in there.
So now I'm thinking we need a producer and an observer for that to be true, and on top
of that, it's got to be a very competent producer because Elon said the most entertaining outcome
is the most likely one.
So there's lots of layers for thinking about that.
And when you've got a producer who's trying to make it entertaining, it makes me think
of there was a South Park episode in which Earth turned out to be a reality show.
And somehow we had failed to entertain the audience as much as we used to, so the Earth
show was going to get canceled, et cetera.
So taking all that together, and I'm obviously being a little bit playful in laying this
out, what is the evidence that we have that we are in a reality that is intended to be
most entertaining?
Now you could look at that reality on the level of individual lives or the whole arc of civilization, or other levels as well, you know, I'm sure.
But just looking from my own life, I think I'd make a pretty lousy show.
I spend an inordinate amount of time just looking at a computer.
I don't think that's very entertaining.
And there's just a completely inadequate level of shootouts and car chases in my life.
I mean, I'll go weeks, even months without a single shootout or car chase.
That just means that you're one of the non-player characters in this game.
You're just waiting to meet...
I'm an extra.
You're an extra that's waiting for one opportunity, for a brief moment, to actually interact with one of the main characters in the play.
Okay, that's good.
So okay, so we'll rule out me being the star of the show, which I probably could have
guessed at anyway, but even the arc of civilization.
I mean, there have been a lot of really intriguing things that have happened and a lot of astounding
things that have happened.
But you know, I would have some werewolves, I'd have some zombies, you know, I would
have some really improbable developments like maybe Canada absorbing the United States.
You know, so I don't know, I'm not sure if we're necessarily designed for maximum entertainment,
but if we are, that will mean that 2020 is just a prequel for even more bizarre years
ahead.
So I kind of hope that we're not designed for maximum entertainment.
Well, the night is still young in terms of Canada, but do you think it's possible for
the observer and the producer to be kind of emergent?
So meaning it does seem when you kind of watch memes on the internet, the funny ones, the
entertaining ones spread more efficiently.
They do.
I mean, I don't know what it is about the human mind that soaks up funny things en masse much more, sort of, aggressively; it's more viral in the full sense of that word.
Is there some sense that whatever this, the evolutionary process that created our cognitive
capabilities is the same process that's going to, in an emergent way, create the most entertaining
outcome, the most meme-ifiable outcome, the most viral outcome if we were to share it on Twitter?
Yeah, that's interesting.
Yeah, we do have an incredible ability, like, I mean, how many memes are created in a given
day and the ones that go viral are almost uniformly funny, at least to somebody with
a particular sense of humor.
Right.
And yeah, I'd have to think about that.
We are definitely great at creating atomized units of funny.
Like in the example that you used, there are going to be X million brains parsing and judging
whether this meme is retweetable or not.
And so for that sort of atomic element of funniness or entertainingness, et cetera, we definitely
have an environment that's good at selecting for that and selective pressure and everything
else that's going on.
But in terms of the entire ecosystem of conscious systems here on the earth, driving for a level
of entertainment, that is on such a much higher level that I don't know if that would necessarily
follow directly from the fact that atomic units of entertainment are very aptly selected for.
I don't know.
Do you find it compelling or useful to think about human civilization from the perspective
of the ideas versus the perspective of the individual human brains?
So almost thinking about the ideas or the memes, this is the Dawkins thing as the organisms.
And then the humans as just vehicles for briefly carrying those organisms as they jump around
and spread.
Yeah.
For propagating them, mutating them, putting selective pressure on them, et cetera.
I mean, I found Dawkins' interpretation, or his launching of the idea of memes, as just kind of an afterthought to his unbelievably brilliant book, The Selfish Gene. What a P.S. to put at the end of a long chunk of writing; profoundly interesting.
I view the relationship though between humans and memes as probably an oversimplification,
but maybe a little bit like the relationship between flowers and bees, right?
Do flowers have bees or do bees in a sense have flowers?
And the answer is it is a very, very symbiotic relationship in which both have semi-independent
roles that they play and both are highly dependent upon the other.
And so in the case of bees, obviously, you could see the flowers being this monolithic
structure physically in relation to any given bee, and it's the source of food and sustenance.
So you could kind of say, well, flowers have bees.
But on the other hand, the flowers would obviously be doomed if they weren't being pollinated by the bees. So you could kind of say, well, flowers are really an expression of what the bees need.
And the truth is a symbiosis.
So with memes in human minds, our brains are clearly the Petri dishes in which memes are
either propagated or not propagated, get mutated or don't get mutated.
They are the venue in which selective competition plays out between different memes.
So all of that is very true.
And you could look at that and say, really, the human mind is a production of memes and
ideas have us rather than us having ideas.
But at the same time, let's take a catchy tune as an example of a meme.
That catchy tune did originate in a human mind.
Somebody had to structure that thing.
And as much as I like Elizabeth Gilbert's TED talk about how the universe, I'm simplifying, you know, kind of lets ideas find their way in, it's a beautiful TED talk. It's very lyrical. She talked about, you know, ideas and prose kind of beaming into our minds.
And, you know, she talked about needing to pull over to the side of the road when she
got inspiration for a particular paragraph or a particular idea and a burning need to
write that down.
I love that, I find that beautiful. As a writer, as a novelist myself,
I've never had that experience.
And I think that really most things that do become memes are the product of a great deal
of deliberate and willful exertion of a conscious mind.
And so like the bees and the flowers, I think there's a great symbiosis.
And they both kind of have one another.
Ideas have us, but we have ideas for real.
If we could take a little bit of a tangent, like Stephen King's On Writing, you, as a great writer, are dropping a hint here that the ideas don't come to you. It's a grind of sorts; it's almost like you're mining for gold. It's more of a very deliberate, rigorous daily process.
So maybe can you talk about the writing process?
How do you write well?
And maybe if you want to step outside of yourself, almost like give advice to an aspiring writer,
what does it take to write?
What is the best work of your life?
Well, it would be very different if it's fiction versus nonfiction.
And I've done both.
I've written two nonfiction books and two works of fiction. The two works of fiction being more recent, I'm going to focus on those right now because they're more toweringly on my mind.
Amongst novelists, again, this is an oversimplification, but there are kind of
two schools of thought.
Some people really like to fly by the seat of their pants, and some people really, really
like to outline, to plot.
So there's plotters and pantsers, I guess, is one way that people look at it.
And as with most things, there is a great continuum in between, and I'm somewhere on
that continuum, but I lean, I guess, a little bit more toward the plotter.
And so when I do start a novel, I have a pretty strong point of view about how it's going
to end, and I have a very strong point of view about how it's going to begin.
And I do try to make an effort of making an outline that I know I'm going to be extremely
unfaithful to in the actual execution of the story, but trying to make an outline that
gets us from here to there and notion of subplots and beats and rhythm and different characters
and so forth.
But then when I get into the process, that outline, particularly the center of it, ultimately
inevitably morphs a great deal.
And I think if I were personally a rigorous outliner, I would not allow that to happen.
I also would make a much more rigorous skeleton before I start.
So I think people who are really in that plotting outlining mode are people who write page turners,
people who write spy novels or supernatural adventures, where you really want a relentless
pace of events, action, plot twists, conspiracy, et cetera.
And that is really the bone, that's really the skeletal structure.
So I think folks who write that kind of book are really very much on the outlining side.
And I think people who write what's often referred to as literary fiction for lack of
a better term, where it's more about sort of aura and ambiance and character development
and experience and inner experience and inner journey and so forth, I think that group is
more likely to fly by the seat of the pants.
And I know people who start with a blank page and just see where it's going to go.
I'm a little bit more on the plotting side.
Now you asked what makes something at least in the mind of the writer as great as it can
be.
For me, an astonishingly high percentage of it is editing as opposed to the initial writing.
For every hour that I spend writing new prose, like new pages, new paragraphs, stuff that
you know, new bits of the book, I probably spend, I mean, I wish I kept count, like I wish I had one of those pieces of software that lawyers use to track how much time I've spent doing this or that, but I would say it's at least four or five hours and maybe as many as 10 that I spend editing.
And so it's relentless for me.
For each one hour of writing, you said?
I'd say at least that.
Wow.
I write, then I edit, and I just spend time relentlessly polishing and pruning.
And sometimes on the micro level of just like, does the rhythm of the sentence feel right?
Do I need to carve a syllable or something so it can land?
Like as micro as that to as macro as like, okay, I'm done, but the book is 750 pages
long and it's way too bloated and I need to lop a third out of it.
Problems on, you know, those two orders of magnitude and everything in between.
That is an enormous amount of my time.
And I also write music, write and record and produce music.
And there the ratio is even higher.
For every minute that I spend, or my band spends, laying down that original audio, there's a very high proportion of hours that go into just making it all hang together and sound just right.
So I think that's true of a lot of creative processes.
I know it's true of sculpture.
I believe it's true of woodwork.
My dad was an amateur woodworker and he spent a huge amount of time on sanding and polishing
at the end.
So I think a great deal of the sparkle comes from that part of the process.
Any creative process.
Can I ask about the psychological, the demon side of that picture?
In the editing process, you're ultimately judging the initial piece of work and you're
judging and judging and judging.
How much of your time do you spend hating your work?
How much time do you spend in gratitude, impressed, thankful for how good the work that you've put together is?
I spend almost all the time in a place that's intermediate between those but leaning toward
gratitude.
I spend almost all the time in a state of optimism that this thing that I have, I like
quite a bit and I can make it better and better and better with every time I go through it.
So I spend most of my time in a state of optimism.
I think I personally oscillate much more aggressively between those two where I wouldn't
be able to find the average.
I go pretty deep.
Marvin Minsky from MIT had this advice, I guess, as to what it takes to be successful in science and research: to hate everything you do, everything you've ever done in the past.
I mean, at least he was speaking about himself that the key to his success was to hate everything
he's ever done.
I have a little Marvin Minsky there in me too, to always be exceptionally self-critical, but self-critical about the work while being grateful for the chance to be able to do the work.
If that makes sense.
Makes perfect sense.
But each one of us has to strike a certain kind of balance.
But back to the destruction of human civilization.
If humans destroy ourselves in the next 100 years, what will be the most likely source,
the most likely reason that we destroy ourselves?
Well, let's see, 100 years, it's hard for me to comfortably predict out that far, and it's something I give a lot more thought to than normal folks do, simply because I'm a science fiction writer.
I feel with the acceleration of technological progress, it's really hard to foresee out more
than just a few decades.
I mean, comparing today's world to that of 1921, where we are right now, a century later, would have been so unforeseeable, and I just don't know what's going to happen, particularly with exponential technologies.
I mean, our intuitions reliably defeat us with exponential technologies like computing
and synthetic biology and how we might destroy ourselves in the 100-year time frame might
have everything to do with breakthroughs in nanotechnology 40 years from now and then
how rapidly those breakthroughs accelerate.
But in a nearer term that I'm comfortable predicting, let's say 30 years, I would say
the most likely route to self-destruction would be synthetic biology.
And I always say that with a gigantic caveat, a very important one. I'll abbreviate synthetic biology to synbio just to save us some syllables. I believe synbio offers us simply stunning promise that we would be fools to deny ourselves. So I'm not an anti-synbio person by any stretch. I mean, synbio has unbelievable odds of helping us beat cancer, helping us rescue the environment, helping us do things that we would currently find imponderable. So it's an electrifying field.
But in the wrong hands, those hands either being incompetent or being malevolent.
In the wrong hands, synthetic biology to me has much, much greater odds of leading
to our self-destruction than something running amok with super AI, which I believe is a real
possibility and what we need to be concerned about.
But in the 30-year time frame, I think it's a lesser one, as are nuclear weapons or anything else that I can think of.
Can you explain that a little bit further?
So your concern is on the manmade versus the natural side of the pandemic front here.
So we humans engineering pathogens, engineering viruses is the concern here.
And maybe, how do you see the possible trajectories happening here, in terms of, is it malevolence, or is it accidents, oops, little mistakes, or unintended consequences of particular actions that ultimately lead to unexpected mistakes?
Well, both of them are a danger.
And I think the question of which is more likely has to do with two things.
One, do we take a lot of methodical, affordable, foresighted steps that we are absolutely capable of taking right now to forestall the risk of a bad actor infecting us with something that could have annihilating impacts?
And in the episode you referenced with Sam, we talked a great deal about that.
So do we take those steps?
And if we take those steps, I think the danger of malevolent rogue actors doing us in with synbio could plummet.
But it's always a question of if, and we have a bad, bad and very long track record of hitting the snooze bar after different natural pandemics have attacked us.
So that's variable number one.
Variable number two is how much experimentation and pathogen development do we as a society
decide is acceptable in the realms of academia, government or private industry?
And if we decide as a society that it's perfectly okay for people with varying research agendas
to create pathogens that if released could wipe out humanity, if we think that's fine.
And if that kind of work starts happening in one lab, five labs, 50 labs, 500 labs in
one country, then 10 countries, then 70 countries or whatever, that risk of a boo boo starts
rising astronomically.
And this won't be a spoiler alert based on the way that I presented those two things.
But I think it's unbelievably important to manage both of those risks.
The easier one to manage, although it wouldn't be simple by any stretch because it would
have to be something that all nations agree on.
But the easier risk to manage is that of, hey, guys, let's not develop pathogens that
if they escaped from a lab could annihilate us.
There's no line of research that justifies that.
And in my view, I mean, that's the point of perspective we'd need to have.
We'd have to collectively agree that there's no line of research that justifies that.
The reason why I believe that would be a highly rational conclusion is even the highest level
of biosafety lab in the world, biosafety level four.
And there are not a lot of BSL-4 labs in the world.
Things have leaked out of BSL-4 labs.
And some of the work that's been done with potentially annihilating pathogens, which
we can talk about, is actually done at BSL-3.
And so fundamentally, any lab can leak.
We have proven ourselves to be incapable of creating a lab that is utterly impervious
to leaks.
So why in the world would we create something where, if God forbid, it leaked, could annihilate
us all?
And by the way, almost all of the measures that are taken in biosafety-level-anything labs are designed to prevent accidental leaks.
What happens if you have a malevolent insider?
And we could talk about the psychology and the motivations of what would make a malevolent insider who wants to release something annihilating in a bit.
I'm sure that we will.
But what if you have a malevolent insider?
Virtually none of the standards that go into biosafety levels one, two, three, and four are about preventing somebody hijacking the process.
I mean, some of them are, but they're mainly designed against accidents.
They're imperfect against accidents.
And if this kind of work starts happening in lots and lots of labs, with every lab you
add, the odds of there being a malevolent insider naturally increase arithmetically as the number
of labs goes up.
Now, on the front of somebody outside of a traditional government, academic, or scientific environment creating something malevolent, again, there are protections that we can take, both at the level of synbio architecture, hardening the entire synbio ecosystem against terrible things being made that we don't want to have out there
by rogue actors, to early detection, to lots and lots of other things that we can do to
dramatically mitigate that risk.
And I think if we do both of those things, decide that, A, no, we're not going to experimentally make annihilating pathogens in leaky labs, and B, yes, we are going to take countermeasures that are going to cost a fraction of our annual defense budget to preclude their creation, then I think both risks get managed down.
But if you take one set of precautions and not the other, then the thing that you have
not taken precautions against immediately becomes the more likely outcome.
So can we talk about this kind of research and what's actually done and what are the
positives and negatives of it?
So if we look at gain-of-function research and the kind of stuff that's happening in level three and level four BSL labs, what's the whole idea here?
Is it trying to engineer viruses to understand how they behave?
You want to understand the dangerous ones?
Yeah.
So that would be the logic behind doing it.
And so gain-of-function can mean a lot of different things. Viewed through a certain lens, gain-of-function research could be what you do when you create GMOs, when you create hardy strains of corn that are resistant to pesticides, and you could view that as gain-of-function. So I'm going to refer to gain-of-function in a relatively narrow sense, which is actually the sense that the term is usually used, which is in some way magnifying capabilities of microorganisms to make them more dangerous, whether it's more transmissible or more deadly.
And in that line of research, I'll use an example from 2011 because it's very illustrative
and it's also very chilling.
Back in 2011, two separate labs, independently of one another, I assume there was some kind of communication between them, but they were basically independent projects, one in Holland and one in Wisconsin, did gain-of-function research on something called H5N1 flu.
H5N1 is something that, at least on a lethality basis, makes COVID look like a kitten.
COVID, according to the World Health Organization, has a case fatality rate somewhere between
half a percent and one percent, H5N1 is closer to 60%, 6-0.
And so that's actually even slightly more lethal than Ebola.
It's a very, very, very scary pathogen.
The good news about H5N1 is that it is barely, barely contagious.
But I believe it is in no way contagious human to human.
It requires very, very, very deep contact with birds, in most cases chickens.
And so if you're a chicken farmer and you spend an enormous amount of time around them
and perhaps you get into situations in which you get a break in your skin and you're interacting
intensely with fowl who, as it turns out, have H5N1, that's when the jump comes.
But there's no airborne transmission that we're aware of human to human. It just doesn't exist.
I think the World Health Organization did a relentless survey of the number of H5N1 cases.
I think they do it every year.
I saw one 10-year series where I think it was like 500 fatalities over the course of
a decade.
And that's a drop in the bucket, a kind of fun fact.
I believe the typical lethality from lightning over 10 years is 70,000 deaths.
So we think getting struck by lightning, pretty low risk, H5N1 much, much lower than that.
What happened in these experiments is the experimenters in both cases set out to make
H5N1 that would be contagious, that could create airborne transmission.
And so they basically passed it, I think in both cases, they passed it through a large
number of ferrets.
And so this wasn't like CRISPR; there wasn't even CRISPR back in those days.
This was relatively straightforward selecting for a particular outcome.
And after guiding the path and passing it through, again, I believe it was a series of ferrets, they did in fact come up with a version of H5N1 that is capable of airborne transmission.
Now, they didn't unleash it into the world.
They didn't inject it into humans to see what would happen.
And so for those two reasons, we don't really know how contagious it might have been.
But if it was as contagious as COVID, that could be a civilization threatening pathogen.
And why would you do it?
Well, the people who did it were good guys, they were virologists.
I believe their agenda as they explained it was, much as you said, let's figure out what
a worst case scenario might look like so we could understand it better.
But my understanding is in both cases it was done in BSL3 labs.
And so potential of leak, significantly non-zero, hopefully way below 1%, but significantly non-zero.
And when you look at the consequences of an escape in terms of human lives, destruction
of a large portion of the economy, et cetera, and you do an expected value calculation on
whatever fraction of 1% that was, you would come up with a staggering cost, staggering
expected cost for this work.
So it should never have been carried out.
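To make the expected-value reasoning above concrete, here is a minimal back-of-the-envelope sketch in Python. The leak probability and pandemic cost figures are illustrative assumptions, not numbers from the conversation.

```python
# Back-of-the-envelope expected-cost sketch for risky pathogen research.
# Both numbers below are illustrative assumptions, not figures from the conversation.
leak_probability = 0.001    # assume a 0.1% chance of an accidental escape
pandemic_cost_usd = 10e12   # assume a COVID-scale pandemic costs roughly $10 trillion

expected_cost = leak_probability * pandemic_cost_usd
print(f"Expected cost of running the experiment: ${expected_cost:,.0f}")
# Even at a 0.1% leak probability, the expected cost is on the order of $10 billion,
# which is the point of the argument: a tiny probability multiplied by a
# civilization-scale loss still yields a staggering expected cost.
```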
Now you might make an argument.
If you believed that, A, H5N1 in nature is on an inevitable path to airborne transmission and it's only going to be a small number of years, and B, that if it makes that transition, there is one set of changes to its metabolic pathways and its genomic code and so forth that would get it there, one that we have discovered.
So it is going to go from point A, which is where it is right now, to point B. We have
reliably engineered point B. That is the destination.
And we need to start fighting that right now because this is five years or less away.
Now that'd be a very different world.
That'd be like spotting an asteroid that's coming toward the Earth and is five years
off.
And yes, you marshal everything you can to resist that.
But there's two problems with that perspective.
The first is, in however many thousands of generations that humans have been inhabiting this planet, there has never been a transmissible form of H5N1.
And influenza's been around for a very long time.
So there is no case for inevitability of this kind of a jump to airborne transmission.
So we're not on a freight train to that outcome.
And if there was inevitability around that, it's not like there's just one set of genetic
code that would get there.
There's all kinds of different mutations that could conceivably result in that kind
of an outcome, unbelievable diversity of mutations.
And so we're not actually creating something we're inevitably going to face.
But we are creating something, we are creating a very powerful and unbelievably negative card and injecting it into the deck, a card that nature never put into the deck.
So in that case, I just don't see any moral or scientific justification for that kind
of work.
And interestingly, there was quite a bit of excitement and concern about this when the
work came out.
One of the teams was going to publish their results in science, the other in nature.
And there were a lot of editorials and a lot of scientists are saying, this is crazy.
And publication of those papers did get suspended.
And not long after that, there was a pause put on US government funding, NIH funding
on gain of function research.
But both of those speed bumps were ultimately removed.
Those papers did ultimately get published and that pause on funding ceased long ago.
And in fact, those two very projects, my understanding is, have resumed their funding, got their government funding back, I don't know why a Dutch project's getting NIH funding, but whatever, about a year and a half ago.
So as far as the US government and regulators are concerned, it's all systems go for gain
of function at this point, which I find very troubling.
Now, I'm a little bit of an outsider from this field, but it has echoes of the same
kind of problem I see in the AI world with autonomous weapons systems.
Nobody among my colleagues, my friends, as far as I can tell, people in the AI community, are really talking about autonomous weapons systems, as now the US and China go full steam ahead on the development of both.
And that seems to be a similar kind of thing on gain of function.
I have friends in the biology space and they don't want to talk about gain of function publicly.
And that makes me very uncomfortable from an outsider perspective in terms of gain of
function.
It makes me very uncomfortable from the insider perspective on autonomous weapons systems.
I'm not sure how to communicate exactly about autonomous weapons systems and I certainly
don't know how to communicate effectively about gain of function.
What is the right path forward here?
Do we cease all gain-of-function research?
Is that really the solution here?
Well, again, I'm going to use gain of function in the relatively narrow context of what we're
discussing.
Yes, for viruses.
You could say almost anything that you do to make biology more effective is gain of function.
So within the narrow confines of what we're discussing, I think it would be easy enough for level-headed people, level-headed governmental people in all the countries that realistically could support such a program, to agree: we don't want this to happen, because all labs leak.
I mean, an example that I used in the piece I did with Sam Harris as well is the anthrax
attacks in the United States in 2001.
Talk about an example of the least likely lab leaking into the least likely place.
This was shortly after 9-11, for folks who don't remember it, and it was a very, very lethal strain of anthrax that, as it turned out, based on the forensic genomic work that was
done and so forth, absolutely leaked from a high-security U.S. Army lab, probably the
one at Fort Detrick in Maryland.
It might have been another one, but who cares?
It absolutely leaked from a high-security U.S. Army lab.
And where did it leak to, this highly dangerous substance that was kept under lock and key
by a very security-minded organization?
Well, it leaked to places including the Senate Majority Leader's office, Tom Daschle's office. I think it was also Senator Leahy's office, certain publications, including, bizarrely, the National Enquirer.
But let's go to the Senate Majority Leader's Office.
It is hard to imagine a more security-minded country than the United States two weeks after
the 9-11 attack.
I mean, it doesn't get more security-minded than that.
And it's also hard to imagine a more security-capable organization than the United States military.
We can joke all we want about inefficiencies in the military and $24,000 wrenches and so
forth, but pretty capable when it comes to that.
And despite that level of focus and concern and competence, just days after the 9-11 attack,
something comes from the inside of our military-industrial complex and ends up in the office of someone who, I believe, as Senate Majority Leader, was somewhere in the line of presidential succession.
It tells us everything can leak.
So again, think of a level-headed conversation between powerful leaders in a diversity of
countries, thinking through, like I can imagine, a very simple PowerPoint, revealing, just discussing briefly, things like the anthrax leak, things like the foot-and-mouth disease outbreak, or leak, that came out of a BSL-4-level lab in the UK, several other things.
Talking about the utter virulence that could result from gain of function and say, folks,
can we agree that this just shouldn't happen?
I mean, if we were able to agree on the Nuclear Nonproliferation Treaty, which we were, or the Biological Weapons Convention, which we did agree on, we the world, for the most part, then I believe agreement could be found there.
But it's going to take people in leadership of a couple of very powerful countries to
get to consensus amongst them and then to decide, we're going to get everybody together
and browbeat them into banning this stuff.
Now, that doesn't make it entirely impossible that somebody might do this.
But in well-regulated, carefully watched-over fiduciary environments, like federally funded
academic research, anything going on in the government itself, things going on in companies
that have investors who don't want to go to jail for the rest of their lives, I think
that would have a major, major dampening impact on it.
But there is a particular possible catalyst, in this time we live in, for really kind of raising the question of gain-of-function research as applied to viruses, to making viruses more dangerous, which is the question of whether COVID leaked from a lab.
Sort of not even answering that question, but even asking that question.
It seems like a very important question to ask to catalyze the conversation about whether
we should be doing gain of function research.
From a high level, one, why do you think people, even colleagues of mine, are not comfortable asking that question, and two, do you think that the answer could be that it did leak from a lab?
I think the mere possibility that it did leak from a lab is evidence enough, again, for
the hypothetical, rational national leaders watching this simple PowerPoint, if you could
put the possibility at 1% and you look at the unbelievable destructive power that COVID
had, that should be an overwhelmingly powerful argument for excluding it.
Now, as to whether or not that was a leak, some very, very level-headed people believe it was. I don't know enough about all of the factors in the Bayesian analysis and so forth that have gone into people making the pro argument of that.
So I don't pretend to be an expert on that, and I don't have a point of view.
I just don't know.
But what we can say is it is entirely possible for a couple of reasons.
One is that there is a BSL4 lab in Wuhan, the Wuhan Institute of Virology.
I believe it's the only BSL4 in China, I could be wrong about that, but it definitely had
a history that alarmed very sophisticated US diplomats and others who were in contact
with the lab and were aware of what it was doing long before COVID hit the world.
And so there are diplomatic cables that have been declassified, I believe one sophisticated
scientist or other observer said that WIV is a ticking time bomb.
And I believe it's also been pretty reasonably established that coronaviruses were a topic
of great interest at WIV.
SARS obviously came out of China and that's a coronavirus, so it would make an enormous
amount of sense for it to be studied there.
And there is so much opacity about what happened in the early days and weeks after the outbreak
that's basically been imposed by the Chinese government that we just don't know.
So it feels like a substantial, or greater than 1%, possibility to me, looking at it from the outside.
And that's something that one could imagine.
Now we're going to the realm of thought experiment, not me decreeing this is what happened, but
if they're studying coronavirus at the Wuhan Institute of Virology and there is this precedent
of gain-of-function research that's been done on something that is remarkably uncontagious to humans, whereas we know coronavirus is contagious to humans, I could definitely see it. And there is this global consensus, certainly this was the case two or three years ago when this work might have started, there seems to be this global consensus that gain-of-function is fine.
The US paused funding for a little while, but it only paused funding. They never said private actors couldn't do it; it was just a pause of NIH funding.
And then that pause was lifted.
So again, none of this is irrational.
You could certainly see the folks at WIV saying, gain-of-function, interesting vector; coronavirus, unlike H5N1, very contagious.
We're a nation that has had terrible run-ins with coronavirus, why don't we do a little
gain of function on this?
And then like all labs at all levels, one could imagine this lab leaking.
So it's not an impossibility, and very, very level-headed people have said that who've
looked at it much more deeply do believe in that outcome.
Why is it such a threat to power, the idea that it leaked from a lab?
Why is it so threatening?
Maybe I don't understand this point exactly.
Is it just that as governments, and especially the Chinese government is really afraid of
admitting mistakes that everybody makes?
So this is a horrible, like Chernobyl is a good example, I come from the Soviet Union.
I mean, well, major mistakes were made in Chernobyl.
I would argue for a lab leak to happen, the scale of the mistake is much smaller.
The depth and the breadth of a rot in bureaucracy that led to Chernobyl is much bigger than
anything that could lead to a lab leak, because it could literally just be, I mean, I'm sure there are very careful security procedures, even in level three labs, but I imagine, and maybe you can correct me, all it takes is the incompetence of a small number of individuals, one individual over a particular couple-of-weeks, three-week period, as opposed to a multi-year bureaucratic failure of the entire government.
Right.
Well, certainly the magnitude of mistakes and compounding mistakes that went into Chernobyl
was far, far, far greater.
But the consequence of COVID outweighs the consequence of Chernobyl to a tremendous degree.
And I think that particularly authoritarian governments are unbelievably reluctant to
admit to any fallibility whatsoever, and there's a long, long history of that across dozens
and dozens of authoritarian governments.
And to be transparent, again, this is in the hypothetical world in which this was a leak,
which again, I don't personally have enough sophistication to have an opinion on the likelihood.
But in the hypothetical world in which it was a leak, the global reaction and the amount
of global animus and the amount of, you know, the decline in global respect that would happen
toward China, because every country suffered massively from this, unbelievable damages
in terms of human lives and economic activity disrupted.
The world would in some way present China with that bill.
And when you take on top of that the natural disinclination for any authoritarian government
to admit any fallibility and tolerate the possibility of any fallibility whatsoever,
and you look at the relative opacity, even though they let a World Health Organization
group in, you know, a couple of months ago to run around, they didn't give that WHO group anywhere near the level of access that would be necessary to definitively say X happened versus Y. Given the level of opacity that surrounds those opening weeks and months of COVID in China, we just don't know.
If you were to kind of look back at 2020, and maybe broaden it out to future pandemics that could be much more dangerous, what kind of response, how did we fail in our response, and how could we do better?
So the gain-of-function research discussion is about the question of whether we should be creating viruses that are both exceptionally contagious and exceptionally deadly to humans. But if it does happen, perhaps through natural evolution, natural mutation, are there interesting technological responses on the testing side, on the vaccine development side, on the collection of data, or on the basic sort of policy response side, or the sociological, the psychological side?
Yeah, there's all kinds of things.
And most of what I've thought about and written about, and again, discussed in that long bit
with Sam, is dual use.
So most of the countermeasures that I've been thinking about and advocating for would be
every bit as effective against a zoonotic disease, a natural pandemic of some sort, as against an artificial one.
The risk of an artificial one, even the near term risk of an artificial one, ups the urgency
around these measures immensely, but most of them would be broadly applicable.
And so I think the first thing that we really want to do on a global scale is have a far,
far more robust and globally transparent system of detection.
And that can happen on a number of levels.
The most obvious one is just in the blood of people who come into clinics exhibiting
signs of illness.
And we are certainly at a point now where, with relatively minimal investment, we could develop in-clinic diagnostics that would be unbelievably effective at pinpointing what's going on with almost any disease when somebody walks into a doctor's office or a clinic.
And better than that, and this is a little bit further off, but it wouldn't cost tens of billions in research dollars, it would be a relatively modest and affordable budget in relation to the threat: at-home diagnostics that can really, really pinpoint things, particularly with respiratory infections, because that is generally, almost universally, the mechanism of transmission for any serious pandemic.
So somebody has a respiratory infection, is it one of the significantly large handful
of rhinoviruses, coronaviruses, and other things that cause the common cold?
Or is it influenza?
If it's influenza, is it influenza A versus B?
Or is it a small handful of other more exotic, but nonetheless sort of common respiratory
infections that are out there?
Having a diagnostic panel to pinpoint all of that stuff, that's something that's well
within our capabilities.
That's much less a lift than creating mRNA vaccines, which obviously we proved capable
of when we put our minds to it.
So do that on a global basis.
And I don't think that's irrational because the best prototype for this that I'm aware
of isn't currently rolling out in Atherton, California, or Fairfield County, Connecticut,
or some other wealthy place.
The best prototype that I'm aware of this is rolling out right now in Nigeria.
And it's a project that came out of the Broad Institute, which, as I'm sure you know, but some listeners may not, is kind of like an academic joint venture between Harvard and MIT.
The program is called Sentinel.
And their objective is, and their plan, and it's a very well-conceived plan, a methodical
plan, is to do just that in areas of Nigeria that are particularly vulnerable to zoonotic
diseases, making the jump from animals to humans.
But also there's just an unbelievable public health benefit from that.
And it's sort of a three-tier system where clinicians in the field could very rapidly
determine, do you have one of the infections of acute interest here, either because it's
very common in this region, so we want to diagnose as many things as we can at the front
line, or because it's uncommon but unbelievably threatening like Ebola.
So a front-line worker can make that determination very, very rapidly.
If it comes up as a we don't know, they bump it up to a level that's more like at a fully
configured doctor's office or local hospital.
And if it's still at a we don't know, it gets bumped up to a national level.
And that gets bumped very, very rapidly.
So if this can be done in Nigeria, and it seems that it can be, there shouldn't be any inhibition
for it to happen in most other places.
And it should be affordable from a budgetary standpoint.
And based on Sentinel's budget, and adjusting for things like very different costs of living, larger population, et cetera, I did a back-of-the-envelope calculation that
doing something like Sentinel in the US would be in the low billions of dollars.
And wealthy countries, middle income countries, can afford such a thing.
Lower income countries should certainly be helped with that, but start with that level
of detection.
And layer on top of that other interesting things like monitoring search engine traffic,
search engine queries for evidence that strange clusters of symptoms are starting to rise
in different places.
There's been a lot of work done with that.
Most of it kind of academic and experimental, but some of it has been powerful enough to
suggest that this could be a very powerful early warning system.
There's a guy named Bill Lampos at University College London who basically did a very rigorous
analysis that showed that symptom searches reliably predicted COVID outbreaks in the
early days of the pandemic in given countries by as much as 16 days before the evidence started to accrue at a public health level.
16 days of forewarning can be monumentally important in the early days of an outbreak.
And this is a very, very talented, but nonetheless very resource constrained academic project.
Imagine if that was something that was done with a NORAD-like budget.
So I mean, starting with detection, that's something we could do radically, radically
better.
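The early-warning idea described above can be illustrated with a small sketch: check how strongly a symptom-search time series correlates with reported cases when shifted forward by different numbers of days. This is an illustration of the general idea only, not the UCL methodology, and the data below are synthetic placeholders.

```python
# Sketch of a lagged-correlation early-warning check on synthetic data.
import numpy as np

def best_lead_time(search_volume, case_counts, max_lag_days=21):
    """Return the lag (in days) at which search volume best predicts later case counts."""
    best_lag, best_corr = 0, -1.0
    for lag in range(1, max_lag_days + 1):
        # Compare searches on day t with cases on day t + lag.
        corr = np.corrcoef(search_volume[:-lag], case_counts[lag:])[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# Synthetic placeholder data: an outbreak wave whose search signal peaks ~16 days earlier.
days = np.arange(120)
rng = np.random.default_rng(0)
cases = 1000 * np.exp(-((days - 80) / 15.0) ** 2) + rng.normal(0, 5, 120)
searches = 1000 * np.exp(-((days - 64) / 15.0) ** 2) + rng.normal(0, 5, 120)

lag, corr = best_lead_time(searches, cases)
print(f"Search signal leads reported cases by ~{lag} days (correlation {corr:.2f})")
```

In a real system the search series would come from aggregated query data and the case series from public health reporting; the point is simply that the lead time is measurable.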
So aggregating multiple data sources in order to create something, I mean, this is really exciting to me, the possibility that I've heard inklings of, of creating almost like a weather map of pathogens. Basically aggregating all of these data sources, scaling up at-home testing by many orders of magnitude, and all kinds of testing that doesn't just try to test for the particular pathogen of worry now, but everything, like a full spectrum of things that could be dangerous to the human body, and thereby being able to create these maps, dynamically updated on an hourly basis, of how viruses travel throughout the world.
And so you can respond, you can then integrate it just like you do when you check your weather map to see whether it's raining or not. Of course it's not perfect, but it's a very good predictor of whether it's going to rain, and you use that to then make decisions about your own life.
Ultimately you give the power of information to individuals to respond.
And if it's super dangerous, like if it's acid rain versus regular rain, you might want to really stay inside as opposed to risking it.
And just like you said, I think it's not very expensive relative to all the things that we do in this world, but it does require bold leadership.
And there's another dark thing which really has been bothering me about 2020, which is that it requires trust in institutions to carry out these kinds of programs, and requires trust in science and engineers and sort of centralized organizations that would operate at scale here.
And much of that trust has been, at least in the United States, diminished.
It feels like I'm not exactly sure where to place the blame, but I do place quite a bit
of the blame into the scientific community.
And again, my fellow colleagues, in speaking down to people at times, speaking from authority, it sounded like they dismissed the basic human experience or the basic common humanity of people, in a way that almost sounded like there was an agenda hidden behind the words the scientists spoke, like they're trying to, in a self-preserving way, control the population or something like that.
I don't think any of that is true from the majority of the scientific community, but
it sounded that way.
And so the trust began to diminish.
I'm not sure how to fix that except to be more authentic, be more real, acknowledge the uncertainties under which we operate, acknowledge the mistakes that scientists make, that institutions make; the leak from the lab is a perfect example. We have imperfect systems that make all the progress you see in the world, and being honest about that imperfection, I think, is essential for forming trust.
But I don't know what to make of it.
It's been deeply disappointing because I do think, just like you mentioned, the solutions
require people to trust the institutions with their data.
Yeah.
I think part of the problem is, it seems to me as an outsider that there was a bizarre
unwillingness on the part of the CDC and other institutions to admit to, to frame and to
contextualize uncertainty.
Maybe they had a patronizing idea that these people need to be told, and when they're told,
they need to be told with authority and a level of definitiveness and certitude that
doesn't actually exist.
And so when they whipsaw on recommendations like what you should do about masks, when
the CDC is kind of at the very beginning of the pandemic saying, masks, don't do anything,
don't wear them, when the real driver for that was, we don't want these clowns going
out and depleting Amazon of masks because they may be needed in medical settings and
we just don't know yet.
I think a message that actually respected people and said, this is why we're asking
you not to do masks yet, and there's more to be seen, would be less whipsawing and would make people feel more like they're part of the conversation and that they're being treated like adults, than saying one day, definitively, masks suck, and then X days later saying, nope, damn it, wear masks.
And so I think we should frame things in terms of the probabilities, which are easy for most people to parse.
A more recent example, which I just thought was batty, was suspending the Johnson & Johnson
vaccine for a very low single digit number of days in the United States based on the
fact that I believe there had been seven-ish clotting incidents in roughly 7 million people
who had had the vaccine administered, I believe one of which resulted in a fatality.
There was definitely suggestive data that indicated that there was a relationship.
This wasn't just coincidental because I think all of the clotting incidents happened in
women as opposed to men and kind of clustered in a certain age group.
But does that call for shutting off the vaccine or does it call for leveling with the American
public and saying, we've had one fatality out of 7 million?
This is, let's just assume, substantially less than the likelihood of getting struck
by lightning.
Based on that information, and we're going to keep you posted because you can trust us
to keep you posted, based on that information, please decide whether you're comfortable
with the Johnson & Johnson vaccine.
That would have been one response, and I think people would have been able to parse the simple
bits of data and make their own judgment.
By turning it off, all of a sudden, there's this dramatic signal to people who don't read
all 900 words in the New York Times piece that explains why it's being turned off, but
just see the headline, which is a majority of people.
There's a sudden like, oh my god, yikes, vaccine being shut off.
And then all the people who sat on the fence or are sitting on the fence about whether
or not they trust vaccines, that is going to push an incalculable number of people.
That's going to be the last straw for, we don't know how many, hundreds of thousands or more likely millions of people, to say, okay, tipping point here, I'm not going to trust these vaccines.
So by pausing that for whatever it was, 10 or 12 days, and then flipping the switch back on, as everybody who knew much about the situation knew was inevitable, by flipping the on switch 12 days later, you go from conveying certitude, J&J bad, to certitude, J&J good, in a period of just a few days, and people just feel whipsawed, and they're not part of the analysis.
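To put the figures cited here in perspective, a quick arithmetic sketch using the numbers from the conversation (roughly 7 clotting incidents and 1 fatality in about 7 million doses); the lightning comparison is a commonly cited rough estimate, added here as an assumption.

```python
# Rough risk arithmetic using the figures mentioned in the conversation.
doses = 7_000_000
clotting_events = 7
fatalities = 1

print(f"Clotting risk per dose: about 1 in {doses // clotting_events:,}")  # ~1 in 1,000,000
print(f"Fatality risk per dose: about 1 in {doses // fatalities:,}")       # ~1 in 7,000,000

# For comparison (an assumption, not from the conversation): commonly cited estimates put
# the odds of being struck by lightning in the US at very roughly one in a million in a
# given year, i.e., the same order of magnitude as the clotting risk above.
```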
But it's not just the whipsawing.
And I think about this quite a bit, I don't think I have good answers.
It's something about the way the communication actually happens.
Just, I don't know what it is about Anthony Fauci, for example, but I don't trust him.
And I think that has to do, I mean, he has an incredible background.
I'm sure he's a brilliant scientist and researcher.
I'm sure he's also a great, like inside the room, policymaker and deliberator and so on.
But what makes a great leader is something about that thing that you can't quite describe,
but being a communicator that you know you can trust, that there's an authenticity that's
required.
And I'm not sure, maybe I'm being a bit too judgmental, but I'm a huge fan of a lot of
great leaders throughout history.
They've communicated exceptionally well in the way that Fauci does not.
And I think about that, I think about what has affected science communication.
So great leaders throughout history did not necessarily need to be great science communicators.
Their leadership was in other domains, but when you're fighting the virus, you also have
to be a great science communicator.
You have to be able to communicate uncertainties.
You have to be able to communicate about something like a vaccine that you're allowing inside your body, into the messiness, into the complexity of a biological system that, if we're being honest, is so complex we'll never be able to really understand it. We can only desperately hope that science can give us a high likelihood that there are no short-term negative consequences, and some kind of intuition about long-term negative consequences, as we do our best in this battle against trillions of things that are trying to kill us.
I mean, being an effective communicator in that space is very difficult, but I think
about what it takes because I think there should be more science communicators that
are effective at that kind of thing.
Let me ask you about something that's sort of more in the AI space, that I think about, that kind of goes along this thread you've spoken about, democratizing the technology that could destroy human civilization: the amazing work from DeepMind, AlphaFold 2, which achieved incredible performance on the protein folding problem, the single protein folding problem. Do you think about the use of AI in the synbio space? The gain-of-function virus research that you refer to, I think, uses natural mutations, sort of aggressively mutating the virus until you get one that is both contagious and deadly. But what about then using AI, through simulation, to be able to compute deadly viruses or any kind of biological systems? Is this something you're worried about, or again, is this something you're more excited about?
I think computational biology is an unbelievably exciting and promising field, and I think when
you're doing things in silico as opposed to in vivo, the dangers plummet.
You don't have a critter that can leak from a leaky lab.
So I don't see any problem with that, except I do worry about the data security dimension
of it, because if you were doing really, really interesting in silico gain-of-function research
and you hit upon, through a level of sophistication we don't currently have, but synthetic biology
is an exponential technology, so capabilities that are utterly out of reach today will be
attainable in five or six years.
I think if you conjured up worst-case genomes of viruses that don't exist in vivo anywhere,
they're just in the computer space, but, like, hey guys, this is a genetic sequence that
would end the world, let's say, then you have to worry about the utter hackability of every
computer network we can imagine, and data leaks from the least likely places on the grandest
possible scales have happened and continue to happen and will probably always continue
to happen, and so that would be the danger of doing the work in silico.
If you end up with a list of, well, these are things we never want to see, and that list
leaks, then after the passage of some time, it certainly couldn't be done today, but after the passage
of some time, lots and lots of people in academic labs, going all the way down to the
high school level, will be in a position to, to make it overly simplistic, hit print on a genome
and have the virus bearing that genome pop out on the other end, and you've got something
to worry about. But in general, computational biology, I think, is incredibly important, particularly
because the crushing majority of work that people are doing with the protein folding
problem and other things is about creating therapeutics, about creating things that will
help us live better, live longer, thrive, be a bit more well, and so forth. And the protein
folding problem is a monstrous computational challenge that we seem to have made just the most
glacial progress on for years and years, but I think there's a biennial competition in
which people tackle the protein folding problem, and DeepMind's entrant in both 2018 and 2020 ruled the field.
And so protein folding is an unbelievably important thing if you want to start thinking about
therapeutics, because it's the folding of the protein that tells us where the channels
and the receptors and everything else are on that protein, and it's from that precise
model, if we can get to a precise model, that you can start barraging it, again in silico,
with thousands, tens of thousands, millions of potential therapeutics and see what resolves
the problems, the shortcomings of a misshapen protein; for instance, for somebody with cystic
fibrosis, how might we treat that?
So I see nothing but good in that.
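To make that screening idea concrete, here is a minimal, purely illustrative Python sketch of scoring a library of candidate molecules against a predicted protein structure; the dock_score function, the structure name, and the molecule names are hypothetical stand-ins, not a real docking API.

```python
# Hypothetical sketch of in-silico screening: score candidate molecules
# against a predicted protein structure and keep the top hits.
# dock_score is a placeholder for a real binding-affinity or docking model.
import random

def dock_score(structure: str, molecule: str) -> float:
    # Placeholder score; a real pipeline would estimate how well the molecule
    # binds the folded protein's channels and receptors.
    rng = random.Random(hash((structure, molecule)))
    return rng.random()

def screen(structure: str, library: list[str], top_n: int = 3) -> list[tuple[str, float]]:
    scored = [(mol, dock_score(structure, mol)) for mol in library]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

if __name__ == "__main__":
    candidates = [f"molecule_{i}" for i in range(10_000)]
    print(screen("hypothetical_CFTR_structure", candidates))
```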
Well let me ask you about fear and hope in this world.
I tend to believe, in terms of competence and malevolence, and maybe it's just in my
interactions, but first of all, I believe that most people are good and want to do good,
and are just better at doing good and more inclined to do good in this world. And more
than that, people who are malevolent are usually incompetent at building technology.
So I've seen this in my life that people who are exceptionally good at stuff, no matter
what the stuff is, tend to maybe they discover joy in life in a way that gives them fulfillment
and thereby does not result in them wanting to destroy the world.
So the better you are at stuff, whether that's building nuclear weapons or plumbing, doesn't
matter, the less likely you are to destroy the world.
So in that sense with many technologies, AI especially, I always think that the malevolent
will be far outnumbered by the ultra competent and in that sense the defenses will always
be stronger than the offense in terms of the people trying to destroy the world.
Now there's a few spaces where that might not be the case, and that's an interesting conversation,
where one person who's not very competent can destroy the whole world.
Perhaps synbio is one such space because of the exponential effects of the technology.
I tend to believe AI is not one of those such spaces, but do you share this kind of view
that the ultra competent are usually also the good?
Yeah, absolutely.
I absolutely share that and that gives me a great deal of optimism that we will be able
to short circuit the threat that malevolence in bio could pose to us, but we need to start
creating those defensive systems or defensive layers, one of which we talked about far,
far better surveillance in order to prevail.
So the good guys will almost inevitably outsmart and definitely outnumber the bad guys in most
sort of smackdowns that we can imagine, but the good guys aren't going to be able to exert
their advantages unless they have the imagination necessary to think about what the worst possible
thing can be done by somebody whose own psychology is completely alien to their own.
So that's a tricky, tricky thing to solve for.
Now in terms of whether the asymmetric power that a bad guy might have in the face of the
overwhelming numerical advantage and competence advantage that the good guys have, unfortunately
I look at something like mass shootings as an example.
I'm sure the guy who was responsible for the Vegas shooting or the Orlando shooting or
any other shooting that we can imagine didn't know a whole lot about ballistics. And the
number of good-guy citizens in the United States with guns compared to bad-guy citizens
is, I'm sure, a crushingly overwhelming ratio in favor of the good guys.
But that doesn't make it possible for us to stop mass shootings.
An example is Fort Hood, 45,000 trained soldiers on that base, yet there have been two mass
shootings there.
And so there is an asymmetry when you have powerful and lethal technology that gets so
democratized and so proliferated in tools that are very, very easy to use, even by a
knucklehead.
When those tools get really easy to use by a knucklehead and they're really widespread,
it becomes very, very hard to defend against all instances of usage.
Now the good news, quote unquote, about mass shootings, if there is any, and there is some,
is that even the most brutal, carefully planning, and well-armed mass shooter can only take
so many victims.
And the same is true, there have been four instances that I'm aware of of commercial pilots committing
suicide by downing their planes and taking all their passengers with them.
These weren't Boeing engineers, but, like, an army of Boeing engineers ultimately was not
capable of preventing that.
But even in their case, and I'm actually not counting 9/11 in that, 9/11 is a different
category in my mind, these are just personally suicidal pilots.
In those cases, they only have a planeload of people that they're able to take with them.
If we imagine a highly plausible and imaginable future in which synbio tools that are amoral,
that could be used for good or for ill, start embodying unbelievable sophistication and
genius in the tool, in the easier and easier and easier to make tool, all those thousands,
tens of thousands, hundreds of thousands of scientist-years start getting embodied in
something that may be as simple as hitting a print button, then that good-guy technology
can be hijacked by a bad person and used in a very asymmetric way.
I think what happens though, as you go from the current very specific set of labs that
are able to do it to the high school student, as it becomes more and more democratized,
as it becomes easier and easier to do this large-scale damage with an engineered virus,
the more and more there will be engineering of defenses against these systems, some of
the things we talked about in terms of testing, in terms of collection of data, but also in
terms of large-scale contact tracing, or also engineering of vaccines in a matter of days, maybe hours,
maybe minutes.
I feel like the defenses, that's what human species seems to do, is we keep hitting the
snooze button until there's a storm on the horizon heading towards us, then we start
to quickly build up the defenses or the response that's proportional to the scale of the storm.
Of course, again, certain kinds of exponential threats require us to build up the defenses
way earlier than we usually do, and that's I guess the question, but I ultimately am
hopeful that the natural process of hitting the snooze button until the deadline is right
in front of us will work out for quite a long time for us humans.
I fully agree.
I mean, that's why I'm fundamentally, I may not sound like it thus far, but I'm fundamentally
very, very optimistic about our ability to short-circuit this threat, because there is,
again, I'll stress, the technological feasibility and the profound affordability of a relatively
simple set of steps that we can take to preclude it, but we do have to take those steps.
What I'm hoping to do and trying to do is inject a notion of what those steps are into
the public conversation and do my small part to up the odds that that actually ends up
happening.
The danger with this one is it is exponential, and I think that our minds fundamentally
struggle to understand exponential math.
It's just not something we're wired for.
Our ancestors didn't confront exponential processes when they were growing up on the
savanna, so it's not something that's intuitive to us and our intuitions are reliably defeated
when exponential processes come along.
That's issue number one.
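As a small worked illustration of how quickly exponentials outrun intuition (the numbers here are just arithmetic, not a model of any real outbreak):

```python
# Something that doubles every day goes from 1 to over a million in 20 doublings,
# and to over a billion after 10 more.
cases = 1
for day in range(1, 31):
    cases *= 2
    if day in (10, 20, 30):
        print(f"day {day}: {cases:,}")
# day 10: 1,024
# day 20: 1,048,576
# day 30: 1,073,741,824
```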
Issue number two with something like this is it kind of only takes one.
That ball only has to go into the net once and we're doomed, which is not the case with
mass shooters.
It's not the case with commercial pilots run amok.
It's not the case with really any threat that I can think of with the exception of nuclear
war that has the one bad outcome and game over.
That means that we need to be unbelievably serious about these defenses and we need to
do things that might on the surface seem like a tremendous overreaction so that we can be
prepared to nip anything that comes along in the bud.
I, like you, believe that's eminently doable.
I, like you, believe that the good guys outnumber the bad guys in this particular one to a degree
that probably has no precedent in history.
Even the worst, worst people I'm sure in ISIS, even Osama bin Laden, even any bad guy you
could imagine in history would be revolted by the idea of exterminating all of humanity.
That's a low bar.
The good guys completely outnumber the bad guys when it comes to this, but the asymmetry
and the fact that one catastrophic error could lead to unbelievably consequential things
is what worries me here, but I too am very optimistic.
The thing that I sometimes worry about is the fact that we haven't seen overwhelming
evidence of alien civilizations out there makes me think, well, there's a lot of explanations,
but one of them that worries me is that whenever they get smart, they just destroy themselves.
Oh, yeah.
I mean, the most fascinating, the most fascinating and chilling number, or
variable, in the Drake equation is L. At the end of it, you look out and you see 100 to 400
billion stars in the Milky Way galaxy.
We now know because of Kepler that an astonishingly high percentage of them probably have habitable
planets.
All the things that were unknowns when the Drake equation was originally written, like
how many stars have planets?
Actually, back then in the 1960s, when the Drake equation came along, the consensus amongst
astronomers was that it would be a small minority of stars that had planets,
but now we know it's substantially all of them.
How many of those stars have planets in the habitable zone?
It's kind of looking like 20%, like, oh my God.
So L, which is how long a civilization, once it reaches technological competence, continues
to last, that's the doozy.
And you're right.
It's all too plausible to think that when a civilization reaches a level of sophistication
that's probably just a decade or three in our future, the odds of it self-destructing
just start mounting astronomically, no pun intended.
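For reference, the usual form of the Drake equation, just to show where L sits among the other factors:

```latex
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
% N: communicating civilizations in the galaxy; R_*: rate of star formation;
% f_p: fraction of stars with planets; n_e: habitable planets per such system;
% f_l, f_i, f_c: fractions developing life, intelligence, and detectable technology;
% L: how long such a civilization remains detectable.
```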
My hope is that actually there are a lot of alien civilizations out there, and what they
figure out, in order to avoid the self-destruction, is that they need to turn off the thing that
used to be a feature and now became a bug, which is the desire to colonize, to
conquer more land.
So there's probably ultra-intelligent alien civilizations out there that are just chilling
on the beach with whatever your favorite alcohol beverage is, but without sort of trying to
conquer everything.
Just chilling out and maybe exploring in the realm of knowledge, but almost like appreciating
existence for its own sake versus life as a progression of conquering of other life.
Like this kind of predator prey formulation that resulted in us humans perhaps is something
we have to shed in order to survive.
I don't know.
Yeah, that is a very plausible solution to Fermi's paradox, and it's one that makes
sense.
When we look at our own lives and our own arc of technological trajectory, it's very, very
easy to imagine that in an intermediate future world of flawless VR or flawless whatever
kind of simulation that we want to inhabit, it will just simply cease to be worthwhile
to go out and expand our interstellar territory.
But if we were going out and conquering interstellar territory, it wouldn't necessarily have to
be predator or prey.
I can imagine a benign but sophisticated intelligence saying, well, we're going to go to places
that we can terraform, use a different word than terra, obviously, but that we can turn into
something habitable for our particular physiology, so long as they don't house intelligent,
sentient creatures that would suffer from our invasion.
But it is easy to see a sophisticated intelligent species evolving to the point where interstellar
travel with its incalculable expense and physical hurdles just isn't worth it compared to what
could be done where one already is.
So you talked about diagnostics that scale as a possible solution to future pandemics.
What about another possible solution, which is kind of creating a backup copy?
I'm actually now putting together a backup for myself for the first time, taking
backup of data seriously, and it forces me to take the backup of human consciousness seriously,
to try to expand throughout the solar system and colonize other planets.
Do you think that's an interesting solution, one of many, for protecting human civilization
from self-destruction, sort of humans becoming a multi-planetary species?
Oh, absolutely.
I mean, I find it electrifying, first of all, so I've got a little bit of a personal
bias when I was a kid.
I thought there was nothing cooler than rockets.
I thought there was nothing cooler than NASA.
I thought there was nothing cooler than people walking on the moon.
And as I grew up, I thought there was nothing more tragic than the fact that we went from
walking on the moon to, at best, getting to something like suborbital altitude.
And I found that more and more depressing with the passage of decades at just the colossal
expense of manned space travel and the fact that it seemed that we were unlikely to ever
get back to the moon, let alone Mars.
So I have a boundless appreciation for Elon Musk for many reasons.
But the fact that he has put Mars on the credible agenda is one of the things that I appreciate
immensely.
So there's just the sort of space nerd in me that just says, God, that's cool.
But on a more practical level, we were talking about potentially inhabiting planets that
aren't our own.
And we're thinking about a benign civilization that would do that in planetary circumstances
where we're not causing other conscious systems to suffer.
I mean, Mars is the place that's very promising.
There may be microbial life there, and I hope there is.
And if we found it, I think it would be electrifying.
But I think, ultimately, the moral judgment would be made that the continued thriving
of that microbial life is of less concern than creating a planet habitable to humans,
which would be a project on the scale of many thousands of years.
But I don't think that that would be a greatly immoral act.
And if that happened, and if Mars became home to a self-sustaining group of humans that
could survive a catastrophic mistake here on Earth, then yeah, the fact that we have
a backup colony is great.
And if we could make more, I'm sorry, not backup colony, backup copy is great.
And if we could make more and more such backup copies throughout the solar system by hollowing
out asteroids and whatever else it is, maybe even Venus, we could get rid of three quarters
of its atmosphere and turn it into a tropical paradise, I think all of that is wonderful.
Now, whether we can make the leap from that to interstellar transportation with the incredible
distances that are involved, I think that's an open question.
But I think if we ever do that, it would be more like the Pacific Ocean's channel of
human expansion than the Atlantic Ocean's.
And so what I mean by that is, when we think about European society transmitting itself
across the Atlantic, it's these big, ambitious, crazy, expensive one-shot expeditions like
Columbus's to make it across this enormous expanse, at least initially without any certainty
that there's land on the other end.
So that's kind of how I view our space program, as big, very conscious, deliberate efforts
getting from point A to point B. If you look at how Pacific Islanders transmitted their
descendants and their culture and so forth throughout Polynesia and beyond, it was much
more inhabiting a place, getting to the point where there were people who were ambitious
or unwelcome enough to decide it's time to go off island and find the next one, and pray
to find the next one.
That method of transmission didn't happen in a single swift year, but it happened over
many, many centuries.
And it was like going from this island to that island and probably for every expedition
that went out to seek another island and actually lucked out and found one.
God knows how many were lost at sea.
But that form of transmission took place over a very long period of time.
And I could see us, perhaps, going from the inner solar system to the outer solar system
to the Kuiper Belt to the Oort Cloud, and there are theories that there might be planets out there
that are not anchored to stars, kind of hop, hop, slowly transmitting ourselves until at some
point we're actually in Alpha Centauri.
But I think that kind of backup copy and transmission of our physical presence and
our culture to a diversity of extraterrestrial outposts is a really exciting idea.
I really never thought about that, because my thinking about space exploration
has been very Atlantic Ocean centric, in the sense that there would be one program with
NASA, and maybe private Elon Musk SpaceX or Jeff Bezos and so on.
But it's true that with the help of Elon Musk making it cheaper and cheaper and more
effective to create these technologies where you could go into deep space, perhaps the
way we actually colonize the solar system and expand out into the galaxy is basically
just, like, these renegade ships of weirdos, most of them, quote unquote, homemade,
that just kind of venture out into space, almost like the initial Android
model, with millions of these little ships just flying out; most of them die off
in horrible accidents, but some of them will persist, or there'll be stories of them persisting,
and over a period of decades and centuries, there'll be other attempts, almost always as
a response to the main set of efforts.
That's interesting.
Yeah.
Because you kind of think of Mars colonization as the big NASA or Elon Musk effort of a big
colony, but maybe the successful one would be, like, a decade after that, there'll be
a ship from some kid, some high school kid who gets together a large team and does something
probably illegal and launches something where they end up actually persisting quite a bit,
and from that, learning lessons that nobody ever gave permission for, but somehow they actually
flourish, and then they take that into the scale of centuries forward into the rest of space.
That's really interesting.
Yeah.
I think the giant steps are likely to be NASA-like efforts.
There is no intermediate rock, well, I guess it's a moon, but even getting to the moon ain't
that easy, between us and Mars, right?
So like the giant steps, the big hubs like the O'Hare airports of the future probably
will be very deliberate efforts, but then, I think, as space travel becomes more democratized
and more capable, you'll have this sort of
natural diffusion of people who kind of want to be off grid, or think they can make a
fortune there, the kind of mentality that drove people to San Francisco.
I mean, San Francisco was not populated as a result of a King Ferdinand and Isabella-like
effort to fund Columbus going over.
It was just a whole bunch of people making individual decisions that there's gold in
them thar hills, and I'm going to go out and get a piece of it.
So I could see that kind of diffusion.
What I can't see, and the reason that I think the Pacific model of transmission is more
likely, is I just can't see a NASA-like effort to go from Earth to Alpha Centauri.
It's just too far.
I just see lots and lots and lots of relatively tiny steps between now and there and the fact
is that there are large chunks of matter going at least a light year beyond the sun.
I mean, the Oort cloud I think extends at least a light year beyond the sun and then
maybe there are these untethered planets after that.
We won't really know till we get there and if our Oort cloud goes out a light year and
Alpha Centauri's Oort cloud goes out a light year, you've already cut in half the distance.
So who knows?
But one of the possibilities, probably the cheapest and most effective way to create interesting
interstellar spacecraft, is ones that are powered and driven by AI, and you could think,
here's where you could have high school students be able to build a sort of HAL 9000,
the modern version of that. And it's kind of interesting to think about these robots
traveling out there, perhaps, sadly, long after human civilization is gone; there will
be these intelligent robots flying throughout space, and perhaps landing on Alpha Centauri B
or any of those kinds of planets and colonizing them, sort of humanity continuing through the proliferation
of our creations, like robotic creations that have some echoes of that intelligence, hopefully
also the consciousness.
Does that make you sad the future where AGI, super intelligent or just mediocre intelligent
AI systems outlive humans?
Yeah, I guess it depends on the circumstances in which they outlive humans.
So let's take the example that you just gave.
We send out very sophisticated AGI's on simple rocket ships, relatively simple ones that
don't have to have all the life support necessary for humans and therefore they're of trivial
mass compared to a crewed ship, a generation ship and therefore they're way more likely
to happen.
So let's use that example and let's say that they travel to distant planets at a speed
that's not much faster than what a chemical rocket can achieve and so it's inevitably
tens, hundreds of thousands of years before they make landfall someplace.
So let's imagine that's going on, and meanwhile we die for reasons that have nothing to do
with those AGIs diffusing throughout the solar system, whether it's through climate
change, nuclear war, rogue synbio, whatever.
In that kind of scenario, the notion of the AGI's that we created outlasting us is very
reassuring because it says that we ended, but our descendants are out there and hopefully
some of them make landfall and create some echo of who we are.
So that's a very optimistic one.
Whereas the Terminator scenario of a super AGI arising on Earth and getting let out
of its box due to some boo-boo on the part of its creators, who do not have superintelligence,
and then deciding that for whatever reason it doesn't have any need for us to be around
and exterminating us, that makes me feel crushingly sad.
I mean, look, I was sad when my elementary school was shut down and bulldozed, even though
I hadn't been a student there for decades; the thought of my hometown getting disbanded
is even worse; the thought of my home state of Connecticut getting disbanded and absorbed
into Massachusetts is even worse; the notion of humanity ending is just crushingly, crushingly
sad to me.
So you hate goodbyes?
I hate certain goodbyes, yes.
Some goodbyes are really, really liberating, but yes.
Well, but what if the terminators have consciousness and enjoy the hell out of life as well?
They're just better at it.
Yeah.
Well, whether they'd have consciousness is a really key element.
And so there's no reason to be certain that a super intelligence would have consciousness.
We don't know that factually at all.
And so what is a very lonely outcome to me is the rise of a superintelligence that has
a certain optimization function that it's either been programmed with or that arises
emergently, that says, hey, I want to do this thing for which humans are either
an unacceptable risk, their presence is an unacceptable risk, or they're just collateral
damage.
But there is no consciousness there.
Then the idea of the light of consciousness being snuffed out by something that is very
competent but has no consciousness is really, really sad.
Yeah.
But I tend to believe that it's almost impossible to create a superintelligent agent that can
destroy human civilization without it being conscious.
It's like those are coupled, like you have to, in order to destroy humans or supersede
humans, you really have to be accepted by humans.
I think this idea that you can build systems that destroy human civilization without them
being deeply integrated into human civilization is impossible.
And for them to be integrated, they have to be human-like, not just in body and form,
but in all the things that we value as humans, one of which is consciousness.
The other one is just the ability to communicate, the other one is poetry, music, and beauty
and all those things.
They have to be all of those things.
This is what I think about.
It doesn't make me sad, but it's letting go, which is they might be just better at everything
we appreciate than us.
And that's sad, and hopefully they'll keep us around.
But I think it's a kind of goodbye to realizing that we're not the most special species on
Earth anymore.
That's still painful.
It's still painful.
And in terms of whether such a creation would have to be conscious, let's say, I'm not so
sure.
Let's imagine something that can pass the Turing test.
Something that passes the Turing test could over text-based interaction in any event successfully
mimic a very conscious intelligence on the other end, but just be completely unconscious.
So that's a possibility.
And if you take that up a radical step, which I think we can be permitted if we're thinking
about superintelligence, you could have something that could reason its way through, this is
my optimization function.
And in order to get to it, I've got to deal with these messy, somewhat illogical things
that are, in intelligence relative to me, what ants are relative to them.
I can trick them, manipulate them, whatever.
And I know the resources I need.
I need this amount of power.
I need to seize control of these manufacturing resources that are robotically operated.
I need to improve those robots with software upgrades and then ultimately mechanical upgrades,
which I can effect through X, Y, and Z.
That could still be a thing that passes the Turing test.
I don't think it's necessarily certain that that optimization function, maximizing entity,
would be conscious.
See, so this is from a very engineering perspective, because I think a lot about natural language
processing, all those kinds of things; I'm speaking to a very specific problem of, let's say, the
Turing test.
I really think that something like consciousness is required, when you say reasoning, you're
separating that from consciousness.
But I think consciousness is part of reasoning in the sense that you will not be able to
become super intelligent in the way that it's required to be part of human society without
having consciousness.
Like, I really think it's impossible to separate out the consciousness thing, and it's hard to
define consciousness when you just use that word, but the way
I think about consciousness is through its important symptoms, or maybe consequences,
one of which is the capacity to suffer.
I think AI will need to be able to suffer in order to become super intelligent, to feel
the pain, the uncertainty, the doubt.
The other part of that is not just the suffering, but the ability to understand that it too
is mortal, in the sense that it has a self-awareness about its presence in the world, understands
that it's finite, and is terrified of that finiteness.
I personally think that's a fundamental part of the human condition is this fear of death
that most of us construct an illusion around.
But I think AI would need to really have that as part of its whole essence.
Every computation, every part of the thing that does both the perception and generates
the behavior will have to have, I don't know how this is accomplished, but I believe it
has to truly be terrified of death, truly have the capacity to suffer, and from that,
something that would be recognized by us humans as consciousness would emerge.
Whether it's the illusion of consciousness, I don't know.
The point is, it looks a whole hell of a lot like consciousness to us humans, and I believe
that AI, when you ask it, will also say that it is conscious, in the full sense that we
say that we're conscious.
All of that, I think, is fully integrated.
You can't separate it. As for the idea of the paperclip maximizer that, sort of ultra-rationally, would
be able to destroy all humans because it's really good at accomplishing a simple objective
function that doesn't care about the value of humans,
it may be possible, but the number of trajectories to that are far outnumbered by the trajectories
that create something that is conscious, something that appreciates beauty and creates beautiful
things in the same way that humans can, and ultimately the sad, destructive
path for that AI would look a lot more like just better humans than like these cold machines.
I would say, of course, the cold machines that lack consciousness, the philosophical
zombies, make me sad, but what also makes me sad is just things that are far more powerful
and smart and creative than us, too, because then, in the same way that AlphaZero became
a better chess player than the best of humans, even starting with Deep Blue, but really with
AlphaZero, that makes me sad, too.
One of the most beautiful games that humans ever created, that used to be seen as a demonstration
of the intellect, which is chess, and Go in other parts of the world, have been solved
by AI. That makes me quite sad, and it feels like the progress of that is just pushing
on forward.
Oh, it makes me sad, too, and to be perfectly clear, I absolutely believe that artificial
consciousness is entirely possible, and that's not something I rule out at all.
If you could get smart enough to have a perfect map of the neural structure and the neural
states and the amount of neurotransmitters that are going between every synapse in a
particular person's mind, could you replicate that in silico at some reasonably distant
point in the future?
Absolutely, and then you'd have a consciousness.
I don't rule out the possibility of artificial consciousness in any way.
What I'm less certain about is whether consciousness is a requirement for a superintelligence pursuing
a maximizing function of some sort.
I don't feel the certitude that consciousness simply must be part of that.
You had said that for it to coexist with human society, it would need to be conscious; that could
be entirely true, but it also could just exist orthogonally to human society, and it could
also, upon attaining a superintelligence with a maximizing function, very, very, very rapidly,
because of the speed at which computing works compared to our own meat-based minds, very,
very rapidly make the decisions and calculations necessary to seize the reins of power before
we even know what's going on.
Yeah.
I mean, kind of like biological viruses do; they integrate themselves
just fine with human society.
Yeah.
But technically, without consciousness, without even being alive, technically by the standards
of a lot of biologists.
So this is a bit of a tangent, but you've talked with Sam Harris on that four-hour special
episode we mentioned, and I'm just curious to ask, because I use this meditation app,
I've been using it for the past month to meditate.
Is this something you've integrated as part of your life, meditation or fasting?
Has some of Sam Harris rubbed off on you in terms of his appreciation of meditation,
and just, kind of, from a third-person perspective, analyzing your own mind, consciousness, free
will, and so on?
You know, I have tried it three separate times in my life, really made a concerted attack
on meditation and integrating it into my life.
One of them, the most extreme, was I took a class based on the work of Jon Kabat-Zinn,
who is, in many ways, one of the founding people behind the mindful meditation movement,
that required, like, part of the class was a weekly class, and you were going to meditate
an hour a day, every day.
And having done that for, I think it was 10 weeks, it might have been 13, however long
the period of time was, at the end of it, it just didn't stick.
As soon as it was over, I did not feel that gravitational pull.
I did not feel the collapse in quality of life after wimping out on that project.
And then the most recent one was actually with Sam's app.
During the lockdown, I did make a pretty good and consistent concerted effort to listen
to his 10-minute meditation every day, and I've always fallen away from it.
And, you know, if I'm kind of interpreting why I personally dropped it,
I do believe it was ultimately because it wasn't bringing me that joy or inner peace
or better competence at being me that I was hoping to get from it.
Otherwise, I think I would have clung to it in the way that we cling to certain good
habits.
Like, I'm really good at flossing my teeth.
Not that you were going to ask, Lex, but yeah, that's one thing that defeats a
lot of people.
I'm good at that.
See, Hermann Hesse, I think, I forget which book, or maybe, I forget where, I've read
everything of his, so it's unclear where it came from, but he had this idea that anybody
who truly achieves mastery in things will learn how to meditate in some way.
So it could be that for you, the flossing of teeth is yet another like little inkling
of meditation.
Like it doesn't have to be this very particular kind of meditation.
Maybe podcasting.
You have an amazing podcast.
That could be meditation.
Or the writing process as meditation.
For me, like, there's a bunch of mechanisms which take my mind into a very particular
place that looks a whole lot like meditation.
For example, when I've been running over the past couple of years, and especially when
I listen to certain kinds of audiobooks, like I've listened to The Rise and Fall of the
Third Reich.
I've listened to a lot of, sort of, World War II history, which, because I have a lot of
family that was lost in World War II, and so much of the Soviet Union is grounded in the
suffering of World War II, somehow connects me to my history.
But also, there's some kind of purifying aspect to thinking about how cruel, but at the same
time how beautiful human nature could be.
And so you're also running, like, it clears the mind from all the concerns of the world,
and somehow it takes you to this place where you are, like, deeply appreciative to be alive,
as opposed to listening to your breath or, like, feeling your breath
and thinking about your consciousness and all those kinds of processes that Sam's app
does. Well, this does that for me, the running, and flossing may do that for you.
So maybe Hermann Hesse is onto something.
Yeah, well, I hope flossing is not my main form of expertise, although I am going to
claim a certain expertise there, and I'm going to claim it rather.
Somebody has to be the best flosser in the world.
That ain't me.
I'm just glad that I'm a consistent one.
I mean, there are a lot of things that bring me into a flow state, and I think maybe perhaps
that's one reason why meditation isn't as necessary for me.
I definitely enter a flow state when I'm writing.
I definitely enter a flow state when I'm editing.
I definitely enter a flow state when I'm mixing and mastering music.
I enter a flow state when I'm doing heavy, heavy research to either prepare for a podcast
or to also do tech investing, you know, to make myself smart in a new field that is fairly
alien to me.
I can just, the hours can just melt away while I'm reading this and watching that YouTube
lecture and going through this presentation and so forth.
So maybe because there's a lot of things that bring me into a flow state in my normal weekly
life, not daily, unfortunately, but certainly my normal weekly life, that I have less of
an urge to meditate.
You've been working with Sam's app for about a month now, you said.
Is this your first run-in with meditation?
Is it your first attempt to integrate it into your life?
Like meditation, meditation.
Yeah.
I always thought running and thinking were that for me. I listen to brown noise often; that takes my mind,
I don't know what the hell it does, but it takes my mind immediately into, like, the state
where I'm deeply focused on anything I do.
I don't know why.
So it's like your accompanying sound when you're working, really? And what's the difference
between brown and white noise?
This is a cool term I haven't heard before.
So people should look up brown noise.
They don't have to because you're about to tell them what it is.
Because you have to experience, you have to listen to it.
So I think white noise, this has to do with music.
I think there's different colors.
There's pink noise too, and I think it has to do with the frequencies; white
noise is usually less bassy, brown noise is very bassy.
So it's more like a low rumble versus a hiss, if that makes sense, so there's a deepness
to it.
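Roughly speaking, white noise has a flat spectrum while brown noise's power falls off with frequency (about 1/f^2), which is why it sounds bassier; here is a minimal sketch of generating both, with a sample rate and scaling chosen arbitrarily for illustration:

```python
# White noise: independent random samples, flat power spectrum.
# Brown noise: running sum (integral) of white noise, power ~ 1/f^2, much bassier.
import numpy as np

sample_rate = 44_100            # arbitrary choice, for illustration
n = int(sample_rate * 2.0)      # two seconds of samples

white = np.random.normal(0.0, 1.0, n)
brown = np.cumsum(white)
brown /= np.max(np.abs(brown))  # normalize to [-1, 1] for playback

print(white[:3], brown[:3])
```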
I think everyone is different.
But for me, it was when I was a research scientist at MIT, especially when there were
a lot of students around, I remember just being annoyed at the noise of people talking.
And one of my colleagues said, well, you should try listening to brown noise.
Like, it really knocks out everything, because I used to wear earplugs too, to
just see if I could block it out.
And the moment I put it on, something clicked, as if my mind was waiting all these years to
hear that sound.
Everything just focused in.
It makes me wonder how many other amazing things are out there, waiting to be discovered
for my own particular, like, biology, for my own particular brain,
so that the mind just focuses in.
It's kind of incredible.
So I see that as a kind of meditation, maybe I'm using a performance enhancing sound to
achieve that meditation.
But I've been doing that for many years now, and running and walking. And Cal
Newport was the first person that introduced me to the idea of deep work,
just put a word to the kind of thinking that's required to sort of deeply think about a problem,
especially if it's mathematical in nature.
I see that as a kind of meditation, because what you're doing is you have these constructs
in your mind that you're building on top of each other,
and there's all these distracting thoughts that keep bombarding you from all over the
place,
and the whole process is, you slowly let them kind of move past you.
And that's a meditative process.
It's very meditative.
It sounds a lot like what Sam talks about in his meditation app, which I did use to
be clear for a while, of just letting the thought go by without deranging you.
Derangement is one of Sam's favorite words, as I'm sure you know.
But brown noise, that's really intriguing.
I am going to try that as soon as this evening.
Yeah, to see if it works, but it very well might not work at all.
So I think the interesting point is, and it's the same with the fasting and the diet, is that
I long ago stopped trusting experts, or maybe taking the word of experts as the gospel truth,
and only use it as an inspiration to try something, to try something thoroughly.
So fasting was one of those things; since I first discovered it, I've been many times eating just
once a day.
So that's a 24-hour fast.
It makes me feel amazing, and at the same time, eating only meat, putting ethical concerns
aside, makes me feel amazing.
I don't know why. The point is to be an n-of-1 scientist until nutrition science becomes
a real science, where it's doing studies that deeply understand the biology underlying
all of it, and also does real, thorough, long-term studies of thousands, if not millions,
of people, versus very small studies that are kind of generalizing from very noisy data
and all those kinds of things where you can't control all the elements.
Particularly because our own personal metabolisms are highly variable among us.
So there are going to be some people like if brown noise is a game changer for 7% of
people, it's 93% odds that I'm not one of them, but there's certainly every reason in
the world to test it out.
So I'm intrigued by the fasting.
I, like you, well, I assume like you, don't have any problem going to one meal a day,
and I often do that inadvertently.
And I've never done it methodically, like, I've never said I'm going to do this
for 15 days; maybe I should.
And how many days in a row of the one meal a day did you find brought noticeable impact
to you?
Was it after three days of it?
Was it months of it?
Like what was it?
Well, the noticeable impact is day one, because I eat a very low carb diet.
So the hunger wasn't the hugest issue.
Like there wasn't a painful hunger like wanting to eat.
So I was already kind of primed for it.
And the benefit, a lot of people that do intermittent fasting,
that's only, like, 16 hours of fasting, get this benefit too, which is the focus.
There's a clarity of thought.
If my brain was a runner, it felt like I'm running on a track when I'm fasting versus
running in quicksand.
Like it's much crisper.
And is this your first 72 hour fast?
This is the first time doing 72 hours.
And that's a different thing, but similar.
Like I'm going up and down in terms of hunger and the focus is really crisp.
The thing I'm noticing most of all, to be honest, is how much eating, even when it's
once a day or twice a day, is a big part of my life.
Like I almost feel like I have way more time in my life, right?
And it's not so much about the eating, but, like, I don't have to plan my day around it.
Like today, I don't have any eating to do.
It does free up hours, and there's no cleaning up after eating or provisioning the food.
Or even, like, thinking about it, it's not a thing.
So when you think about what you're going to do tonight, I think I'm realizing that
as opposed to thinking, you know, I'm going to work on this problem or I'm going to go
on this walk or I'm going to call this person, I often think I'm going to eat this thing.
You allow dinner as a kind of, you know, when people talk about, like, the weather or something
like that, it's almost like a generic thought you allow yourself to have, because it's the
lazy thought.
And I don't have the opportunity to have that thought, because I'm not eating.
So now I get to think about like the things I'm actually going to do tonight that are
more complicated than the eating process.
That's been the most noticeable thing, to be honest.
And then there's people that have written me that have done seven-day fasts, and there's
a few people that have written me, and I've heard of this, doing a 30-day fast.
And it's interesting; I don't know what the health benefits are, necessarily.
What that shows me is how adaptable the human body is.
And that's incredible.
And that's something really important to remember when we think about how to live life because
the body adapts.
Yeah.
I mean, we sure couldn't go 30 days without water.
That's right.
But food, yeah, it's been done.
It's demonstrably possible.
You ever read, Franz Kafka has a great short story called The Hunger Artist?
Yeah, I love that.
I mean, that's a great story.
You know, before I started fasting, I read that story and I admired the beauty of that,
the artistry of that actual hunger artist.
It's like madness, but it also felt like a little bit of genius.
I actually have to reread it.
You know what, that's what I'm going to do tonight is I'm going to read it because I'm
doing the fasting.
Because you're in the midst of it.
Yeah.
Be very contextual.
I haven't read it since high school and I'd love to read it again.
I love his work.
So maybe I'll read it tonight too.
And part of the reason, sort of, is that here in Texas, people have been so friendly that
I've been nonstop eating, like, brisket with incredible people, a lot of whiskey as well.
So I gained quite a bit of weight, which I'm embracing, it's okay.
But I am also aware as I'm fasting that like I have a lot of fat to run on.
Like I have a lot of like natural resources on my body.
You've got reserves.
Reserves.
That's a good way to put it.
Yeah.
And that's really cool.
You know, there's, like, this whole thing, this biology works well.
Like, I can go a long time because of the long-term investing, in terms of brisket, that I've been
doing in the weeks before.
It was all training.
All prep work.
All prep work.
Yeah.
So okay.
You open a bunch of doors, one of which is music.
So I got to walk in, at least for a brief moment.
I love guitar.
I love music.
You founded a music company, but you're also a musician yourself.
Let me ask the big ridiculous question first.
What's the greatest song of all time?
Greatest song of all time.
Okay.
Wow.
It's going to obviously vary dramatically from genre to genre.
So like you, I like guitar, perhaps like you, although I've dabbled in inhaling every
genre of music that I can almost practically imagine, I keep coming back to, you know,
the sound of bass, guitar, drum, keyboards, voice.
I love that style of music.
And added to it, I think a lot of really cool electronic production makes something that's
really, really new and hybrid-y and awesome.
But you know, in that kind of guitar-based rock, I think I've got to go with Won't Get
Fooled Again by The Who.
It is such an epic song.
It's got so much grandeur to it.
It uses the synthesizers that were available at the time, this got to be, I think, 1972-73,
which are very, very primitive to our ears, but uses them in this hypnotic and beautiful
way that I can't imagine somebody with the greatest synth array conceivable by today's
technology could do a better job of in the context of that song.
And it's, you know, almost operatic.
So I would say in that genre, the genre of, you know, rock, that would be my nomination.
I'm totally, in my brain, Pinball Wizard is overriding everything else by The Who.
So, like, I can't even imagine the song.
So I would say, ironically, with Pinball Wizard, so that came from the movie Tommy.
And in the movie Tommy, the rival of Tommy, the reigning pinball champ, was Elton John.
And so there are a couple of versions of Pinball Wizard out there.
One sung by Roger Daltrey of The Who, which a purist would say, hey, that's the real Pinball
Wizard.
But the version that is sung by Elton John in the movie, which is available to those
who are ambitious and want to dig for it, that's even better in my mind.
Yeah, the covers.
And I, for myself, I was thinking, what is the song for me, if I were asked that question?
I think that changes day to day too, I was realizing that.
But for me, somebody who values lyrics as well and the emotion
in the song, by the way, Hallelujah by Leonard Cohen was a close one.
But the number one is Johnny Cash's cover of Hurt; there's something so powerful
about that song, about that cover, about that performance.
Maybe another one is the cover of Sound of Silence.
Maybe there's something about covers for me.
So whose cover of Sound of Silence, because Simon and Garfunkel, I think, did the original recording of that,
right?
So which cover is it?
There's a cover by Disturbed, it's a metal band, which is so interesting, because I'm
really not into that kind of metal, but he does a pure vocal performance.
So he's not doing a metal performance.
I would say it's one of the greatest, people should see it, it's got, like, 400 million views or something
like that.
Wow.
It's probably the greatest live vocal performance I've ever heard, Disturbed covering Sound
of Silence.
I'll listen to it as soon as I get home.
And that song came to life for me in a way that the Simon and Garfunkel version never did.
For me, with Simon and Garfunkel, there's not a pain, there's not an anger,
there's not, like, power to their performance.
It's almost like this melancholy, I don't know.
Well, there's a lot, I guess there's a lot of beauty to it, like objectively beautiful.
And I think, I never thought of this until now, but I think if you put entirely different
lyrics on top of it, unless they were joyous, which would be weird, it wouldn't necessarily
lose that much.
It's just a beauty in the harmonizing, it's soft, and you're right, it's not dripping
with emotion.
The vocal performance is not dripping with emotion, it's dripping with technical harmonizing
brilliance and beauty.
Now if you compare that to the Disturbed cover or Johnny Cash's Hurt cover, when you
walk away, there's something haunting, it stays with you for a long time.
There's certain performances that will just stay with you, and if you watch
people respond to that, and that's certainly how I felt when I listened to the Disturbed
performance or Johnny Cash's Hurt, there's a response where you just sit there with
your mouth open, kind of paralyzed by it somehow.
And I think that's what makes for a great song, to where you're just like, it's not
that you're singing along or having fun, that's another way a song can be great, but where
you're just like, what, this is, you're in awe.
If we go to listen.com and that whole fascinating era of music in the 90s, transitioning into
the aughts, I remember those days, the Napster days, when piracy, from my perspective, allegedly
ruled the land, what do you make of that whole era?
What are the big, what was first of all your experiences of that era and what were the
big takeaways in terms of piracy, in terms of what it takes to build a company that succeeds
in that kind of digital space, in terms of music, but in terms of anything creative?
Well, so for those who don't remember, which is going to be most folks, listen.com created
a service called Rhapsody, which is much, much more recognizable to folks, because Rhapsody
became a pretty big name, for reasons I'll get into in a second.
So for people who don't know their early online music history, we were the first company,
so I founded Listen, I was the only founder, and Rhapsody was, we were the first service
to get full catalog licenses from all the major music labels in order to distribute their
music online.
And we specifically did it through a mechanism, which at the time struck people as exotic
and bizarre and kind of incomprehensible, which was unlimited on demand streaming, which
of course now, it's a model that's been appropriated by Spotify and Apple and many, many others.
So we were a pioneer on that front.
What was really, really, really hard about doing business in those days was the reaction
of the music labels to piracy, which was about 180 degrees opposite of what the reaction,
quote unquote, should have been from the standpoint of preserving their business from piracy.
So Napster came along and was a service that enabled people to get near unlimited access
to most songs.
I mean, truly obscure things could be very hard to find on Napster, but most songs, with
a relatively simple, one-click ability to download those songs and have the MP3s on
their hard drives.
But there was a lot that was very messy about the Napster experience.
You might download a really god-awful recording of that song.
You may download a recording that actually wasn't that song with some prankster putting
it up to sort of mess with people.
You could struggle to find the song that you're looking for.
You could end up finding yourself connected, it was peer-to-peer,
you might randomly find yourself connected to somebody in Bulgaria who doesn't have a
very good internet connection.
You might wait 19 minutes only for it to snap, et cetera, et cetera.
And our argument to, well, actually, let's start with how that hit the music labels.
The music labels had been in a very, very comfortable position for many, many decades
of essentially being the monopoly providers of a certain subset of artists; any given
label was a monopoly provider of the artists and the recordings that they owned, and they
could sell them at what turned out to be tremendously favorable rates.
In the late era of the CD, you were talking close to $20 for a compact disc that might
have one song that you were crazy about and simply needed to own, that might actually be
glued to 17 other songs that you found to be sheer crap.
And so the music industry had used the fact that it had this unbelievable leverage and
profound pricing power to really get music lovers to the point that they felt very, very
misused by the entire situation.
Now along comes Napster, and music sales start getting gutted with extreme rapidity.
And the reaction of the music industry to that was one of shock and absolute fury, which
is understandable.
You know, I mean, industries do get gutted all the time, but I struggle to think of
an analog of an industry that got gutted that rapidly.
I mean, we could say that passenger train service certainly got gutted by airlines, but that
was a process that took place over decades and decades and decades.
It wasn't something that, you know, really started showing up in the numbers in
a single-digit number of months and started looking like an existential threat within
a year or two.
So the music industry is quite understandably in a state of shock and fury.
I don't blame them for that.
But then their reaction was catastrophic, both for themselves and almost for people
like us who were trying to do, you know, the cowboy in the white hat thing.
So our response to the music industry was, look, what do you need to do to fight piracy?
You can't put the genie back in the bottle.
You can't switch off the internet.
Even if you all shut your eyes and wish very, very, very hard, the internet is not going
away, and these peer-to-peer technologies are genies out of the bottle, and for God's sake,
whatever you do, don't shut down Napster, because if you do, suddenly that technology
is going to splinter into 30 different nodes that you'll never, ever be able to shut off.
What we suggested to them is, look, what you want to do is create a massively better
experience than piracy, something that's way better, that you sell at a completely reasonable
price, and this is what it is.
Don't just give people access to that very limited number of songs that they happen to
have acquired and paid for or pirated and have on their hard drive.
Give them access to all of the music in the world for a simple low price and obviously
that doesn't sound like a crazy suggestion, I don't think, to anybody's ears today, because
that is how the majority of music is now being
consumed online.
But in doing that, you're going to create a much, much better option than this kind of
crappy, kind of rickety, kind of buggy process of acquiring MP3s.
Now, unfortunately, the music industry was so angry about Napster and so forth that for
essentially three and a half years, they folded their arms, stamped their feet and boycotted
the internet.
So they basically gave people who were fervently passionate about music and were digitally modern,
they gave them basically one choice.
If you want to have access to digital music, we, the music industry, insist that you steal
it because we are not going to sell it to you.
So what that did is it made an entire generation of people morally comfortable with swiping
the music because they felt quite pragmatically, well, they're not giving me any choice here.
It's like a 20-year-old violating the 21 drinking age. If they do that, they're not going to feel like felons. They're going to be like, this is an unreasonable law and I'm skirting it, right? So they made a whole generation of people morally comfortable with swiping music, but also technically adept at it.
And when they did shut down Napster, and even trickier and tweakier tools like Kazaa and so forth came along, people just figured out how to do it.
So by the time they finally, grudgingly (it took years) allowed us to release this experience that we were quite convinced would be better than piracy, this enormous hole had been dug, where lots of people said: music is a thing that is free, that's morally okay, and I know how to get it. And so streaming took many, many, many more years to take off and become the gargantuan thing, the juggernaut that it is today, than it would have if they'd pivoted to "let's sell a better experience" instead of demanding that people who want digital music steal it.
What lessons do we draw from that? Because we're probably in the midst of living through a bunch of similar situations in different domains currently, we just don't know.
There are a lot of things in this world that are really painful. I mean, I don't know if you can draw perfect parallels, but take fiat money versus cryptocurrency: there are currently a lot of people in power who are very skeptical about cryptocurrency. That's changing, but it's arguable it's changing way too slowly. There are a lot of people making that argument, that there should be a complete switch to that, like Coinbase and all this stuff.
There are a lot of other domains where, if you pivot now, you're going to win big, but you don't pivot because you're stubborn. And so, I mean, is this just the way that companies are? The company succeeds initially, and then it grows, and there's a huge number of employees and managers who don't have the guts or the institutional mechanisms to do the pivot. Is that just the way of companies?
Well, I think what happens, to use the case of the music industry, is that there was an economic model that put food on the table and paid for marble lobbies and seven and even eight figure executive salaries for many, many decades, which was the physical collection of music. And then you start talking about something like unlimited streaming, and it seems so ephemeral, like such a long shot, that people start worrying about cannibalizing their own business.
And they lose sight of the fact that something illicit is cannibalizing their business at
an extraordinarily fast rate.
And so if they don't do it themselves, they're doomed.
I mean, we used to put slides in front of these folks, this is really funny, where we said, okay, let's assume Rhapsody is $9.99 a month, and multiply that by 12 months. So it's about $120 a year from the budget of a music lover. And then we were also able to get reasonably accurate statistics that showed how many CDs per year the average person who bothered to collect music (which was not all people) actually bought, and it was overwhelmingly clear that the average CD buyer spends a hell of a lot less than $120 a year on music.
So this is a revenue expansion, blah, blah, blah. But all they could think of, and I'm not saying this in a pejorative or patronizing way, I don't blame them, they'd grown up in this environment for decades, all they could think of was the incredible margins
that they had on a CD, and they would say, well, by the mechanism that you guys are proposing, for the CD that I'm selling for $17.99, somebody would need to stream those songs 1,799 times (back then the record labels got paid about a penny a play; it's less than that now). It's never going to happen.
So they were just stuck in that model, but it was like, no dude, they're going to spend money on all this other stuff.
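To make that back-of-the-envelope math concrete, here is a minimal sketch in Python using the figures mentioned in the conversation; the penny-a-play rate and the prices are rough approximations from the discussion, not actual label contract terms.

```python
# Back-of-the-envelope comparison of the streaming-versus-CD numbers discussed above.
# The figures come from the conversation and are rough approximations.

monthly_subscription = 9.99          # proposed Rhapsody price per month
annual_subscription = monthly_subscription * 12
print(f"Subscription revenue per listener per year: ${annual_subscription:.2f}")    # ~$119.88

cd_price = 17.99                     # retail price of a single CD
per_stream_payout = 0.01             # "a penny a play" paid to labels back then
streams_to_match_one_cd = cd_price / per_stream_payout
print(f"Streams needed to match one CD's revenue: {streams_to_match_one_cd:,.0f}")  # 1,799

# The label executives fixated on the 1,799-streams-per-CD figure; the counter-argument
# was that ~$120/year per subscriber exceeded what the average CD buyer actually spent.
```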
So I think people get very hung up on that.
I mean, another example: the taxi industry was not monolithic like the music labels. There was a whole bunch of fleets in a whole bunch of cities, very, very fragmented. It's an imperfect analogy, but nonetheless, imagine if the taxi industry writ large, upon seeing
Uber said, oh my God, people want to be able to hail things easily, cheaply, they don't
want to mess with cash, they want to know how many minutes it's going to be, they want
to know the fare in advance and they want a much bigger fleet than what we've got.
If the taxi industry had rolled out something like that with the branding of yellow taxis
universally known and kind of loved by Americans and expanded their fleet in a necessary manner,
I don't think Uber or Lyft ever would have gotten a foothold.
But the problem there was that the real economics in the taxi industry weren't in the fares; they were in the scarcity of medallions.
And so the taxi fleets in many cases owned gazillions of medallions whose value came
from their very scarcity, so they simply couldn't pivot to that.
So I think you end up having these vested interests, with economics that aren't necessarily visible to outsiders, who get very, very reluctant to disrupt their own model, which is why disruption so frequently ends up coming from the outside.
So you know what it takes to build a successful startup, but you're also an investor in a
lot of successful startups, let me ask for advice.
What do you think it takes to build a successful startup by way of advice?
Well I think it starts, I mean everything starts and even ends with the founder.
And so I think it's really, really important to look at the founder's motivations and their
sophistication about what they're doing.
In almost all cases that I'm familiar with and have thought hard about, you've had a
founder who was deeply, deeply inculcated in the domain of technology that they were taking
on.
Now what's interesting about that is you could say, no wait, how is that possible?
Because there's so many young founders.
When you look at young founders, they're generally coming out of very nascent emerging fields
of technology where simply being present and accounted for and engaged in the community
for a period of even months is enough time to make them very, very deeply inculcated.
I mean, you look at Marc Andreessen and Netscape. Marc had been doing visual web browsers, when Netscape was founded, for what, a year and a half? But he'd created the first one, you know, in Mosaic, when he was an undergrad.
And the commercial internet was pre-nascent in 1994 when Netscape was founded.
So there's somebody who's very, very deep in their domain, Mark Zuckerberg also, social
networking, very deep in his domain even though it was nascent at the time, lots of people
doing crypto stuff.
I mean, you know, 10 years ago, even seven or eight years ago, by being a really, really
vehement and engaged participant in the crypto ecosystem, you could be an expert in that.
You look, however, at more established industries. Take Salesforce.com. Sales force automation was a pretty mature field when Salesforce got started. And who was the executive and founder? Marc Benioff, who had spent 13 years at Oracle and was an investor in Siebel Systems, which ended up being Salesforce's main competition.
So in more established fields, you need the entrepreneur to be very, very deep in the technology and the culture of the space, because you need that entrepreneur, that founder, to have just an unbelievably accurate, intuitive sense for where the puck is going, right?
And that only comes from being very deep.
So that is sort of factor number one.
And the next thing is that that founder needs to be charismatic and or credible or ideally
both in exactly the right ways to be able to attract a team that is bought into that
vision and is bought into that founder's intuitions being correct.
And not just the team, obviously, but also the investors.
So it takes a certain personality type to pull that off.
And the next thing, and I'm still talking about the founder, is a relentlessness, and indeed a monomania, to put this above things that perhaps rationally should supersede it for a period of time, and to relentlessly pivot when pivoting is called for, and it's always called for.
I mean, think of even very successful companies. How many times did Facebook pivot? You know, the news feed was something that was completely alien to the original version of Facebook and became foundationally important. How many times did Google pivot? How many times has Apple pivoted?
You know, that founder energy and DNA, the DNA that's been inculcated in the company when the founder moves on, has to have that relentlessness and that ability to pivot and pivot and pivot without, you know, being worried about sacred cows.
And then the last thing I'll say about the founder before I get to the rest of the team, and that'll be mercifully brief, is that the founder has to be obviously a really great hirer, but just as important, a very good firer, and firing is a horrific experience for both people involved in it.
It is a wrenching emotional experience and being good at realizing when this particular
person is damaging the interests of the company and the team and the shareholders and, you
know, having the intestinal fortitude to have that conversation and make it happen is something
that most people don't have in them.
And it's something that needs to be developed in most people, or maybe some people have
it naturally.
But without that ability, an A-plus organization will drop to the B-minus range very, very quickly. And so that's what needs to be present in the founder.
Can I just say?
Sure.
How damn good you are, Rob.
That was brilliant.
The one thing that was kind of surprising to me is the deep technical knowledge, because, the way you expressed it, that allows you to be really honest about the capabilities, about what's possible. Of course, you're often trying to do the impossible. But in order to do the quote-unquote impossible, you have to be honest about what is actually possible.
And it doesn't necessarily have to be the technical competence.
It's got to be, in my view, just a complete immersion in that emerging market.
And so I can imagine there are a couple of people out there who have started really good crypto projects who themselves aren't writing the code.
But they're immersed in the culture and through the culture and a deep understanding of what's
happening and what's not happening, they can get a good intuition of what's possible.
But the very first hire, I mean, a great way to solve that is to have a technical co-founder
and dual founder companies have become extremely common for that reason.
And if you're not doing that and you're not the technical person, but you are the founder,
you got to be really great at hiring a very damn good technical person very, very fast.
Can I on the founder ask you, is it possible to do this alone?
There are so many people giving advice saying that it's impossible to do the first few steps.
Not impossible, but much more difficult to do it alone.
If we were to take the journey, say, especially in the software world where there's not significant
investment required to build something up, is it possible to go to a prototype to something
that essentially works and already has a huge number of customers alone?
Sure.
There are lots and lots of lone founder companies out there that have made an incredible difference.
I mean, I'm certainly not putting Rhapsody in the league of Spotify. We were too early to be Spotify, but we did an awful lot of innovation, and then after the company sold and ended up in the hands of RealNetworks and MTV, it got to millions of subs.
I was a lone founder, and I studied Arabic and Middle Eastern history undergrad.
So I definitely wasn't very, very technical, but yeah, lone founders can absolutely work, and the advantage of a lone founder is you don't have the catastrophic potential of a falling out between founders. I mean, two founders who fall out with each other badly can rip a company to shreds, because they both have an enormous amount of equity and an enormous amount of power in the capital structure as a result of that.
They both have an enormous amount of moral authority with the team as a result of each
having that founder role, and I have witnessed over the years many, many situations in which
companies have been shredded or have suffered near fatal blows because of a falling out
between founders.
And the more founders you add, the more risky that becomes.
I mean, you never say never, but multiple founders beyond two is such an unstable and potentially treacherous situation that I would almost never recommend going beyond two.
But I do see value in the non-technical sort of business and market and outside minded
founder teaming up with the technical founder.
There is a lot of merit to that, but there's a lot of danger in that, lest those two blow
apart.
Was it lonely for you?
Unbelievably, and that's the drawback.
I mean, if you're a lone founder, there is no other person that you can sit down with
and tackle problems and talk them through who has precisely or nearly precisely your alignment
of interests.
Your most trusted board member is likely an investor, and therefore at the end of the
day has the interest of preferred stock in mind, not common stock.
Your most trusted VP who might own a very significant stake in the company doesn't own
anywhere near your stake in the company.
And so their long-term interests may well be in getting the right level of experience
and credibility necessary to peel off and start their own company, or their interests
might be aligned with jumping ship and setting up with a different company, whether it's
a rival or one in a completely different space.
So yeah, being a lone founder is a spectacularly lonely thing, and that's a major downside
to it.
What about mentorship?
Because you're a mentor to a lot of people.
Can you find an alleviation to that loneliness, in the space of ideas, with a good mentor?
With a good mentor or like a mentor who's mentoring you?
Yeah.
Yeah, you can.
A great deal.
Particularly if it's somebody who's been through this very process and has navigated
it successfully and cares enough about you and your well-being to give you beautifully
unvarnished advice, that can be a huge, huge thing.
That can help a great deal.
And I had a board member who was not an investor, who basically played that role for me to a
great degree.
He came in maybe halfway through the company's history, though.
I needed that the most in the very earliest days.
Yeah, the loneliness, that's the whole journey of life.
We're always alone together.
It pays to embrace that.
You were saying that there might be something outside of the founder, the part you were promising to be brief on.
Yeah.
Okay.
So we talked about the founder.
You were asking what makes a great startup.
Yes.
And a great founder is thing number one, but then thing number two, and it's ginormous, is a great team.
And so I said so much about the founder because one hopes or one believes that a founder who
is a great hirer is going to be hiring people in charge of critical functions like engineering
and marketing and biz dev and sales and so forth, who themselves are great hirers.
So what needs to radiate from the founder into the team might be a little bit different from what's in the genetic code of the founder.
The team needs to be fully bought in to the intuitions and the vision of the founder.
Great.
We've got that.
But the team needs to have a slightly different thing, which is that its obsession is 99% execution: it needs to relentlessly hit the milestones, hit the objectives, hit the quarterly goals. The other 1% is vision. You don't want to lose that, but you want execution machines, people who have a demonstrated ability and a demonstrated focus on: yeah, I go from point to point to point, I try to beat and raise expectations relentlessly, never fall short, and I not only follow the path but blaze the trail as well.
I mean, a good founder is going to trust that VP of sales to have a better sense of what it takes to build out that organization and what the milestones should be, and it's going to be kind of a dialogue amongst those at the top, but execution obsession in the team is the next thing.
Yeah.
There's some sense where the founder, you talk about sort of the space of ideas, like first principles thinking, asking big, difficult questions about future trajectories, or having a big vision and big-picture dreams. It feels like you can almost be a dreamer as the founder, in the space of leadership, but when it gets to the ground floor, there has to be execution, there has to be hitting deadlines. Sometimes those are in tension. There's something about dreams, not dreams exactly, but sort of ambitious vision, that's in tension with the pragmatic nature of execution. Those have to be, I suppose, coupled: the vision in the leader and the execution in, for the software world, the programmer or the designer.
Absolutely.
Amongst many other things, you're an incredible conversationalist, a podcaster, you host the
podcast called After On.
I mean, there's a million questions I want to ask you here, but one at the highest level,
what do you think makes for a great conversation?
I would say one of two things, and ideally both. One is something that is beautifully architected, whether it's done deliberately, methodically, and willfully, as when I do it, or whether that just emerges from the conversation. Something that's beautifully architected can create something that's incredibly powerful and memorable. The other is something where there's just extraordinary chemistry.
With All-In, or, to go way back, you might remember the NPR show Car Talk. I couldn't care less about auto mechanics myself, but I loved that show because the banter between those two guys was just beyond, without any parallel. And some edgy podcasts like Red Scare are just really entertaining to me because the banter between the women on that show is just so good, and All-In and that kind of thing.
I think it's a combination of the arc and the chemistry.
I think because the arc can be so important, that's why very, very highly produced podcasts work, like This American Life (obviously a radio show, but I think of it as a podcast because that's how I consumed it), or Criminal, or a lot of what Wondery does, and so forth.
That is real documentary making, and that requires a big team and a big budget relative
to the kinds of things you and I do, but nonetheless, then you got that arc, and that can be really,
really compelling.
But if we go back to conversation, I think it's a combination of structure and chemistry.
Yeah, and I've actually personally lost interest; I used to love This American Life. For some reason, because it lacks the possibility of magic, it's engineered magic.
I've fallen off of it myself as well.
When I fell madly in love with it during the aughts, it was the only thing going.
They were really smart to adopt podcasting as a distribution mechanism early.
Yeah, I think that maybe there's a little bit less magic there now because I think they
have agendas other than necessarily just delighting their listeners with quirky stories, which
I think is what it was all about back in the day and some other things.
Is there like a memorable conversation that you've had on the podcast, whether it was
because it was wild and fun, or one that was exceptionally challenging, maybe challenging
to prepare for, that kind of thing?
Is there something that stands out in your mind that you can draw an insight from?
Yeah, I mean, this in no way diminishes the episodes that will not be the answer to these two questions.
But an example of something that was really, really challenging to prepare for was George
Church.
So as I'm sure you know and as I'm sure many of your listeners know, he is one of the absolute
leading lights in the field of synthetic biology.
He's also unbelievably prolific.
His lab is large, and all kinds of efforts have spun out of it.
And what I wanted to make my George Church episode about was, first of all, grounding people in what this thing called synbio is. And that required me to learn a hell of a lot more about synbio than I knew going into it.
So there was just this very broad, I mean, I knew much more than the average person going
into that episode, but there was this incredible breadth of grounding that I needed to give
myself in the domain.
And then George does so many interesting things, there are so many interesting things emerging from his lab, that, you know, I had a really good dialogue with him; he was a great guide going into it.
Winnowing it down to the three to four that I really wanted us to focus on to create a
sense of wonder and magic in the listener of what could be possible from this very broad
spectrum domain, that was a doozy of a challenge.
That was a tough, tough, tough one to prepare for.
Now, in terms of something that was just wild and fun, unexpected, I mean, by the time we
sat down to interview, I knew where we were going to go.
But just in terms of the idea space, Don Hoffman.
Oh, wow.
Yeah.
So Don Hoffman, again, as some listeners probably know, because I think I was the first podcaster to interview him, and I'm sure some of your listeners are familiar with him, has this unbelievably contrarian take on the nature of reality. But it is contrarian in a way that all the ideas are highly internally consistent and snap together in a way that's just delightful.
And it seems as radically violating of our intuitions, and as radically violating of the probable nature of reality, as anything that one can encounter. But an analogy that he uses, which is very powerful, is: what intuition could possibly be more powerful than the notion that there is a single unitary direction called down?
And we're on this big, flat thing for which there is a thing called down.
And we all know, I mean, that's the most intuitive thing that one could probably think of.
And we all know that that ain't true.
So my conversation with Don Hoffman is just wild and full of plot twists and interesting
stuff.
And the interesting thing about the wildness of his ideas, to me at least as a listener, is that it's coupled with him being a good listener; he empathizes with the people who challenge his ideas.
Like, what's a better way to phrase that?
He is welcoming of challenge in a way that creates a really fun conversation.
Oh, totally.
Yeah.
And he loves a parry or a jab, whatever the word is, at his argument; he honors it. He's a very, very gentle and non-combative soul, but then he is very good and takes
great evident joy in responding to that in a way that expands your understanding of his
thinking.
Let me, as a small tangent, tie together a previous part of our conversation, about Listen.com and streaming and Spotify, with the world of podcasting.
So we've been talking about this magical medium of podcasting.
I have a lot of friends at Spotify, in high positions at Spotify as well. I worry about Spotify and podcasting, and the future of podcasting in general, that it moves podcasting into the place of maybe walled gardens of sorts.
Since you've had a foot in both worlds, have a foot in both worlds, do you worry as well
about the future of podcasting?
Yeah.
I think walled gardens are really toxic to the medium that they start balkanizing.
So to take an example, I'll take two examples.
With music, it was a very, very big deal that at Rhapsody, we were the first company to get
full catalog licenses from all, back then there were five major music labels and also
hundreds and hundreds of indies because you needed to present the listener with a sense
that basically everything is there and there is essentially no friction to discovering that
which is new.
You can wander this realm and all you really need is a good map, whether it is something
that the editorial team assembled or a good algorithm or whatever it is, but a good map
to wander this domain.
When you start walling things off, A, you undermine the joy of friction-free discovery
which is an incredibly valuable thing to deliver to your customer both from a business standpoint
and simply from a humanistic standpoint of do you want to bring delight to people?
But it also creates an incredible opening vector for piracy.
And so something that's very different from the Rhapsody slash Spotify slash et cetera
like experience is what we have now in video.
Like wow, is that show on Hulu?
Is it on Netflix?
Is it on something like IFC channel?
Is it on Discovery Plus?
Is it here?
Is it there?
And the more frustration and toe-stubbing that people encounter when they are seeking something, when they're already paying a very respectable amount of money per month to have access to content and they can't find it, the more people are going to be driven to piracy solutions, like, to hell with it.
Never know where I'm going to find something.
I never know what it's going to cost.
Oftentimes really interesting things are simply unavailable.
It surprises me, the number of times I've been looking for things that I don't even think are that obscure, and it just says: not available in your geography, period, mister, right?
So I think that that's a mistake.
And then the other thing is, for podcasters and lovers of podcasting, we should want to resist this walled garden thing because, A, it smothers or eradicates this friction-free discovery unless you want to sign up for lots of different services, and it also dims the voice of somebody who might be able to have a far, far, far bigger impact by reaching far more neurons with their ideas.
I'm going to use an example from, I guess it was probably the nineties, or maybe it was the aughts: Howard Stern, who had the biggest megaphone, or maybe the second biggest megaphone after Oprah, in popular culture, because he was syndicated on hundreds and hundreds and hundreds of radio stations at a time when terrestrial broadcast was the main thing people listened to in their cars, no longer, obviously.
But when he decided to go over to satellite radio, I can't remember if it was XM or Sirius, maybe they'd already merged at that point, he made a financial calculation, totally his right to do it, because they were offering him a nine-figure sum to do that.
But because not a lot of people were subscribing to satellite radio at that point, his audience probably collapsed by, I wouldn't be surprised, as much as 95%.
And so the influence that he had on the culture and his ability to sort of shape conversation
and so forth just got muted.
Yeah.
And also there's a certain sense, especially in modern times, where the walled gardens naturally lead to, I don't know if there's a term for it, people who are not creatives starting to have power over the creatives.
Right.
And even if they don't stifle it, if they're providing incentives within the platform to
shape, shift, or even completely mutate or distort the show, I mean, imagine somebody
has got a reasonably interesting idea for a podcast and they get signed up with, let's
say Spotify.
And then Spotify is going to give them financing to get the things spun up.
And that's great, and Spotify is going to give them a certain amount of really powerful
placement within the visual field of listeners, but Spotify has conditions for that.
They say, look, we think that your podcast will be much more successful if you dumb
it down about 60%.
If you add some silly dirty jokes, if you do this, you do that, and suddenly the person
who is dependent upon Spotify for permission to come into existence and is really dependent,
really wants to please them to get that money in, to get that placement, really wants to
be successful, now all of a sudden you're having a dialogue between a complete non-creative,
some marketing data analytic person at Spotify and a creative that's going to shape what
that show is.
So that could be much more common, and ultimately have, in the aggregate, an even bigger impact than the cancellation, let's say, of somebody who says the wrong word or voices the wrong idea.
That's kind of what you have with film and TV: so much influence is exerted over the storylines and the plots and the character arcs and all kinds of things by executives who are completely alien to the experience and the skill set of being a showrunner in television or being a director in film. It's meant to be, like, we can't piss off the Chinese market here, or we can't say that, or we need to have cast members that have precisely these demographics reflected, or whatever it is. And obviously, despite that, extraordinary TV shows, at least, are now being made. In terms of film, I think the quality of the average, let's say, American film coming out of a major studio has, in my view, nosedived over the past decade, as kind of everything's got to be a superhero franchise.
But great stuff gets made despite that, but I have to assume that in some cases, at least
in perhaps many cases, greater stuff would be made if there was less interference from
non-creative executives.
The flip side of that, though, and this was the pitch of Spotify, because I've heard their pitch, is Netflix. From everybody I've spoken with about Netflix, they actually empower the creator.
They do.
I don't know what the heck they do, but they do a good job of giving creators, even the crazy ones, like Tim Dillon, like Joe Rogan, the comedians, freedom to be their crazy selves.
And the result is some of the greatest television, some of the greatest cinema, whatever you
call it, ever made.
True.
Right?
And I don't know what the heck they're doing.
It's a relative thing.
From what I understand, it's a relative thing.
They're interfering far, far, far less than NBC or AMC would have interfered.
So it's a relative thing.
And obviously, they're the ones writing the checks, and they're the ones giving the platform, so they have every right to exert their own influence, obviously.
But my understanding is that they're relatively way more hands-off, and that has had a demonstrable
effect because I agree.
Some of the greatest produced video content of all time, an inordinate percentage of it, has come out of Netflix in just a few years, when the history of cinema goes back many, many decades.
And Spotify wants to be that for podcasting, and I hope they do become that for podcasting,
but I'm wearing my skeptical goggles or skeptical hat, whatever the heck it is, because it's
not easy to do.
And it requires letting go of power, giving power to the creatives.
It requires pivoting, which large companies, even as innovative as Spotify is, it's still
now a large company, pivoting into a whole new space is very tricky and difficult.
So I'm skeptical, but hopeful.
What advice would you give to a young person today about life, about career?
We talked about startups, we talked about music, we talked about the end of human civilization.
Is there advice you would give to a young person today, maybe in college, maybe in high
school, about their life?
Well, let's see.
I mean, there are so many domains you can advise on, and I'm not going to give advice on life, because I fear that I would drift into sort of Hallmark bromides that really wouldn't be all that distinctive, even though they might be entirely true.
Sometimes the greatest insights about life turn out to be like the kinds of things you'd see on a Hallmark card.
So I'm going to steer clear of that.
On a career level, one thing that I think is unintuitive but unbelievably powerful is
to focus not necessarily on being in the top sliver of 1% in excelling at one domain that's
important and valuable, but to think in terms of intersections of two domains, which are
rare but valuable.
And there's a couple reasons for this.
The first is that in an incredibly competitive world, one that is so much more competitive than it was when I was coming out of school, radically more competitive, it is very hard to navigate your way to the absolute pinnacle of any domain.
Let's say you want to be really, really great at Python, or pick a language, whatever it is. You want to be one of the world's greatest Python developers, or JavaScript, whatever your language is. Hopefully it's not COBOL.
But by the way, if you're listening to this, I am actually looking for a COBOL expert to interview, because I find the language fascinating and there are not many of them. So please, if you know a world expert in COBOL or Fortran, or both actually, or if you are one, please email me.
Yeah.
So, I mean, if you're going out there and you want to be in the top sliver of 1% of Python developers, that's a very, very difficult thing to do, particularly if you want to be number one in the world, something like that.
Now, to use an analogy: I had a friend in college who was on a track, and indeed succeeded at it, to become an Olympic medalist, I think in the 100-meter breaststroke. And he mortgaged a significant percentage of his college life to that goal, or I should say dedicated, or invested, or whatever you want to say.
But he didn't participate in a lot of the social, a lot of the late night, a lot of
the this, a lot of the that, because he was training so much.
And obviously, he also wanted to keep up with his academics.
And at the end of the day, the story has a happy ending in that he did medal in that event.
You know, bronze, not gold, but holy cow, anybody who gets an Olympic medal, that's
an extraordinary thing.
And at that moment, he was, you know, one of the top three people on earth at that thing.
But wow, how hard to do that, how many thousands of other people went down that path and made
similar sacrifices and didn't get there, it's very, very hard to do that.
Whereas, you know, to use a personal example: when I came out of business school, I went to a good business school and learned
the things that were there to be learned.
And I came out and I entered a world with lots of...
Harvard Business School, by the way.
Okay.
Yes, it was Harvard.
It's true.
You're the first person who went there who didn't say where you went, which is beautiful.
I appreciate that.
It's one of the greatest business schools in the world.
It's a whole other fascinating conversation, about that world.
But anyway, yes.
But anyway, so I learned the things that you learn getting an MBA from a top program.
And I entered a world that had hundreds of thousands of people who had MBAs, probably
hundreds of thousands who have them from, you know, top 10 programs.
So I was not particularly great at being an MBA person. I was inexperienced relative to most of them, and there were a lot of them, but I was an okay MBA person, right, newly minted.
But then as it happened, I found my way into working on the commercial internet in 1994.
So I went to an at-the-time giant, hot computing company called Silicon Graphics, which had enough heft and enough, you know, headcount that they could take on inexperienced MBAs and try to train them in the world of Silicon Valley.
But within that company that had an enormous amount of surface area and was touching a lot
of areas and had unbelievably smart people at the time, it was not surprising that SGI
started doing really interesting and innovative and trailblazing stuff on the internet before
almost anybody else.
And part of the reason was that our founder Jim Clark went off to co-found Netscape with Marc Andreessen.
So the whole company was like, wait, what was that?
What's this commercial internet thing?
So I ended up in that group.
Now, in terms of being a commercial internet person or a worldwide web person, again, I
was, in that case, barely credentialed.
I couldn't write a stitch of code, but I had a pretty good mind for grasping the business
and cultural significance of this transition.
And this was, again, we were talking earlier about emerging areas.
Within a few months, you know, I was in the relatively top echelon of people in terms
of just sheer experience, because like, let's say, it was five months into the program,
there were only so many people who had been doing worldwide web stuff commercially for
five months.
And then what was interesting, though, was the intersection of those two things.
The commercial web, as it turned out, grew into an unbelievable vastness.
And so by being a pretty good OK web person and a pretty good OK MBA person, that intersection
put me in a very rare group, which was web-oriented MBAs.
And in those early days, you could probably count on your fingers the number of people
who came out of really competitive programs who were doing stuff full-time on the internet.
And there was a greater appetite for great software developers in the internet domain,
but there was an appetite and a real one and a rapidly growing one for MBA thinkers who
were also seasoned and networked in the emerging world of the commercial worldwide web.
And so finding an intersection of two things you can be pretty good at, but is a rare intersection
and a special intersection, is probably a much easier way to make yourself distinguishable
and in demand from the world than trying to be world-class at this one thing.
So the intersection is where there's opportunity and success to be discovered.
That's really interesting.
Yeah.
There are actually more intersections of fields than fields themselves, right?
Yeah, I mean, I'll give you kind of a funny hypothetical here, but it's one I've been
thinking about a little bit.
There's a lot of people in crypto right now.
It'd be hard to be in the top percentile of crypto people, whether it comes from just
having a sheer grasp of the industry, a great network within the industry, technological
skills, whatever you want to call it.
And then there's this parallel world, an orthogonal world called crop insurance.
And I'm sure that's a big world. Crop insurance is a very, very big deal, particularly in the wealthy and industrialized world, where there are sophisticated financial markets, rule of law, and large agricultural concerns that worry about it.
Somewhere out there is somebody who is pretty crypto savvy, but probably not top one percent,
but also has kind of been in the crop insurance world and understands that a hell of a lot
better than almost anybody who's ever had anything to do with cryptocurrency.
And so I think that with decentralized finance, DeFi, one of the interesting and I think very world-positive things that it will almost inevitably bring to the world is crop insurance for smallholding farmers, I mean, people who have tiny, tiny plots of land in places like India, et cetera, where there is no crop insurance available to them because the financial infrastructure just doesn't exist. But it's highly imaginable that, using oracle networks, which are trusted outside deliverers of factual information, in this case about rainfall in a particular area, you can start giving drought insurance to folks like this.
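As a minimal sketch of the parametric idea being described here, the following toy Python model triggers a payout purely from an oracle-reported rainfall number rather than from assessing each farmer's individual losses; the region name, threshold, premium, and payout figures are hypothetical illustrations, not a real DeFi contract or protocol.

```python
# Toy parametric drought insurance: the payout depends only on an oracle-reported
# rainfall reading for a region, not on inspecting any individual farm.
# All names and numbers below are made up for illustration.

from dataclasses import dataclass


@dataclass
class DroughtPolicy:
    region: str
    premium: float                 # what the farmer pays up front
    payout: float                  # what the farmer receives if a drought triggers
    rainfall_threshold_mm: float   # the season counts as a drought below this rainfall

    def settle(self, oracle_rainfall_mm: float) -> float:
        """Amount owed to the farmer, given the oracle network's rainfall reading."""
        return self.payout if oracle_rainfall_mm < self.rainfall_threshold_mm else 0.0


if __name__ == "__main__":
    policy = DroughtPolicy(region="hypothetical-district", premium=5.0,
                           payout=100.0, rainfall_threshold_mm=250.0)
    # If the oracle network reported 180 mm of seasonal rain, the payout triggers.
    print(policy.settle(oracle_rainfall_mm=180.0))   # -> 100.0
    # With 400 mm of rain, no drought, no payout.
    print(policy.settle(oracle_rainfall_mm=400.0))   # -> 0.0
```

The appeal of the parametric design is that settlement needs no local claims infrastructure, only a trusted data feed, which is exactly the role the oracle networks mentioned above would play.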
The right person to come up with that idea is not a crypto whiz who doesn't know a blasted thing about smallholding farmers, nor a crop insurance whiz who isn't quite sure what Bitcoin is, but somebody who occupies that intersection.
That's just one of gazillion examples of things that are going to come along for somebody
who occupies the right intersection of skills, but isn't necessarily the number one person
at either one of those expertises.
That's making me kind of wonder about my own little things that I'm average at, and seeing where the intersections are that could be exploited.
That's pretty profound.
So we talked quite a bit about the end of the world and how we're both optimistic about
us figuring our way out.
Unfortunately, for now at least, both you and I are going to die one day way too soon.
First of all, that sucks.
It does.
I mean, one, I'd like to ask if you ponder your own mortality, and what kind of wisdom or insight that gives you about your own life. And broadly, do you think about your life and what the heck it's all about?
Yeah, with respect to pondering mortality, I do try to do that as little as possible
because there's not a lot I can do about it, but it's inevitably there.
I think that what it does when you think about it in the right way is it makes you realize
how unbelievably rare and precious the moments that we have here are and therefore how consequential
the decisions that we make about how to spend our time are.
Do you do those 17 nagging emails, or do you have dinner with somebody who's really important to you whom you haven't seen in three and a half years?
If you had an infinite expanse of time in front of you, you might well rationally conclude
that I'm going to do those emails because collectively they're rather important, and I have tens
of thousands of years to catch up with my buddy, Tim.
But I think the scarcity of the time that we have helps us choose the right things if
we're tuned to that and we're tuned to the context that mortality puts over the consequence
of every decision we make of how to spend our time.
That doesn't mean that we're all very good at it, doesn't mean I'm very good at it.
But it does add a dimension of choice and significance to everything that we elect to
do.
It's kind of funny that you say you try to think about it as little as possible.
I would venture to say you probably think about the end of human civilization more than
you do about your own life.
You're probably right.
Because that feels like a problem that could be solved.
Right.
Whereas the end of my own life can't be solved.
Well, I don't know.
I mean, there's transhumanists who have incredible optimism about near or intermediate future
therapies that could really, really change human lifespan.
I really hope that they're right, but I don't have a whole lot to add to that project because
I'm not a life scientist myself.
I'm in part also afraid of immortality, not as much as, but close to as much as, I'm afraid of death itself. So it feels like the things that give us meaning give us meaning because of the scarcity that surrounds them.
Agreed.
So I'm afraid of having too much of stuff that's...
Although if there were something that said, this can expand your enjoyable healthspan or lifespan by 75 years, I'm all in.
Well, part of the reason I wanted to not do a startup, really the only thing that worries me about doing a startup, is that if it becomes successful, because of how much I dream, how much I'm driven to be successful, there will not be enough silence in my life, enough scarcity, to appreciate the moments I appreciate now as deeply as I appreciate them now. There's a simplicity to my life now that it feels like might disappear with success.
I wouldn't say might.
I think if you start a company that has ambitious investors, ambitious for the returns that
they'd like to see, that has ambitious employees, ambitious for the career trajectories they
want to be on and so forth, and is driven by your own ambition, there's a profound monogamy to that, and it is very, very hard to carve out time to be creative, to be peaceful, and so forth, because with every new employee that you hire, that's one more mouth to feed.
With every new investor that you take on, that's one more person to whom you really
do want to deliver great returns.
And as the valuation ticks up, the threshold to delivering great returns for your investors
always rises.
And so there is an extraordinary monogamy to being a founder CEO, above all for the first few years, and in people's minds the first few years could be as many as 10 or 15.
But I guess the fundamental calculation is whether the passion for the vision is greater
than the cost you'll pay.
Right.
It's all opportunity cost.
In terms of time and attention and experience.
And some things, everyone's different, but I'm less calculating, some things you just
can't help, sometimes you just dive in.
Oh, yeah.
I mean, you can do balance sheets all you want on this versus that, and what's right. I mean, I've done that in the past, and it's never worked.
It's always been like, okay, what's my gut screaming at me to do?
What about the meaning of life, you ever think about that?
Yeah.
I mean, this is where I'm going to go all Hallmark on you.
But I think that there's a few things, and one of them is certainly love.
And the love that we experience and feel and cause to well up in others is something that's
just so profound and goes beyond almost anything else that we can do.
And whether that is something that lies in the past, like maybe there was somebody that
you were dating and loved very profoundly in college and haven't seen in years, I don't
think the significance of that love is in any way diminished by the fact that it had a notional
beginning and end.
The fact is that you experience that and you triggered that in somebody else and that happened.
And it certainly doesn't have to be love of romantic partners alone.
It's family members, it's love between friends, it's love between creatures.
I had a dog for 10 years who passed away a while ago and experienced unbelievable love
with her.
It can be love of that which you create.
And we were talking about the flow states that we enter and the pride or lack of pride
or in the Minsky case, your hatred of that which you've done.
But nonetheless, the creations that we make and whether it's the love or the joy or the
engagement or the perspective shift that that cascades into other minds, I think that's
a big, big, big part of the meaning of life.
It's not something that everybody participates in necessarily.
Although I think we all do at least in a very local level by the example that we set by
the interactions that we have, but for people who create works that travel far and reach
people they'll never meet, that reach countries they'll never visit, that reach people perhaps
that come along and come across their ideas or their works or their stories or their aesthetic
creations of other sorts long after they're dead, I think that's really, really big part
of the fabric of the meaning of life.
And so all these things like love and creation, I think really is what it's all about.
And part of love is also the loss of it.
There's a Louie episode with Louis C.K. where an old gentleman is giving him advice that sometimes the sweetest part of love is when you lose it, and you remember it, sort of reminisce on the loss of it.
And there's some aspect in which, and I have many of those in my own life, it's almost like the memories of it and the intensity of emotion you still feel about it are the sweetest part. Like, after saying goodbye, you relive it.
So that goodbye, the loss of it, is also part of love.
I don't know, it's back to that scarcity.
I won't say the loss is the best part personally, but it definitely is an aspect of it.
And the grief you might feel about something that's gone makes you realize what a big deal
it was.
Speaking of which, this particular journey we went on together has come to an end.
So I have to say goodbye and I hate saying goodbye, Rob, this is truly an honor, I've
really been a big fan.
People should definitely check out your podcast, you're a master at what you do in the conversation
space and the writing space.
It's been an incredible honor that you would show up here and spend this time with me.
I really, really appreciate it.
Well, it's been a huge honor to be here as well, and I've also been a fan for a long time.
Thanks, Rob.
Thanks for listening to this conversation with Rob Reed and thank you to Athletic Greens,
Belcampo, Fundrise, and NetSuite.
Check them out in the description to support this podcast.
And now, let me leave you with some words from Plato.
We can easily forgive a child who's afraid of the dark.
The real tragedy of life is when men are afraid of the light.
Thank you for listening and hope to see you next time.