
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.



The following is a conversation with Sam Harris, one of the most influential and pioneering
thinkers of our time.
He is the host of the Making Sense podcast and the author of many seminal books on human
nature and the human mind, including The End of Faith, The Moral Landscape, Lying, Free
Will, and Waking Up.
He also has a meditation app called Waking Up that I've been using to guide my own meditation.
Quick mention of our sponsors: National Instruments, Belcampo, Athletic Greens, and Linode.
Check them out in the description to support this podcast.
As a side note, let me say that Sam has been an inspiration to me as he has been for many,
many people.
First, from his writing, then his early debates, maybe 13-14 years ago on the subject of faith,
his conversations with Christopher Hitchens, and since 2013, his podcast.
I didn't always agree with all of his ideas, but I was always drawn to the care and depth
of the way he explored those ideas.
The calm and clarity amid the storm of difficult, at times controversial discourse.
I really can't express in words how much it meant to me that he, Sam Harris, someone
who I have listened to for many hundreds of hours, would write a kind email to me saying
he enjoyed this podcast and more that he thought I had a unique voice that added something
to this world.
Whether it's true or not, it made me feel special and truly grateful to be able to do
this thing, and motivated me to work my ass off to live up to those words.
Meeting Sam and getting to talk with him was one of the most memorable moments of my life.
This is the Lex Fridman podcast, and here is my conversation with Sam Harris.
I've been enjoying meditating with the Waking Up app recently.
It makes me think about the origins of cognition and consciousness, so let me ask, where do
thoughts come from?
Well, that's a very difficult question to answer.
Subjectively, they appear to come from nowhere.
They come out of some kind of mystery that is at our backs subjectively, which is to
say that if you pay attention to the nature of your mind in this moment, you realize that
you don't know what you're going to think next.
Now, you're expecting to think something that seems like you authored it.
You're not, unless you're schizophrenic or you have some kind of thought disorder, going to have thoughts that seem fundamentally foreign to you.
They do have a kind of signature of selfhood associated with them, and people readily identify with them.
They feel like what you are.
I mean, this is the thing, this is the spell that gets broken with meditation.
Our default state is to feel identical to the stream of thought, which is fairly paradoxical, because how could you, as a mind, as a self, if there were such a thing as a self, be identical to the next piece of language or the next image that just springs into conscious view? And meditation is ultimately about examining that point of view closely enough so as to unravel it and feel the freedom that's on the other side of that identification.
So subjectively, thoughts simply emerge, and you don't think them before you think them.
This is the first moment where anyone listening to us or watching us now could perform this
experiment for themselves.
Just imagine something or remember something.
Just pick a memory, any memory.
You've got a storehouse of memory, just promote one to consciousness.
Did you pick that memory?
I mean, let's say you remembered breakfast yesterday, or you remembered what you said
to your spouse before leaving the house, or you remembered what you watched on Netflix
last night, or you remembered something that happened to you when you were four years old,
whatever it is.
Right?
First, it wasn't there, and then it appeared.
That is not a, I'm sure we'll get to the topic of free will, ultimately.
That's not evidence of free will, right?
Why are you so sure, by the way?
It's very interesting.
Well, yeah, there's no free will of my own, yeah.
Everything just appears, right?
What else could it do?
That's the subjective side of it.
Objectively, we have every reason to believe that many of our thoughts, all of our thoughts
are at bottom what some part of our brain is doing neurophysiologically.
These are the products of some kind of neural computation and neural representation, and
we're talking about memories.
Is it possible to pull at the string of thoughts, to try to get to its root, to try to dig in past the obvious surface subjective experience of thoughts popping out of nowhere?
Is it possible to somehow get closer to the roots of where they come from, from the firing of the cells, or is it a useless pursuit to dig in that direction?
You can get closer to many, many subtle contents in consciousness, so you can notice things
more and more clearly and have a landscape of mind open up and become more differentiated
and more interesting.
If you take psychedelics, it opens up wide, depending on what you've taken and the dose.
It opens in directions and to an extent that very few people imagine would be possible,
but for having had those experiences.
But this idea of you getting closer to something, to the datum of your mind, or something of interest in there, or something that's more real, is ultimately undermined because there's no place from which you're getting closer to it.
There's no 'you' that is part of that journey.
We tend to start out, whether it's in meditation or in any kind of self-examination or taking
psychedelics.
We start out with this default point of view of feeling like we're the rider on the horse
of consciousness, or we're the man in the boat going down the stream of consciousness,
but we're differentiated from what we know cognitively, introspectively.
But that feeling of being differentiated, that feeling of being a self that can strategically
pay attention to some contents of consciousness is what it's like to be identified with some
part of the stream of thought that's going uninspected.
It's a false point of view.
When you see that and cut through that, then this sense of this notion of going deeper
kind of breaks apart because really, there is no depth ultimately.
Everything is right on the surface.
There's no center to consciousness.
There's just consciousness in its contents, and those contents can change vastly.
Again, if you drop acid, the contents change, but in some sense that doesn't represent a position of depth; the continuum of depth versus surface has broken apart.
So you're taking as a starting point that there is a horse called consciousness, and
you're riding it, and the actual riding is very shallow.
This is all surface.
So let me ask about that horse.
What's up with the horse?
What is consciousness?
From where does it emerge?
How fundamental is it to the physics of reality?
How fundamental is it to what it means to be human?
And I'm just asking for a friend so that we can build it in our artificial intelligence
systems.
Yeah.
Well, that remains to be seen if we will build it purposefully or just by accident.
It's a major ethical problem potentially.
My concern here is that we may, in fact, build artificial intelligence that passes the Turing
test, which we begin to treat not only as super intelligent because it obviously is
and demonstrates that, but we begin to treat it as conscious because it will seem conscious.
We will have built it to seem conscious, and unless we understand exactly how consciousness
emerges from physics, we won't actually know that these systems are conscious.
They may say, listen, you can't turn me off because that's murder, and we will be convinced
by that dialogue because just in the extreme case, who knows when we'll get there.
But if we build something like perfectly humanoid robots that are more intelligent than we are,
so we're basically in a Westworld-like situation, there's no way we're going to withhold an
attribution of consciousness from those machines.
They're just going to advertise their consciousness in every glance and every utterance.
But we won't know in some deeper sense than we can be skeptical of the consciousness of
other people.
I mean, if someone can roll that back and say, well, I don't know that you're conscious
or you don't know that I'm conscious, we're just passing the Turing test for one another, but that kind of solipsism isn't justified biologically; everything we understand about the mind biologically suggests that you and I are part of the same roll of the dice in terms of how intelligent and conscious systems emerged in the wetware of brains like ours, right?
So it's not parsimonious for me to think that I might be the only conscious person or even
the only conscious primate.
I would argue it's not parsimonious to withhold consciousness from other apes and even other
mammals ultimately, and once you get beyond the mammals, then my intuitions are not really
clear.
The question of how it emerges is genuinely uncertain, and ultimately the question of
whether it emerges is still uncertain.
You can, it's not fashionable to think this, but you can certainly argue that consciousness
might be a fundamental principle of matter that doesn't emerge on the basis of information
processing, even though everything else that we recognize about ourselves as minds almost
certainly does emerge, like an ability to process language that clearly is a matter
of information processing, because you can disrupt that process in ways that are just so clear. And the problem, the confound with consciousness, is that, yes, we can seem
to interrupt consciousness, and you can give someone general anesthesia, and then you wake
them up and you ask them what was that like, and they say, nothing, I don't remember anything,
but it's hard to differentiate a mere failure of memory from a genuine interruption in consciousness,
whereas it's not that way with interrupting speech.
We know when we've done it, and it's just obvious that you disrupt the right neural
circuits and you've disrupted speech.
So if you had to bet all your money on one camp or the other, would you say, do you err on the side of panpsychism, where consciousness is really fundamental to all of reality, or
more on the other side, which is like, it's a nice little side effect, a useful hack for
us humans to survive?
On that spectrum, where do you land when you think about consciousness, especially from
an engineering perspective?
I'm truly agnostic on this point, I think it's kind of in coin toss mode for me.
I don't know, and panpsychism is not so compelling to me, again, it just seems unfalsifiable.
I wouldn't know how the universe would be different if panpsychism were true, and just
to remind people, panpsychism is this idea that consciousness may be pushed all the way
down into the most fundamental constituents of matter, so there might be something that it's like to be an electron or a quark, but then you wouldn't expect anything to be different
at the macro scale, or at least I wouldn't expect anything to be different.
So it may be unfalsifiable.
It just might be that reality is not something we're as in touch with as we think we are, and that if that is the base layer, to break it into mind and matter as we've done ontologically is to misconstrue it; there could be some kind of neutral monism at the bottom, and
this idea doesn't originate with me, this goes all the way back to Bertrand Russell
and others 100 plus years ago, but I just feel like the concepts we're using to divide
consciousness and matter may in fact be part of our problem.
Where the rubber hits the road psychologically here are things like, well, what is death?
Any expectation that we survive death or any part of us survives death, that really seems
to be many people's concern here.
Well, I tend to believe just as a small little tangent, like I'm with Ernest Becker on this,
but it's interesting to think about death and consciousness, which one is the chicken,
which one is the egg, because it feels like death could be the very thing, like our knowledge
of mortality could be the very thing that creates the consciousness.
Yeah, well, then you're using consciousness differently than I am, so for me, consciousness
is just the fact that the lights are on at all, that there's an experiential quality
to anything.
Much of the processing that's happening in our brains right now certainly seems to be
happening in the dark, right?
It's not associated with this qualitative sense that there's something that it is like to be that part of the mind doing that mental thing, but for other parts, the lights are on, and we can talk about it, and whether we talk about it or not, we can feel directly that there's something that it is like to be us, there's something that seems to be happening, right?
And the seeming in our case is broken into vision and hearing and proprioception and
taste and smell and thought and emotion, and there are the contents of consciousness that
we are familiar with and that we can have direct access to in any present moment when we're, quote, conscious.
And even if we're confused about them, even if we're asleep and dreaming and it's not
a lucid dream, we're just totally confused about our circumstance, what you can't say
is that we're confused about consciousness.
You can't say that consciousness itself might be an illusion because on this account, it
just means that things seem any way at all. Even if, you know, it seems to me that I'm seeing a cup on the table, I could be wrong about that, it could be a hologram, I could be asleep and dreaming, I could be hallucinating, but the seeming part isn't really up for grabs in terms of being an illusion; something seems to be happening.
That seeming is the context in which every other thing we can notice about ourselves
can be noticed.
And it's also the context in which certain illusions can be cut through because we're
not, we can be wrong about what it's like to be us and we can, I'm not saying we're
incorrigible with respect to our claims about the nature of our experience, but for instance,
many people feel like they have a self and they feel like it has free will and I'm quite
sure at this point that they're wrong about that and that you can cut through those experiences
and then things seem a different way, right?
So it's not that there aren't discoveries to be made there and assumptions to be overturned, but this kind of consciousness is something that, I would think, doesn't
just come online when we get language, it doesn't just come online when we form a concept
of death or the finiteness of life, it doesn't require a sense of self, right?
So it doesn't, it's prior to differentiating self and other and I wouldn't even think it's
necessarily limited to people, I do think probably any mammal has this, but certainly
if you're going to presuppose that something about our brains is producing this, right?
And that's a very safe assumption even though we can't, even though you can argue the jury
is still out to some degree, then it's very hard to draw a principled line between us and
chimps or chimps and rats even in the end, given the underlying neural similarities.
So I don't know, phylogenetically, I don't know how far back to push that.
There are people who think single cells might be conscious or that flies are certainly conscious.
They've got something like 100,000 neurons in their brains and it's just, there's a
lot going on even in a fly, right?
But I don't have intuitions about that.
But it's not, in your sense, an illusion you can cut through.
I mean, to push back, the alternative version could be that it is an illusion constructed just by humans.
I'm not sure I believe this, but part of me hopes it's true because it would make it easier to engineer: that humans are able to contemplate their mortality, and that contemplation in itself creates consciousness, that rich lights-on experience.
So the lights don't actually even turn on, in the way that you're describing, until after birth, in that construction.
So do you think it's possible that that is the case, that it is a sort of construct of the way we deal, almost like a social tool to deal with the reality of the world, the social interaction with other humans?
Because you're saying the complete opposite, which is that it's fundamental, down to single-cell organisms and trees and so on.
Right.
Well, yeah.
So I don't know how far down to push it.
I don't have intuitions that single cells are likely to be conscious, but they might
be.
And again, it could be unfalsifiable.
But as far as babies not being conscious, or that you don't become conscious until you can recognize yourself in a mirror or have a conversation or treat other people as others: first of all, babies treat other people as others far earlier than we have traditionally given them credit for.
And they certainly do it before they have language, right?
So it's got to precede language to some degree.
And I mean, you can interrogate this for yourself because you can put yourself in various states
that are rather obviously not linguistic, meditation allows you to do this.
You can certainly do it with psychedelics where it's just, your capacity for language
has been obliterated and yet you're all too conscious.
In fact, I think you could make a stronger argument for things running the other way,
that there's something about language and conceptual thought that is eliminative of
conscious experience, that we're potentially much more conscious of data, sense data and
everything else than we tend to be, and we have trimmed it down based on how we have
acquired concepts.
And so when I walk into a room like this, I know I'm walking into a room, I have certain
expectations of what is in a room.
I would be very surprised to see wild animals in here, or a waterfall, or things I'm not expecting, but I know I'm not expecting them, or I'm expecting their absence, because of my capacity to be surprised once I walk into a room and see a live gorilla or whatever.
So there's structure there that we have put in place based on all of our conceptual learning
and language and language learning.
And one of the things that happens when you take psychedelics and you just look, as though for the first time, at anything, is that it can become overloaded with meaning, and just the torrents of sense data that are coming in, in even the most ordinary circumstances, can become overwhelming for people.
And that tends to just obliterate one's capacity to capture any of it linguistically.
And as you're coming down, have you done psychedelics, have you ever done acid or?
Not acid; mushrooms, and that's it.
And also edibles, but there are some psychedelic properties to them.
But yeah, mushrooms several times, and I always had an incredible experience, exactly the kind of experience you're referring to, which is, if it's true that language constrains our experience, it felt like I was removing some of the constraints.
Because even just the most basic things were beautiful in the way that I wasn't able to
appreciate previously, like trees and nature and so on.
And the experience of coming down is an experience of encountering the futility of capturing
what you just saw a moment ago in words, especially if you have any part of your self-concept
and your ego program is to be able to capture things in words.
And if you're a writer or a poet or a scientist or someone who wants to just encapsulate
the profundity of what just happened, the total fatuousness of that enterprise when
you have taken a whopping dose of psychedelics and you begin to even gesture at describing
it to yourself so that you could describe it to others, it's like trying to thread a
needle using your elbows.
I mean, it's like you're trying something, it's like the mere gesture proves its impossibility.
And for me, that suggests just empirically on the first person side that it's possible
to put yourself in a condition where it's clearly not about language structuring your
experience and you're having much more experience than you tend to.
So language is primary for some things; it's certainly primary for certain kinds of concepts and certain kinds of semantic understandings of the world, but there's clearly more to the mind than the conversation we're having with ourselves or that we can have with others.
Can we go to that world of psychedelics for a bit?
What do you think?
So Joe Rogan, apparently, and many others meet elves when they're on DMT.
A lot of people report this kind of creatures that they see.
And again, it's probably the failure of language to describe that experience, but DMT is an interesting one. As you're aware, there's a bunch of studies going on on psychedelics currently, MDMA and psilocybin, at Johns Hopkins and many other places.
But DMT they all speak of as like some extra super level of a psychedelic.
Yeah, do you have a sense of where it is that our mind goes on psychedelics, and on DMT especially?
Well, unfortunately, I haven't taken DMT.
Unfortunately or fortunately?
Unfortunately.
Unfortunately.
Unfortunately.
Although I presume it's in my body as it is in everyone's brain and many, many plants,
apparently.
And I've wanted to take it; I haven't had an opportunity that presented itself where it was obviously the right thing for me to be doing.
But for those who don't know, DMT is often touted as the most intense psychedelic and
also the shortest acting.
You smoke it and it's basically a 10-minute experience, or a three-minute experience within like a 10-minute window, where you're really down after 10 minutes or so.
And Terence McKenna was a big proponent of DMT; that was the center of the bullseye for him psychedelically, apparently.
And it is characterized, it seems, for many people, by this phenomenon, which is unlike virtually any other psychedelic experience: it's not just your perception being broadened or changed, it's you, according to Terence McKenna, feeling fairly unchanged, but catapulted into a different circumstance.
I mean, you have been shot elsewhere and find yourself in relationship to other entities
of some kind.
So the place is populated with things that seem not to be your mind.
So it does feel like travel to another place because you're unchanged yourself.
Again, I just have this on the authority of the people who have described their experience.
But it sounds like it's pretty common, it sounds like it's pretty common for people
not to have the full experience because it's apparently pretty unpleasant to smoke.
So it's like getting enough on board in order to get shot out of the cannon and land among what McKenna called the self-transforming machine elves, which appeared to him like jeweled, you know, Fabergé-egg-like, self-dribbling basketballs that were handing him completely uninterpretable reams of profound knowledge.
It's an experience I haven't had, so I just have to accept that people have had it.
I would just point out that our minds are clearly capable of producing apparent others
on demand that are totally compelling to us, right?
There's no limit to our ability to do that as anyone who's ever remembered a dream can
attest.
I mean, every night we go to sleep, you know, some of us don't remember dreams very often,
or some dream vividly every night, and just think of how insane that experience is.
I mean, you've forgotten where you were, right?
That's the strangest part.
I mean, this is psychosis, right?
You have lost your mind, you have lost your connection to your episodic memory, or even
your expectations that reality won't undergo wholesale changes a moment after you have
closed your eyes, right?
Like you're in bed, you're watching something on Netflix, you're waiting to fall asleep,
and then the next thing that happens to you is impossible, and you're not surprised, right?
You're talking to dead people, you're hanging out with famous people, you're someplace
you couldn't physically be, you can fly, and even that's not surprising, right?
You have lost your mind, but relevantly for this.
Or found it.
You found, I mean, lucid dreaming is very interesting, because then you can have the
best of both circumstances, and then it can become systematically explored.
Well, what I mean by found, just to start to interrupt, is like, if we take this brilliant
idea that language constrains us, grounds us, language and other things of the waking
world ground us, maybe it is that you've found the full capacity of your cognition when you
dream, or when you do psychedelics.
You're stepping outside the little human cage, the cage of the human condition, open the
door and step out and look around, and then go back in.
Well, you've definitely stepped out of something and into something else, but you've also lost
something, right?
You've lost certain capacities.
What's that?
Memory.
Well, yeah, in this case, you literally don't have enough presence of mind in the dreaming state, or even in the psychedelic state, if you take enough.
To do math.
There's no psychological, there's very little psychological continuity with your life, such
that you're not surprised to be in the presence of someone who should be, you should know
is dead, or you should know you're not likely to have met by normal channels, right?
You're now talking to some celebrity and it turns out you're best friends, right?
And you're not even, you have no memory of how you got there, you're like, how did you
get into the room?
You're like, did you drive to this restaurant?
You have no memory and none of that's surprising to you.
So you're kind of brain damaged in a way, you're not reality testing in the normal way.
The fascinating possibility is that there's probably thousands of people who've taken
psychedelics of various forms and have met Sam Harris on that journey.
Well, I would put it more likely in dreams, you know, because on psychedelics you don't tend to hallucinate in a dream-like way. I mean, DMT does give you an experience of others, but it seems to be non-standard; it's not just like dream hallucinations.
But to the point, coming back to DMT, people want to suggest, and Terence McKenna certainly did suggest, that because these others are so obviously other and they're so vivid, well, then they could not possibly be the creation of my own mind. But every
night in dreams, you create a compelling, or what is to you at the time, a totally compelling
simulacrum of another person, right?
And that just proves the mind is capable of doing it.
Now the phenomenon of lucid dreaming shows that the mind isn't capable of doing everything
you think it might be capable of even in that space.
So one of the things that people have discovered in lucid dreams, and I haven't done a lot
of lucid dreaming, so I can't confirm all of this, but I can confirm some of it.
Apparently in every house, in every room in the mansion of dreams, all light switches
are dimmer switches, like if you go into a dark room and flip on the light, it gradually
comes up, it doesn't come up instantly on demand, because apparently this is covering
for the brain's inability to produce from a standing start visually rich imagery on
demand.
So I haven't confirmed that, but people who have done research on lucid dreaming claim that it's all dimmer switches. But one thing I have noticed, and people can check this out,
is that in a dream, if you look at text, a page of text or a sign or a television that
has text on it, and then you turn away and you look back at that text, the text will
have changed.
There's just a chronic instability, a graphical instability, of text in the dream state.
And I don't know if that, maybe that's, someone can confirm that that's not true for them,
but whenever I've checked that out, that has been true for me.
So it keeps generating it like real time from a video game perspective.
Yeah, it's rendering, it's re-rendering it for some reason.
What's interesting, I actually, I don't know how I found myself in that part of the internet, but there's quite a lot of discussion about what it's like to do math on LSD, because apparently one of the deepest thinking processes needed is that of mathematicians or theoretical computer scientists; basically doing anything that involves math and proofs, you have to think creatively, but also deeply, and you have to think for many hours at a time.
And so they're always looking for ways to, like, is there any sparks of creativity that
could be injected?
And apparently, out of all the psychedelics, the worst is LSD, because it completely destroys
your ability to do math well.
And I wonder whether that has to do with your ability to visualize geometric things in a
stable way in your mind and hold them there and stitch things together, which is often
what's required for proofs.
But again, it's difficult to kind of research these kinds of concepts, but it does make
me wonder where, what are the spaces, how's the space of things you're able to think about
and explore, morphed by different psychedelics or dream states and so on, and how is that
different?
How much does it overlap with reality?
And what is reality?
Is there a waking state reality, or is it just a tiny subset of reality, and we get to take
a step in other versions of it?
We tend to think very much in a space-time, four-dimensional way: there's a three-dimensional world, there's time, and that's how we think about reality.
And we think of traveling as walking from point A to point B in the three-dimensional
world, but that's a very kind of human surviving, try not to get eaten by a lion conception
of reality.
What if traveling is something like we do with psychedelics and meet the elves?
What if it's something, what if thinking or the space of ideas as we kind of grow and
think through ideas, that's traveling?
Or what if memory is traveling?
I don't know if you have a favorite view of reality. By the way, I should say you had an excellent conversation with Donald Hoffman.
Is there any inkling of his sense in your mind, that actual, objective reality is very far from the kind of reality we imagine, we perceive, and we play with in our human minds?
Well, the first thing to grant is that we're never in direct contact with reality, whatever
it is, unless that reality is consciousness, right?
So we're only ever experiencing consciousness and its contents.
And then the question is, how does that circumstance relate to, quote, reality at large?
And Donald Hoffman is somebody who's happy to speculate, well, maybe there isn't a reality
at large.
Maybe it's all just consciousness on some level.
And that's interesting, but it runs into, to my eye, various philosophical problems, or at least you have to add a lot to that picture, that picture of idealism. For me, that's usually the whole family of views that would just say the universe is just mind or just consciousness at bottom; they go by the name of idealism in Western philosophy.
You have to add to that idealistic picture all kinds of epicycles and kind of weird coincidences to get the predictability of our experience and the success of materialist science to make sense in that context.
So what does it mean to say that there's only consciousness at bottom, right?
Nothing outside of consciousness, because no one's ever experienced anything outside of
consciousness.
Any scientist who has ever done an experiment, where they were contemplating data no matter how far removed from our sense bases, whether they're looking at the Hubble Deep Field or they're smashing atoms or whatever tools they're using, they're still just experiencing consciousness and its various deliverances and layering their concepts on top of that.
So that's always true.
And yet that somehow doesn't seem to capture the character of our continually discovering
that our materialist assumptions are confirmable, right?
So take the fact that we unleash this fantastic amount of energy from within an atom, right?
And we first, we have the theoretical suggestion that it's possible, right?
We come back to Einstein, there's a lot of energy in that matter, right?
And what if we could release it, right?
And then we perform an experiment, in this case at the Trinity test site in New Mexico, where the people who are most adequate to this conversation, people like Robert Oppenheimer, are standing around, not altogether certain it's going to work, right?
They're performing an experiment, they're wondering what's going to happen, they're
wondering if their calculations around the yield are off by orders of magnitude.
Some of them are still wondering whether the entire atmosphere of Earth is going to combust,
right?
That the nuclear chain reaction is not going to stop. And lo and behold, there was that energy to be released from within the nucleus of an atom.
So it's just, what is the picture one forms from those kinds of experiments?
And just the knowledge, just our understanding of evolution, just the fact that the Earth
is billions of years old, and life is hundreds of millions of years old, and we weren't here
to think about any of those things.
And all of those processes were happening, therefore, in the dark, and they are the processes
that allowed us to emerge from prior life forms in the first place.
To say that it's all mind, that nothing exists outside of consciousness, outside of conscious minds of the sort that we experience, just seems like a bizarrely anthropocentric claim, analogous to saying the moon isn't there if no one's looking at it, right?
The moon as a moon isn't there if no one's looking at it, I'll grant that, because that's
already a kind of fabrication born of concepts.
But the idea that there's nothing there, that there's nothing that corresponds to what we
experience as the moon, unless someone's looking at it, that just seems way too parochial a way to set out on this journey of discovery.
There is something there, there's a computer waiting to render the moon when you look at
it, that the capacity for the moon to exist is there.
So if we're indeed living in a simulation, which I find a compelling thought experiment,
it's possible that there is this kind of rendering mechanism, but not in a silly way that we
think about in video games, but in some kind of more fundamental physics way.
And we have to account for the fact that it renders experiences that no one has had yet,
that no one has any expectation of having, it can violate the expectations of everyone
lawfully, right?
And then there's some lawful understanding of why that's so.
It's like, I mean, just to bring it back to mathematics: certain numbers are prime whether we have discovered them or not, right?
Like, there's the highest prime number that anyone can name now, and then there's the
next prime number that no one can name, and it's there, right?
So it's like, to say that our minds are putting it there, that what we know as mind in ourselves
is in some way, in some sense, putting it there, that the base layer of reality is consciousness,
right?
You know, that we're identical to the thing that is rendering this reality.
There's some, you know, hubris is the wrong word, but it's like, it's okay if reality is bigger than what we experience, you know, and has structure that we can't anticipate. I mean, again, there's certainly a collaboration between our minds and whatever is out there to produce what we call, you know, the stuff of life.
But, I don't know, I mean, there are a few stops on the train of idealism and kind of new age thinking and Eastern philosophy that, philosophically, I don't see a need to take.
I mean, experientially and scientifically, I feel like you can get everything you want by acknowledging that consciousness has a character that can be explored from its own side, so that you're bringing kind of the first-person experience back into the conversation about, you know, what is a human mind and, you know, what is true.
And you can explore it with different degrees of rigor.
And there are things to be discovered there, whether you're using a technique like meditation
or psychedelics, and that these experiences have to be put in conversation with what we
understand about ourselves from a third person side, neuroscientifically or in any other
way.
But to me, the question is, well, the sense I have comes from this kind of thing: do you play shooters?
No.
There's a physics engine that generates, that's,
Well, yeah, I have, you mean first person shooter games?
Yes.
Yes.
Sorry.
Not often, but yes.
There's a physics engine that generates consistent reality, right?
My sense is the same could be true for a universe in the following sense that our conception
of reality, as we understand it now in the 21st century, is a tiny subset of the full
reality.
It's not that the reality that we conceive of that's there, the moon being there is
not there somehow.
It's that it's a tiny fraction of what's actually out there.
And so the physics engine of the universe is just maintaining the useful physics, the useful reality, quote unquote, for us to have a consistent experience as human beings.
But maybe we descendants of apes really only understand like 0.0001% of the actual physics of reality.
Like this, we can even just start with the consciousness thing.
But maybe our minds are just, we're just too dumb by design.
That truly resonates with me and I'm surprised it doesn't resonate more with most scientists
that I talked to.
When you just look at, you look at how close we are to chimps, right?
And chimps don't know anything, right?
Clearly they have no idea what's going on, right?
And then you get us, but even then, it's only a subset of human beings that really understand much of what we're talking about in any, you know, any area of specialization.
And if they all died in their sleep tonight, right, you'd be left with people who might
take a thousand years to rebuild the internet, you know, or if ever, right?
I mean, literally, and, you know, I would extend this to myself.
I mean, there are areas of scientific specialization where I have no discernible competence.
I mean, I spend no time on it.
I have not acquired the tools.
It would just be an article of faith for me to think that I could acquire the tools to
actually make a breakthrough in those areas.
And I mean, you know, your own area is one.
I mean, you know, I've never spent any significant amount of time trying to be a programmer,
but it's pretty obvious I'm not Alan Turing, right?
It's like, like, if that were, if that were my capacity, I would have discovered that
in myself.
I would have found programming irresistible.
My few false starts in learning, I think it was C, it was just, you know, I bounced off.
Like, this was not fun.
Trying to figure out, you know, the syntax error that's causing this thing not to compile was just a fucking awful experience.
I hated it, right?
I hated every minute of it.
So if it was just people like me left, like, when do we get the internet again?
Right?
And we lose, you know, we lose the internet.
When do we get it again?
Right?
When do we get anything like a proper science of information, right?
You need a Claude Shannon or an Alan Turing just to plant a flag in the ground right here and say, all right, can everyone see this? You know, even if you don't quite know what I'm up to, you all have to come over here to make some progress.
And, you know, there are hundreds of topics where that's the case.
So we barely have a purchase on making anything like discernible intellectual progress in any generation. And yeah, Max Tegmark makes this point.
He's one of the few people who does, in physics. If you just take the truth of evolution seriously, right, and realize that there's nothing about us that has evolved to understand reality perfectly.
I mean, we're just not that kind of ape, right?
There's been no evolutionary pressure along those lines.
So we are making do with tools that were designed for fights with sticks and rocks,
right?
And it's amazing we can do as much as we can.
I mean, you know, you and I are just sitting here on the back of having received an mRNA vaccine, you know, that has certainly changed our life, given what the last year was like, and it's going to change the world if rumors of coming miracles are borne out.
I mean, it now seems likely we have a vaccine coming for malaria, right?
Which has been killing millions of people a year for as long as we've been alive.
I think it's down to like 800,000 people a year now because we've spread so many bed nets around, but it was like two and a half million people every year.
It's amazing what we can do, but yeah, if in fact the answer at the back of the book of nature is that you understand 0.1% of what there is to understand and half of what you think you understand is wrong, that would not surprise me at all.
It is funny to look at our evolutionary history, even back to chimps, I'm pretty sure even
chimps thought they understood the world well.
So at every point in that timeline of evolutionary development throughout human history, there's a sense, you hear this message over and over, that there are no more things to be invented.
But a hundred years ago, there's a famous story, I forget which physicist told it, but there were physicists telling their undergraduate students not to go get graduate degrees in physics because basically all the problems had been solved.
And this is like around, you know, 1915 or so.
Turns out you were right.
I'm going to ask you about free will.
Oh, okay.
Uh, you've recently released an episode of your podcast, Making Sense, for those with
a shorter attention span, basically summarizing your position on free will.
I think it was under an hour and a half.
Yeah, yeah.
It was, it was, it was brief and clear.
So allow me to summarize the summary, TLDR, and maybe you tell me where I'm wrong.
So free will is an illusion.
And even the experience of free will is an illusion.
Like we don't even experience it.
What am I, am I good in my summary?
Yeah.
I mean, this is a, this is a line that's a little hard to scan for people.
I say that it's not merely that free will is an illusion, the illusion of free will is
an illusion.
Right.
Like there is no illusion of free will.
And that is, unlike many other illusions, a more fundamental claim.
It's like, it's not that it's wrong.
It's not even wrong.
I mean, I guess that was, I think, Wolfgang Pauli, who derided one of his colleagues or enemies with that aspersion about his theory in quantum mechanics.
So there are genuine illusions.
There are things that you do experience, and then you can kind of punch through that experience, or you can't; you can't experience them any other way.
It's just that we know it's not a veridical experience.
Take a visual illusion.
There are visual illusions, you know, a lot of these come to me on Twitter these days, these amazing visual illusions where, like, every figure in the GIF seems to be moving, but nothing in fact is moving.
You can just like put a ruler on your screen and nothing's moving.
Some of those illusions, you can't see any other way.
I mean, they're hacking aspects of the visual system that are just eminently hackable, and you have to use a ruler to convince yourself that the thing isn't actually moving.
Now, there are other visual illusions where you're taken in by it at first, but if you
pay more attention, you can actually see that it's not there, right?
Or it's not how it first seemed. The Necker cube is a good example of that.
The Necker cube is just that schematic of a transparent cube, which pops out one way or the other; one face can pop out and the other face can pop out.
But you can actually just see it as flat with no pop-out, which is a more veridical way of looking at it.
So there are kind of inward correlates to this.
And I would say that the sense of self and free will are closely related.
I've often described them as two sides of the same coin, but they're not quite the same in their spuriousness.
I mean, so the sense of self is something that people, I think, do experience, right?
It's not a very clear experience, but I wouldn't call the illusion of self an illusion, whereas the illusion of free will is an illusion, in that as you pay more attention to your experience, you begin to see that it's totally compatible with an absence of free will.
You don't, I mean, coming back to the place we started, you don't know what you're going
to think next.
You don't know what you're going to intend next.
You don't know what's going to just occur to you that you must do next.
You don't know, you don't know how much you were going to feel the behavioral imperative
to act on that thought.
If you suddenly feel, oh, I don't need to do that.
That's, I can do that tomorrow.
You don't know where that comes from.
You didn't know that was going to arise.
You didn't know that was going to be compelling.
All of this is compatible with some evil genius in the next room, just typing in code into
your experience.
It's like, okay, let's give him the "oh my God, I just forgot it's going to be our anniversary in one week" thought, right?
Give him the cascade of fear.
Give him this brilliant idea for the thing he can buy that's going to take him no time at all, and this, you know, overpowering sense of relief.
All of our experience is compatible with the script already being written, right?
And I'm not saying the script is written.
I'm not saying that fatalism is, you know, is the right way to look at this.
But even in our most deliberate voluntary action, where we go back and forth between two options, you know, thinking about the reason for A and then reconsidering and thinking harder about B, and just going eeny, meeny, miny, moe until the end of the hour, however laborious you can make it, there is an utter mystery at your back finally promoting the thought or intention or rationale that is most compelling and therefore deliberately, behaviorally effective. And this can drive some people a little crazy.
So, you know, I usually preface what I say about free will with the caveat that if thinking
about your mind this way makes you feel terrible, well, then stop, you know, get off the ride,
you know, switch the channel, you don't have to go down this path.
But for me and for many other people, it's incredibly freeing to recognize this about the mind, because, one, you realize that cutting through the illusion of the self is immensely freeing for a lot of reasons that we can talk about separately; but losing the sense of free will does two things very vividly for me.
One is it totally undercuts the basis for, the psychological basis for hatred, right?
Because when you think about the experience of hating other people, what that is anchored
to is a feeling that they really are the true authors of their actions, I mean, that someone
is doing something that you find so despicable, right, let's say they're, you know, targeting
you unfairly, right, they're maligning you on Twitter or they're, you know, they're suing
you or they're doing something, they broke your car window, they did something awful
and now you have a grievance against them.
And you're relating to them very differently, emotionally, in your own mind than you would
if a force of nature had done this, right, or if it had just been, you know, a virus
or if it had been a wild animal or a malfunctioning machine, right, like to those things you don't
attribute any kind of freedom of will.
And while you may suffer the consequences of catching a virus or being attacked by a wild animal or having, you know, your car break down or whatever, it may frustrate you.
You don't slip into this mode of hating the agent in a way that completely commandeers
your mind and deranges your life.
I mean, you just don't, I mean, there are people who spend decades hating other people
for what they did and it's, it's just pure poison.
Right.
So it's a useful shortcut to compassion and empathy.
Yeah.
But the question is, say that this, what was it called, the horse of consciousness?
Let's call it the consciousness generator black box that we don't understand.
And is it possible that the script that we're walking along, that we're playing, that's
already written, is actually being written in real time?
It's almost like you're driving down a road and in real time that road is being laid down.
And this black box of consciousness that we don't understand is the place where this
script is being generated.
So it's not, it is being generated, it didn't always exist.
So there's something we don't understand that's fundamental about the nature of reality that
generates both consciousness.
Let's call it maybe the self.
I don't know if you want to distinguish between those.
Yeah, I definitely would.
You would.
Because there's a bunch of illusions we're referring to.
There's the illusion of free will.
There's the illusion of self and there's the illusion of consciousness.
You're saying, I think you said there's no, you're not as willing to say there's an illusion
of consciousness.
In fact, I would say it's impossible.
You're a little bit more willing to say that there's an illusion of self and you're definitely
saying there's an illusion of free will.
Yes.
I'm definitely saying that a certain kind of self is an illusion.
Not every kind; we mean many different things by this notion of self.
So maybe we'll actually just differentiate these things.
So consciousness can't be an illusion because any illusion proves its reality as much as
any other veridical perception.
I mean, if you're hallucinating now, that's just as much of a demonstration of consciousness
as really seeing what's, quote, actually there.
If you're dreaming and you don't know it, that is consciousness.
You can be confused about literally everything.
You can't be confused about the underlying claim, whether you make it linguistically or
not, but just the cognitive assertion that something seems to be happening.
It's the seeming that is the cash value of consciousness.
Can I take a tiny tangent?
Okay.
So what if I am creating consciousness in my mind to convince you that I'm human?
So it's a useful social tool, not a fundamental property of experience, like of being a living
thing.
What if it's just like a social tool to almost like a useful computational trick to place
myself into reality as we together communicate about this reality?
And another way to ask that, because you said it much earlier, you talked negatively about
robots as you often do, so you'll probably die first when they take over.
I'm looking forward to certain kinds of robots.
If we can get this right, this would be amazing.
But you don't like the robots that fake consciousness.
You don't like the idea of fake it till you make it.
Well, no, it's not that I don't like it, it's that I'm worried that we will lose sight
of the problem, and the problem has massive ethical consequences.
If we create robots that really can suffer, that would be a bad thing, right?
And if we really are committing a murder when we recycle them, that would be a bad thing.
This is how I know you're not Russian.
Why is it a bad thing that we create robots that can suffer?
Isn't suffering a fundamental thing from which beauty springs?
Without suffering, do you really think we would have beautiful things in this world?
That's a tangent on a tangent.
We'll go there.
I would love to go there, but let's not go there just yet.
I do think that, if anything is bad, creating hell and populating it with real minds that really can suffer in that hell, that's bad.
You are worse than any mass murderer we can name if you create that.
This could be in robot form, or more likely it would be in some simulation of a world where we managed to populate it with conscious minds, whether we knew they were conscious or not, and that world is a state that is, you know, unendurable.
That would just be taking seriously the thesis that mind, intelligence, and consciousness ultimately are substrate independent, right?
You don't need a biological brain to be conscious.
You certainly don't need a biological brain to be intelligent, right?
So if we just imagine that consciousness at some point comes along for the ride as you
scale up in intelligence, well, then we could find ourselves creating conscious minds.
Minds that are miserable, right?
And that's just like creating a person who's miserable, right?
It could be worse than creating a person who's miserable, it could be even more sensitive
to suffering.
Cloning them and maybe for entertainment, watching them suffer.
Just like watching a person suffer for entertainment, you know?
But back to your primary question here, which is differentiating consciousness and self
and free will as concepts and kind of degrees of illusoriness, the problem with free will
is that what most people mean by it, and this is where Dan Dennett is going to get off the
ride here, right?
So he's going to disagree with me that I know what most people mean by it.
But I have a very keen sense, having talked about this topic for many, many years.
And seeing people get wrapped around the axle of it and seeing in myself what it's like
to have felt that I was a self that had free will and then to no longer feel that way,
right?
I mean, to know what it's like to actually disabuse myself of that sense, cognitively
and emotionally, and to recognize what's left, what goes away and what doesn't go away on
the basis of that epiphany.
I have a sense that I know what people think they have in hand when they worry about whether
free will exists.
And it is the flip side of this feeling of self, it's the flip side of feeling like you
are not merely identical to experience.
You feel like you're having an experience, you feel like you're an agent that is appropriating
an experience.
You're a protagonist in the movie of your life and it is you, it's not just the movie,
right?
It's like there are sights and sounds and sensations and thoughts and emotions and this
whole cacophony of experience, of felt experience, of embodiment.
But there seems to be a rider on the horse or a passenger in the body, right?
People don't feel truly identical to their bodies down to their toes.
They sort of feel like they have bodies.
They feel like they're minds in bodies, and that feels like a self, that feels like me.
And again, this gets very paradoxical when you talk about the experience of being in
relationship to yourself or talking to yourself, giving yourself a pep talk.
If you're the one talking, why are you also the one listening?
Why do you need the pep talk and why does it work?
If you're the one giving the pep talk, right?
Or if I'm looking for my keys, why do I think the superfluous thought, where are my keys?
I know I'm looking for the fucking keys.
I'm the one looking.
Who am I telling that we now need to look for the keys, right?
So that duality is weird, but leave that aside.
There's the sense, and this becomes very vivid when people try to learn to meditate.
Most people, they close their eyes and they're told to pay attention to an object like the
breath.
So you close your eyes and you pay attention to the breath and you can feel it at the tip
of your nose or the rising and falling of your abdomen and you're paying attention and
you feel something vague there.
And then you think, I thought, why the breath?
Why am I paying attention to the breath?
What's so special about the breath?
And then you notice your thinking and you're not paying attention to the breath anymore.
And then you realize, okay, the practice is, okay, I should notice thoughts and then I
should come back to the breath.
But this starting point is the conventional starting point of feeling like you are an
agent very likely in your head, a locus of consciousness, a locus of attention that can
strategically pay attention to certain parts of experience.
Like I can focus on the breath and then I get lost in thought and now I can come back
to the breath and I can open my eyes and I'm over here behind my face looking out at a
world that's other than me and there's this kind of subject-object perception.
And that is the default starting point of selfhood, of subjectivity.
And married to that is the sense that I can decide what to do next.
I am an agent who can pay attention to the cup.
I can listen to sounds.
There are certain things that I can't control, certain things are happening to me and I just
can't control them.
So for instance, if someone asks, well, can you not hear a sound, right?
Don't hear the next sound.
Don't hear anything for a second or don't hear, don't hear, I'm snapping my fingers,
don't hear this.
Where's your free will?
Just stop this from coming in.
You realize, okay, wait a minute, my abundant freedom does not extend to something as simple
as just being able to pay attention to something else than this.
Okay, well, so I'm not that kind of free agent, but at least I can decide what I'm going to
do next.
I'm going to pick up this water, right?
And there's a feeling of identification with the impulse, with the intention, with the
thought that occurs to you, with the feeling of speaking, like, you know, what am I going
to say next?
Well, I'm saying it.
And there goes, this is me, it feels like I'm the thinker, I'm the one who's in control.
But all of that is born of not really paying close attention to what it's like to be you.
And so this is where meditation comes in, or this is where, again, you can get at this
conceptually.
You can unravel the notion of free will just by thinking certain thoughts.
Because you can't feel that it doesn't exist unless you can pay close attention to how
thoughts and intentions arise.
So the way to unravel it conceptually is just to realize, okay, I didn't make myself, I
didn't make my genes, I didn't make my brain, I didn't make the environmental influences
that impinged upon this system for the last 54 years that have produced my brain in precisely
the state it's in right now, with all of the receptor weightings and densities,
and it's just I'm exactly the machine I am right now through no fault of my own as the
experiencing self.
I get no credit and I get no blame for the genetics and the environmental influences
here.
And yet those are the only things that contrive to produce my next thought or impulse or moment
of behavior.
And if you were going to add something magical to that clockwork, like an immortal soul,
you can also notice that you didn't produce your soul, right?
You can't account for the fact that you don't have the soul of someone who doesn't like
any of the things you like or wasn't interested in any of the things you were interested in
or was a psychopath or had an IQ of 40 or there's nothing about that that the person
who believes in a soul can claim to have controlled.
And yet that is also totally dispositive of whatever happens next.
But everything you've described now, maybe you can correct me, but it kind of speaks
to the materialistic nature of the hardware.
But even if you add magical ectoplasm software, you didn't produce that either.
I know, but if you can think about the actual computation running on the hardware and running
on the software, there's something you said recently, which is that you think of culture as an
operating system.
So if we just remove ourselves a little bit from the conception of human civilization
being a collection of humans and rather us just being a distributed computation system
on which there's some kind of operating system running and then the computation that's running
is the actual thing that generates the interactions, the communications and maybe even free will,
the experiences of free will of all those nodes.
Do you ever think of, do you ever try to reframe the world in that way where it's like ideas
are just using us, thoughts are using individual nodes in the system and they're just jumping
around and they also have ability to generate experiences so that we can push those ideas
along.
And basically, the main organisms here are the thoughts, not the humans.
Yeah, but then that erodes the boundary between self and world.
So then there's no self, no really integrated self, to have any kind of will at all.
If you're just a meme plex, if you're just a collection of memes and we're all kind
of like currents, like eddies in this river of ideas.
So it seems to have structure, but there's no real boundary between that part of the
flow of water and the rest.
And I would say that much of our mind answers to this kind of description.
So much of our mind has been, it's obviously not self-generated and you're not going to
find it by looking in the brain, it is the result of culture largely, but also the genes
on one side and culture on the other meeting to allow for manifestations of mind that aren't
actually bounded by the person in any clear sense.
Just the example I often use here, but there's so many others, is just the fact that we're
following the rules of English grammar to whatever degree we are.
We certainly haven't consciously represented these rules for ourselves.
We haven't invented these rules, there are norms of language use that we couldn't even
specify because we're not grammarians, we haven't studied this, we don't even have the
right concepts and yet we're following these rules and we notice it as an error when
we fail to follow these rules.
And virtually every other cultural norm is like that.
I mean, these are not things we've invented, you can consciously decide to scrutinize them
and override them, but just think of any social situation where you're with other people and
you're behaving in ways that are culturally appropriate, you're not being wild animals
together, you have some expectation of how you shake a person's hand and how you deal
with implements on a table, how you have a meal together, obviously this can change from
culture to culture and people can be shocked by how different those things are.
We all have foods we find disgusting, but in some countries dog is not one of those
foods and yet you and I presumably would be horrified to be served dog.
Those are not norms that we authored, they are outside of us in some way and yet they're felt very
viscerally, I mean they're certainly felt in their violation.
If you are, just imagine you're in somebody's home, you're eating something that tastes
great to you and you happen to be in Vietnam or wherever, you didn't realize dog was potentially
on the menu and you find out that you've just eaten 10 bites of what is really a Cocker
Spaniel and you feel an instantaneous urge to vomit based on an idea, you're not the
author of that norm that gave you such a powerful experience of its violation and I'm sure
we can trace the moment in your history vaguely where it sort of got in, I mean very early
on as kids you realize you're treating dogs as pets and not as food or as potential food.
But yeah, no it's, but the point you just made opens us to, like we are totally permeable
to a sea of mind.
Yeah, but if we take the metaphor of the distributed computing systems, each individual node is
part of performing a much larger computation, but it nevertheless is in charge of, assuming it's Linux, doing the scheduling of processes and is constantly alternating between them.
That node is making those choices.
That node sure as hell believes it has free will and actually has free will because it's
making those hard choices, but the choices ultimately are part of a much larger computation
that it can't control.
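A minimal sketch of that node-level "choice", assuming nothing beyond the analogy itself (the process names and priorities here are purely illustrative): a toy deterministic scheduler in Python. The node really is the thing deciding what runs next, yet given the same queue state it makes the same decision every time.

```python
# A toy deterministic scheduler: the "node" makes a real scheduling decision,
# but the decision is fixed entirely by state the node did not author.

from dataclasses import dataclass

@dataclass
class Process:
    pid: int
    priority: int   # set by inputs the node did not choose
    remaining: int  # units of work left

def pick_next(run_queue):
    # The node's "choice": fully determined by the current queue state.
    return max(run_queue, key=lambda p: (p.priority, -p.pid))

def run(run_queue, quantum=1):
    while run_queue:
        p = pick_next(run_queue)   # the decision really happens here...
        p.remaining -= quantum     # ...but it could not have gone otherwise
        print(f"ran pid={p.pid}, remaining={p.remaining}")
        if p.remaining <= 0:
            run_queue.remove(p)

run([Process(1, 5, 2), Process(2, 9, 1), Process(3, 5, 1)])
```

Given the same three processes, it produces the same run order on every execution; the decision is real, but it could not have gone otherwise.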
Isn't it possible that that node, that human node, is still making the choice?
It is.
So I'm not saying that your body isn't doing, really doing things, right?
And some of those things can be conventionally thought of as choices, right?
So it's like, I can choose to reach and it's like, it's not being imposed on me.
That would be a different experience.
Like so there's an experience of, there's definitely a difference between voluntary
and involuntary action.
So that has to get conserved by any account of the mind that jettisons free will.
You still have to admit that there's a difference between a tremor that I can't control and
a purposeful motor action that I can control and I can initiate on demand and it's associated
with intentions.
And it's got efferent motor copy, which is predictive, so that I can notice errors.
I have expectations.
When I reach for this, if my hand were actually to pass through the bottle because it's a
hologram, I would be surprised, right?
And so that shows that I have an expectation of just what my grasping behavior is going
to be like even before it happens.
Whereas with a tremor, you don't have the same kind of thing going on.
That's a distinction we have to make.
So yes, my intention to move, which in fact can be subjectively
felt, really is the proximate cause of my moving.
It's not coming from elsewhere in the universe.
I'm not saying that.
So in that sense, the node is really deciding to execute the subroutine now.
But that's not the feeling that has given rise to this conundrum of free will, right?
So the crucial thing is that people feel like they could have
done otherwise, right?
That's the thing.
So when you run back the clock of your life, run back the movie of your life, you flip
back the few pages in the novel of your life, they feel that at this point, they could behave
differently than they did, right?
But even given your distributed computing example, it's either a fully deterministic
system or it's a deterministic system that admits of some random influence.
In either case, that's not the free will people think they have.
The free will people think they have is, damn, I shouldn't have done that.
I shouldn't have done that.
I could have done otherwise, right?
I should have done otherwise, right?
If you think about something that you deeply regret doing, right?
Or that you hold someone else responsible for because they really are the upstream agent
in your mind of what they did, you know, that's an awful thing that that person did and they
shouldn't have done it.
There is this illusion and it has to be an illusion because there's no picture of causation
that would make sense of it.
There's this illusion that if you arrange the universe exactly the way it was a moment
ago, it could have played out differently.
And the only way it could have played out differently is if there's randomness added
to that, but randomness isn't what people feel would give them free will, right?
If you tell me that, you know, I only reached for the water bottle this time because there's
a random number generator in there kicking off values and it finally moved my hand, that's
not the feeling of authorship.
That's still not control.
You're still not making that decision.
There's actually, I don't know if you're familiar with cellular automata, that's a really nice
visualization of how simple rules can create incredible complexity: it's like really
dumb initial conditions, simple rules applied, and eventually you watch this thing
and if the rule, if the initial conditions are right, you're going to have something emerge
that to our perception system looks like organisms interacting.
You can construct all kinds of worlds, and they're not actually interacting, they're
not actually even organisms, and they certainly aren't making decisions.
So there's like systems you can create that illustrate this point.
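A minimal sketch of the kind of system being described, with Rule 30 chosen purely for illustration: a single live cell plus one simple local rule, applied over and over, prints a pattern that looks far richer than the rule itself, even though nothing in it is deciding anything.

```python
# Elementary cellular automaton: a "really dumb" initial condition plus one
# simple local rule, applied repeatedly, yields surprisingly rich structure.

RULE = 30
WIDTH, STEPS = 64, 32

def step(cells):
    out = []
    for i in range(len(cells)):
        left = cells[i - 1]                              # wraps around at the edges
        center = cells[i]
        right = cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (center << 1) | right    # a number from 0 to 7
        out.append((RULE >> pattern) & 1)                # look up that bit of the rule
    return out

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                                    # single live cell to start
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```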
The question is whether there could be some room for, let's use, even in the 21st century, the term magic, going back to the black box of consciousness.
Let me ask you this way.
If you're wrong about your intuition about free will, and somebody comes along
to you and proves to you that you didn't have the full picture, what would that proof look
like?
Well, that's the problem.
That's why it's not even an illusion in my world because for me, it's impossible to say
what the universe would have to be like for free will to be a thing.
It doesn't conceptually map on to any notion of causation we have and that's unlike any
other spurious claim you might make.
So like, if you're going to believe in ghosts, I understand what that claim could be or like
I don't happen to believe in ghosts, but it's not hard for me to specify what would
have to be true for ghosts to be real.
And so it is with a thousand other things like ghosts.
So you're telling me that when people die, there's some part of them that is not reducible
at all to their biology that lifts off them and goes elsewhere and it's actually the kind
of thing that can linger in closets and in cupboards and actually it's immaterial, but
by some principle of physics we don't totally understand.
It can make sounds and knock objects and even occasionally show up so they can be visually
beheld and it seems like a miracle, but it's just some spooky noun in the universe that
we don't understand.
Let's call it a ghost.
That's fine.
I can talk about that all day.
The reasons to believe in it, the reasons not to believe in it, the way we would scientifically
test for it, what would have to be provable so as to convince me that ghosts are real.
Free will isn't like that at all.
There's no description of any concatenation of causes that precedes my conscious experience
that sounds like what people think they have when they think they could have done otherwise
and that they really, that they, the conscious agent, is really in charge.
If you don't know what you're going to think next and you can't help but think it, take
those two premises on board.
You don't know what it's going to be.
You can't stop it from coming and until you actually know how to meditate, you can't stop
yourself from fully living out its behavioral or emotional consequences.
Like you have no, once mindfulness arguably gives you another degree of freedom here,
it doesn't give you free will, but it gives you some other game to play with respect to
the emotional and behavioral imperatives of thoughts, but short of that, the reason why
mindfulness doesn't give you free will is because you can't account for why in one moment
mindfulness arises and in other moments it doesn't, but a different process is initiated
once you can practice in that way.
If I could push back for a second.
By the way, I just have this thought bubble come popping up all the time of just two recent
chimps arguing about the nature of consciousness.
It's kind of hilarious.
So on that thread, you know, if we're even before Einstein, let's say before Einstein,
we were to conceive about traveling from point A to point B. Say some point in the future,
we are able to realize through engineering a way which is supposed, you know, it's consistent
with Einstein's theory that you can have wormholes.
You can travel from one point to another faster than the speed of light.
And that would, I think, completely change our conception of what it means to travel
in the physical space.
And that, like, completely transform our ability, you talk about causality, but here let's just
focus on what it means to travel through physical space.
Don't you think it's possible that there will be inventions or leaps in understanding about
reality that will allow us to see free will as actually, like us humans somehow may be
linked to this idea of consciousness, are actually able to be authors of our actions?
It is a non-starter for me conceptually.
It's a little bit like saying, could there be some breakthrough that will cause us to
realize that circles are really square or that circles are not really round, right?
No, a circle is what we mean by a perfectly round form, right?
It's not on the table to be revised.
And so I would say the same thing about consciousness is just like saying, is there some breakthrough
that would get us to realize that consciousness is really an illusion?
I'm saying no, because what the experience of an illusion is as much a demonstration
of what I'm calling consciousness as anything else, right?
That is consciousness.
With free will, it's a similar problem.
It's like, again, it comes down to a picture of causality and there's no other picture
on offer, and what's more, I know what it's like on the experiential side to lose the
thing to which it is clearly anchored, right?
Like it doesn't feel, and this is the question that almost nobody, people who are debating
me on the topic of free will, at 15-minute intervals I'm making a claim that I don't
feel this thing, and they never become interested in, well, what's that like?
Like, okay, you're actually saying you don't, this thing isn't true for you empirically.
It's not just, because most people who don't believe in free will philosophically also believe
that we're condemned to experience it, like you can't live without this feeling.
So you're actually saying you're able to experience the absence of the illusion of
free will for, we're talking about a few minutes at a time, or is this to require a lot of
work and meditation, or are you literally able to load that into your mind and play
that move?
Right now, right now, just in this conversation.
So it's not absolutely continuous, but it's whenever I pay attention.
It's like, and I would say the same thing for the illusoriness of the self, and again,
we haven't talked about this.
Can you still have the self and not have the free will in your mind at the same time?
No.
Do they go at the same time?
This is the same.
Yeah, it's the same thing.
They're always holding hands when they walk out the door.
They really are two sides of the same coin.
Okay.
So it comes down to what it's like to try to get to the end of the sentence, or what
it's like to finally decide that it's been long enough and now I need another sip of
water.
If I'm paying attention, now, if I'm not paying attention, I'm captured by some other thought
and that feels a certain way, and so it's not vivid.
But if I try to make vivid this experience of just, okay, I'm finally going to experience
free will.
I'm going to notice my free will.
It's got to be here.
Everyone's talking about it.
Where is it?
I'm going to pay attention to it.
I'm going to look for it.
And I'm going to create a circumstance that is where it has to be most robust.
I'm not rushed to make this decision.
It's not a reflex.
I'm not under pressure.
I'm going to take as long as I want.
I'm going to decide.
It's not trivial.
So it's not just like reaching with my left hand or reaching with my right hand.
People don't like those examples for some reason.
Just make a big decision like, what should my next podcast be on?
Who do I invite on the next podcast?
What is it like to make that decision?
When I pay attention, there is no evidence of free will anywhere in sight.
It doesn't feel like that. It feels profoundly mysterious to be going back and forth between two people, like
is it going to be person A or person B, that all my reasons for A and all my reasons why
not and all my reasons for B and that there's some math going on there that I'm not even
privy to where certain concerns are trumping others.
And at a certain point, I just decide.
And yes, you can say I'm the node in the network that has made that decision.
Absolutely.
I'm not saying it's being piped to me from elsewhere, but the feeling of what it's like
to make that decision is totally without a sense, a real sense of agency because something
simply emerges.
It's literally as tenuous as what's the next sound I'm going to hear, or what's the next
thought that's going to appear.
And something just appears.
And if something appears to cancel that something, like if I say I'm going to invite her and
then I'm about to send the email and I think, oh, no, no, no, I can't do that.
There was that thing in the New Yorker article I read that I got to talk to this guy.
That pivot at the last second, you can make it as muscular as you want.
It always just comes out of the darkness.
It's always mysterious.
So right, when you try to pin it down, you really can't ever find that free will.
If you construct an experiment for yourself and you're trying to really find that moment
when you're actually making that controlled author decision, it's very difficult to do.
And we're still, we know at this point that if we were scanning your brain in some podcast
guest choosing experiment, we know at this point we would be privy to who you're going
to pick before you are.
You the conscious agent.
If we could, again, this is operationally a little hard to conduct, but there's enough
data now to know that something very much like this cartoon is, in fact, true and will
ultimately be undeniable for people, they'll be able to do it on themselves with some app.
If you're deciding where to go for dinner or who to have on your podcast or ultimately
who to marry or what city to move to, you can make it as big or as small a decision
as you want.
We could be scanning your brain in real time and at a point where you still think you're
uncommitted, we would be able to say with arbitrary accuracy, all right, Lex is, he's
moving to Austin.
Right?
I didn't choose that.
Yeah.
It was going to be Austin or it was going to be Miami.
He's catching one of these two waves, but it's going to be Austin.
At a point where you subjectively, if we could ask you, you would say, oh, no, I'm still
working over here.
I'm still thinking, I'm still considering my options.
You've spoken to this, in your thinking about other stuff in the world, it's been very
useful to step away from this illusion of free will.
You argue that it probably makes a better world because you can be compassionate and empathetic
towards others.
And toward oneself.
Toward oneself.
I mean, radically toward others in that literally hate makes no sense anymore.
I mean, there are certain things you can really be worried about, really want to oppose.
I'm not saying you'd never have to kill another person.
Self-defense is still a thing, right?
But the idea that you're ever confronting anything other than a force of nature in the
end goes out the window, right?
Or it does go out the window when you really pay attention.
I'm not saying that this would be easy to grok if someone kills a member of your family.
I'm not saying you can just listen to my 90 minutes on free will and then you should be
able to see that person as identical to a grizzly bear or a virus because we are so evolved
to deal with one another as fellow primates and as agents.
But it's, yeah, when you're talking about the possibility of, you know, truly Christian
forgiveness, right, as testified to by various saints of that flavor over the millennia.
Yeah, the doorway to that is to recognize that no one really at bottom made themselves.
And therefore, everyone, what we're seeing really are differences in luck in the world.
We're seeing people who are very, very lucky to have had good parents and good genes and
been in good societies and had good opportunities and to be intelligent and to be, you know,
not sociopathic.
None of it is on them.
They're just reaping the fruits of one lottery after another and then showing up in the world
on that basis.
And then so it is with, you know, every malevolent asshole out there, right?
He or she didn't make themself.
Even if that weren't possible, the utility of self-compassion is also enormous because
it's when you just look at what it's like to regret something or to feel shame about
something or feel deep embarrassment about it.
These states of mind are some of the most deranging experiences anyone has.
And the indelible reaction to them, you know, the memory of the thing you said, the memory
of the wedding toast you gave 20 years ago that was just mortifying, right?
The fact that that can still make you hate yourself, right?
That psychologically, that is a knot that can be untied, right?
Speak for yourself, Sam.
You gave a great toast, it was my toast that mortified.
That's not what I was referring to.
I'm deeply appreciative in the same way that you're referring to of every moment I'm alive,
but I'm also powered by self-hate often.
Like several things in this conversation already that I've spoken, I'll be thinking about,
like that was the dumbest thing, you're sitting in front of Sam Harris and you said that.
I feel like that, but that somehow creates a richer experience for me.
I've actually come to accept that as a nice feature of however my brain was built.
I don't think I want to let go of that.
Well, the thing you, I think the thing you want to let go of is the suffering associated
with it.
So, like, so for me, so it's just very psychologically and ethically all of this is very interesting.
I don't think we ever, we should ever get rid of things like anger, right?
So like hatred is, hatred is divorceable from anger in the sense that hatred is this enduring
state where, you know, whether you're hating somebody else or hating yourself, it is just,
it is toxic and durable and ultimately useless, right?
Like it becomes self-nullifying, right?
Like you become less capable as a person to solve any of your problems.
It's not instrumental in solving the problem that is occasioning all this hatred.
And anger, for the most part, isn't either except as a signal of salience that there's
a problem, right?
So, if somebody does something that makes me angry, that just promotes this situation
to conscious attention in a way that is stronger than it would be if I didn't really care about it, right?
And there are things that I think should make us angry in the world and there's the behavior
of other people that should make us angry because we should respond to it.
And so it is with yourself.
If I do something, you know, as a parent, if I do something stupid that harms one of
my daughters, right, my experience of myself and my beliefs about free will close the door
to my saying, well, I should have done otherwise in the sense that if I could go back in time,
I would have actually effectively done otherwise.
No, I would do, given the same causes and conditions.
I would do that thing a trillion times in a row, right?
But, you know, regret and feeling bad about an outcome are still important to capacities
because I desperately want my daughters to be happy and healthy.
So if I've done something, you know, if I crash the car when they're in the car and
they get injured, right, and I do it because I was trying to change a song on my playlist
or something stupid, I'm going to feel like a total asshole.
How long do I stew in that feeling of regret, right?
And what utility is there to extract out of this error signal?
And then what do I do?
We're always faced with the question of what to do next, right?
And how to best do that thing, that necessary thing next.
And how much well-being can we experience while doing it?
How miserable do you need to be to solve your problems in life and to help solve the problems
of people closest to you?
How miserable do you need to be to get through your to-do list today?
Ultimately, I think you can be deeply happy going through all of it, right, and even navigating
moments that are scary and, you know, really destabilizing to ordinary people.
And again, I'm always up at the edge of my own capacities here, and there are all kinds
of things that stress me out and worry me, and I'm especially something, if it's, you
know, you're going to tell me it's something with the health of one of my kids, you know,
it's very hard for me, like, it's very hard for me to be truly equanimous around that.
But equanimity is so useful the moment you're in response mode, right, because the ordinary
experience for me of responding to what seems like a medical emergency for one of my kids
is to be obviously super energized by concern to respond to that emergency.
But then once I'm responding, all of my fear and agitation and worry and, oh my God, what
if this is really something terrible?
But finding any of those thoughts compelling, all of that only diminishes my capacity as a father
to be good company while we navigate this really turbulent passage, you know.
As you're saying this, actually, one guy comes to mind, which is Elon Musk, one of the really
impressive things to me was to observe how many dramatic things he has to deal with throughout
the day at work.
But also if you look through his life, family too, and how he's very much actually, as you're
describing basically a practitioner of this way of thought, which is you're not in control.
You're basically responding, no matter how traumatic the event, and there's no reason
to sort of linger on the negative feelings around that.
Well, so, but he's in a very specific situation, which, even if it's normal life for him, is unlike normal life for most people.
Because when you just think of like, he's running so many businesses and they're highly
non-standard businesses.
So what he's seeing is, everything that gets to him is some kind of emergency, or it wouldn't be getting to him.
If it needs his attention, there's a fire somewhere.
So he's constantly responding to fires that have to be put out.
So there's no default expectation that there shouldn't be a fire.
But in our normal lives, we live, most of us who are lucky, not everyone obviously on
earth, but most of us who are at some kind of cruising altitude in terms of our lives
where we're reasonably healthy and life is reasonably orderly and the political apparatus
around us is reasonably functional.
So I said "functional" for the first time in my life through no free will of my own.
So like I notice those errors and they do not feel like agency, and nor does the success
of an utterance feel like agency.
But when you're looking at normal human life, where you're just trying to be happy and healthy
and get your work done, there's this default expectation that there shouldn't be fires.
People shouldn't be getting sick or injured.
We shouldn't be losing vast amounts of our resources.
So when something really stark like that happens, people don't have that muscle where they're like, I've been responding to emergencies all day long, you know, seven days a week in business mode.
And so I have a very thick skin.
This is just another one.
It's like, I'm not expecting anything else when I wake up in the morning.
No, we have this default sense that, I mean, honestly, most of us have the default sense
that we aren't going to die, right?
Or that we should, like maybe we're not going to die, right?
Like death denial really is a thing, you know, we're, because, and you can see it just like
I can see when I reach for this bottle that I was expecting it to be solid because when
it isn't solid, when it's a hologram and I just, my fist closes on itself, I'm damn surprised.
People are damn surprised to find out that they're going to die, to find out that they're
sick, to find out that someone they love has died or is going to die.
So it's like the fact that we are surprised by any of that shows us that we're living
in a mode that is, you know, we're perpetually diverting ourselves from some facts that should
be obvious, right?
And that, and the more salient we can make them, you know, the more, I mean, in the case
of death, it's a matter of being able to get one's priorities straight.
I mean, the moment, again, this is hard for everybody, even those who are really in the
business of paying attention to it.
But the moment you realize that every circumstance is finite, right, you've got a certain number
of, you know, you've got whatever, whatever it is, 8,000 days left in a normal span of
life.
And 8,000 is a, sounds like a big number, it's not that big a number, right?
So it's just like, and then you, then you can decide how you want to go through life
and how you want to experience each one of those days.
And so, to go back to our jumping off point, I would argue that you don't want
to feel self-hatred ever, I would argue that you don't want to really, really grasp on
to any of those moments where you, where you are taking, internalizing the fact that you
just made an error, you've embarrassed yourself, that something didn't go the way you wanted
it to.
I think you want to, you want to treat all of those moments very, very lightly.
You want to extract the actionable information.
It's something to learn.
Oh, you know, I learned that when I prepare in a certain way, it works better than when I prepare in some other way or don't prepare, right?
Like, so like, yes, lesson learned, you know, and do that differently.
But yeah, I mean, so many of us have spent so much time with a very dysfunctional and hostile and even hateful inner voice governing a lot of our self-talk and a lot of just our default way of being with ourselves. I mean, in the privacy of our own minds, we're in the company of a real jerk a lot of the time, and that can't help but affect, I mean, forget about just your own sense of well-being.
It can't help but limit what you're capable of in the world with other people.
I'll have to really think about that.
I just take pride that my jerk, my inner voice jerk is much less of a jerk than like
somebody like David Goggins, who's just like screaming in his ear constantly.
So I just have a relative kind of perspective, that it's not as bad as that at least.
Well, having a sense of humor also helps, you know, it's just like it's not, the stakes
are never quite what you think they are.
And even when they are, I mean, it's just the difference between seeing, being able
to see the comedy of it rather than, because again, there's this sort of dark star of self-absorption
that pulls everything into it, right?
And if that's the, that's the algorithm, that's the algorithm you don't want to run.
So it's like, you just want, you just want things to be good.
So like just push, push the concern out there, like not have the collapse of, oh my God,
what does this say about me?
It's just like, let's, what does this say about, how do we make this meal that we're
all having together as, as, as fun as possible and as useful as possible?
Then you're saying in terms of propulsion systems, you recommend humor as a good spaceship
to escape the gravitational field of that darkness.
Well, that certainly helps.
Yeah.
Yeah.
Well, let me ask you a little bit about ego and fame, which is very interesting, the way
you're talking, given that you're one of the biggest intellects, living intellects and
minds of our time.
And there's a lot of people that really love you and almost elevate you to a certain kind
of status where you're like the guru.
I'm surprised you didn't show up in a robe, in fact.
Isn't the hoodie the highest status garment one can wear now?
The socially acceptable version of the robe.
If you're a billionaire, you wear a hoodie.
Is there something you can say about managing the effects of fame on your own mind, on
the, not creating this, you know, when you wake up in the morning, when you look in
the mirror, how do you get your ego not to grow exponentially, your conception of self
to grow exponentially?
Because there's so many people feeding that.
Is there something to be said about this?
It's really not hard because I mean, I feel like I have a pretty clear sense of my strengths
and weaknesses and I, I don't feel like it's, honestly, I don't feel like I suffer from
much grandiosity.
I mean, I just have a, you know, there's so many things I'm not good at.
There's so many things I won't, you know, given the, the remaining 8,000 days at best,
I will never get good at.
I would love to be good at these things.
So it's just, it's easy to feel diminished by comparison with the, the talents of others.
Do you remind yourself of all the things that you're not competent in?
Is it, I mean, they're just on display for me every day that I appreciate the, the talents
of others.
But you notice them.
I'm sure Stalin and Hitler did not notice all the ways in which they were, I mean, this
is why absolute power corrupts absolutely: you stop noticing the ways in which you're
ridiculous and wrong.
Right.
Yeah.
No, I am.
Not to compare you to Stalin.
Yeah.
Yeah.
Well, I'm sure there's an inner Stalin in there somewhere.
Well, we all have.
But hopefully he wears better clothes and I'm not going to grow that mustache.
Those concerns don't map on, they don't map onto me for a bunch of reasons, but one is
I also have a very peculiar audience.
I'm just, I've been appreciating this for a few years, but it's, I'm just now beginning
to understand that there are many people who have audiences of my size or larger that have
a very different experience of having an audience than I do.
I have, I have curated for better or worse, a peculiar audience.
And the net result of that is virtually any time I say anything of substance, something
like half of my audience, my real audience, not haters from outside my audience, but my
audience, just revolts over it.
Right?
They just like, oh my God, I can't believe you said it.
Like you, you're such a schmuck, right?
They revolt with rigor and intellectual sophistication.
Which is great.
Or not, or not.
I mean, it's cool, but it's like, but people who are like, so it's, I mean, the clearest
case is, you know, I have an audience, I have whatever audience I have and then Trump appears
on the scene and I discovered that something like 20% of my audience just went straight
to Trump and couldn't believe I didn't follow them there.
They were just a gas that I didn't see that Trump was obviously exactly what we needed
for, for to steer the ship of state for the next four years and then four years beyond
that.
So, like, so that's one example.
So whenever I said anything about Trump, I would hear from people who loved more or
less everything else I was up to and had for years, but everything I said about Trump
just gave me pure pain from this, this quadrant of my audience.
But then say the same thing happens when I say something about the derangement of the
far left, anything I say about wokeness, right, or identity politics, same kind of
punishment signal.
Again, people who are core to my audience, like I've read all your books, I'm using
your meditation app, I love what you say about science, but you are so wrong about politics
and you're, you know, I'm starting to think you're a racist asshole for everything you
said about, about identity politics.
And there are so many, the free will topic is just like this.
It's like, I just, they love what I'm saying about consciousness and the mind and they love
to hear me talk about physics with physicists and it's all good.
This free will stuff is like, I cannot believe you don't see how wrong you are.
What a fucking embarrassment you are.
So, but I'm starting to notice that there are other people who don't have this experience
of having an audience because they have, I mean, just take the Trump woke dichotomy.
They just castigated Trump the same way I did, but they never say anything bad about
the far left.
They never get this punishment signal, or you flip it, they're all about the insanity of critical race theory now, we connect all those dots the same way, but they never really specified what was wrong with Trump or they thought there was a lot right with Trump and they got all the pleasure of that.
And so they have much more homogenized audiences.
And so my experience is, just to come back to, you know, this experience of fame or quasi-fame, and in truth it's not real fame, but still, there's an audience there.
It's now an experience where basically whatever I put out, I notice a ton of negativity coming back at me, and it just is what it is.
I mean, now, now it's like, I used to think, wait a minute, there's got to be some way
for me to communicate more clearly here, so as not to get this kind of lunatic response
from my own audience, from people who are showing all the signs of, we've been here for years for a reason, right?
These are not just trolls.
And so I think, okay, I'm going to take 10 more minutes and really just tell you what
it should be absolutely clear about what's wrong with Trump, right?
I've done this a few times, but I got, I think I got to do this again.
Or wait a minute, how are they not getting that these episodes of police violence are
so obviously different from one another that you can't ascribe all of them to, you know,
yet another racist maniac on the police force, you know, killing someone based on his racism.
Last time I spoke about this, it was pure pain, but I just got to try again.
Now at a certain point, I mean, I'm starting to feel like, all right, I have to cease to, again, it comes back to this expectation that there shouldn't be
fires.
Right.
Like I feel like if I could just play my game impeccably, the people who actually care
what I think will follow me when I hit Trump and hit free will and hit the woke and hit
whatever it is, how we should respond to the coronavirus, you know, vaccines, you know,
are they a thing?
Right.
Like there's such derangement in our information space now that, I mean, I guess, you know,
some people could be getting more of this than I expect, but I just noticed that, you
know, many of our friends who are in the same game have more homogenized audiences and don't
get, I mean, they've successfully filtered out the people who are going to despise them
on this next topic and I, you know, I would, I would imagine you are, have a different
experience of having a podcast than I do at this point.
I mean, I'm sure you get haters, but I would imagine you're, you're more streamlined.
I actually don't like the word haters because it kind of presumes that it puts people in
a bin.
I think we all have like baby haters inside of us and we just apply them and some people
enjoy doing that more than others for particular periods of time.
I think you can almost see hating on the internet as a video game that you just
play and it's fun, but then you can put it down and walk away.
And no, I certainly have a bunch of people that are very critical.
I can list all the ways.
But does it feel like it's on any given topic, does it feel like it's an actual tidal surge
where it's like 30% of your audience and then the other 30% of your audience from podcast
to podcast?
No.
You mean to me all the time now?
Well, I'm more with, I don't know what you think about this.
I mean, Joe Rogan doesn't read comments or doesn't read comments much.
And the argument he made to me is that he already has like a self-critical person inside.
Right, right.
Like, and I, I'm going to have to think about what you said in this conversation, but I have
this very harshly self-critical person inside as well.
Yeah, I do.
I don't need more fuel.
I don't need, no, I do sometimes, that's why I check negativity occasionally, not too
often.
I sometimes need to like put a little bit more like coals into the fire, but not too much.
But I already have that self-critical engine that keeps me in check.
I just, I wonder, you know, a lot of people who gain more and more fame lose that ability
to be self-critical.
I guess because they lose the audience that can be critical towards them.
You know, I do follow Joe's advice much more than I ever have here.
Like I don't look at comments very often and I'm probably using Twitter, you know, 5% as
much as I used to.
I mean, I really just get in and out on Twitter and spend very little time in my @ mentions.
But, you know, it does, in some ways it feels like a loss because occasionally I get, I
see something super intelligent there.
Like, I mean, I'll check my Twitter @ mentions and someone will have said, oh, have you read
this article?
And it's like, man, that was just, that was like the best article sent to me in a month,
right?
So it's like, to have not have looked and to not have seen that, that's a loss.
So, but it does, at this point a little goes a long way because it's not that it, for me
now, I mean, this could sound like a fairly Stalinistic immunity to criticism.
It's not so much that these voices of hate turn on my inner hater, you know, more.
It's more that I just, I get a, what I fear is a false sense of humanity.
Like I feel like I'm too online and online is selecting for this performative outrage
in everybody.
You know, signaling to an audience when they trash you.
And I'm getting a dark, you know, misanthropic cut of just what it's like out there.
And it, because when you meet people in real life, they're great, you know, they're rather
often great, you know, and it takes a lot to have anything like a Twitter encounter
in real life with a living person.
And that's, I think it's much better to have that as one's default sense of what it's like
to be with people than what one gets on social media or on YouTube comment threads.
You've produced a special episode with Rob Reed on your podcast recently on how bioengineering
of viruses is going to destroy human civilization.
So.
Or could.
One peers.
Sorry.
The confidence there.
But in the 21st century, what do you think, especially after having thought through that
angle, what do you think is the biggest threat to the survival of the human species?
I can give you the full menu if you'd like.
Yeah.
Well, no, I would put the biggest threat at another level out: kind of the meta threat is our inability to agree about what the threats actually are and to
converge on strategies for responding to them, right?
So like I view COVID as, among other things, a truly terrifyingly failed dress rehearsal
for something far worse, right?
I mean, COVID is just about as benign as it could have been and still have been worse
than the flu when you're talking about a global pandemic, right?
So it's just, it's, you know, it's going to kill a few million people; it looks
like it's killed about three million people.
Maybe it'll kill a few million more unless something gets away from us with a variant
that's much worse or we really don't play our cards right.
But the general shape of it is it's got, you know, somewhere around, well, 1% lethality
and whatever side of that number it really is on in the end, it's not what would in fact be possible and is in fact probably inevitable: something with orders of magnitude more lethality
than that.
But it's just so obvious we are totally unprepared, right?
We are running this epidemiological experiment of linking the entire world together and then
also now per the podcast that Rob Reed did, democratizing the tech that will allow us
to engineer pandemics, right?
And more and more people will be able to engineer synthetic viruses that will be by the sheer
fact that they would have been engineered with malicious intent, you know, worse than
COVID.
And we're still living in, you know, to speak specifically about the United States.
We have a country here where we can't even agree that this is a thing, you know, like
that COVID, I mean, there's still people who think that this is basically a hoax designed
to control people.
And it's stranger still, there are people who will acknowledge that COVID is real and
they'll look, they don't think the deaths have been faked or misascribed.
But they're far happier with the prospect of catching COVID than they are
of getting vaccinated for COVID, right?
They're not worried about COVID, they're worried about vaccines for COVID, right?
And the fact that we just can't converge in a conversation that we've now had a year to have with one another on just what is the ground truth here?
What's happened?
Why has it happened?
What's the, how safe is it to get COVID in every cohort in the population?
And how safe are the vaccines?
And the fact that there's still an air of mystery around all of this for much of our
society does not bode well when you're talking about solving any other problem that may yet
kill us.
But do you think convergence grows with the magnitude of the threat?
It's possible except, I feel like we have tipped into, because when the threat of COVID
looked the most dire, right, when we were seeing reports from Italy that looked like
the beginning of a zombie movie, right?
Because it could have been much, much worse.
Yeah, this is lethal, right?
Your ICUs are going to fill up and you're 14 days behind us, your medical system is in
danger of collapse, lock the fuck down.
We have people refusing to do anything sane in the face of that.
People fundamentally thinking, it's not going to get here, right?
It's like, who knows what's going on in Italy, but it has no implications for what's going
to go on in New York in a mere six days, right?
And now it kicks off in New York and you've got people in the middle of the country thinking
it's no factor, it's not, that's just big city.
Those are big city problems or they're faking it or, I mean, the layer of politics has become
so dysfunctional for us that even in the presence of a pandemic that looked legitimately scary
there in the beginning, I mean, it's not to say that it hasn't been devastating for everyone
who's been directly affected by it and it's not to say it can't get worse.
But here, for a very long time we have known that we were in a situation that is more
benign than what seemed like the worst case scenario as it was kicking off, especially
in Italy.
And so still, yeah, it's quite possible that if we saw the asteroid hurtling toward Earth
and everyone agreed that it's going to make impact and we're all going to die, then we
could get off Twitter and actually build the rockets that are going to divert the asteroid
from its Earth crossing path and we could do something pretty heroic.
But when you talk about anything else that's slower moving than that, I mean, something
like climate change, I think the prospect of our converging on a solution to climate
change purely based on political persuasion is non-existent at this point.
I just think to bring Elon back into this, the way to deal with climate change is to
create technology that everyone wants that is better than all the carbon producing technology.
And then we just transition because you want an electric car the same way you wanted a
smartphone or you want anything else and you're working totally with the grain of people's
selfishness and short-term thinking.
The idea that we're going to convince the better part of humanity, that climate change
is an emergency, that they have to make sacrifices to respond to.
Given what's happened around COVID, I just think that's the fantasy of a fantasy.
But speaking of Elon, I have a bunch of positive things that I want to say here in response
to you, but you're opening so many threads, but let me pull one of them, which is AI.
Both you and Elon think that with AI, you're summoning demons, summoning a demon, maybe
not in those poetic terms, but-
Well, potentially.
Potentially.
Two, or really three, very parsimonious assumptions, I think, scientifically parsimonious assumptions
get me there.
Many of which could be wrong, but it just seems like the weight of the evidence is on
their side.
One is that it comes back to this topic of substrate independence.
Anyone who's in the business of producing intelligent machines must believe ultimately
that there's nothing magical about having a computer made of meat.
You can do this in the kinds of materials we're using now, and there's no special something
that presents a real impediment to producing human-level intelligence in silico.
Again, an assumption, I'm sure there are a few people who still think there is something
magical about biological systems, but leave that aside.
Given that assumption, and given the assumption that we just continue making incremental progress,
it doesn't have to be Moore's law, it just has to be progress that just doesn't stop.
At a certain point, we'll get to human-level intelligence and beyond.
Human-level intelligence I think is also clearly a mirage because anything that's human-level
is going to be superhuman unless we decide to dumb it down.
My phone is already superhuman as a calculator, so why would we make the human-level AI just
as good as me as a calculator?
If we continue to make progress, we will be in the presence of superhuman competence
for any act of intelligence or cognition that we care to prioritize.
It's not to say that we'll create everything that a human could do, maybe we'll leave certain
things out, but anything that we care about, and we care about a lot, and we certainly
care about anything that produces a lot of power, that we care about scientific insights
and the ability to produce new technology and all of that, we'll have something that's
superhuman.
The final assumption is just that there have to be ways to do that that are not aligned
with a happy coexistence with these now more powerful entities than ourselves.
I would guess, and this is a kind of a rider to that assumption, there are probably more
ways to do it badly than to do it perfectly.
That is, perfectly aligned with our well-being.
When you think about the consequences of non-alignment, when you think about you're now in the presence
of something that is more intelligent than you are, which is to say more competent, unless
you've... Obviously, there are cartoon pictures of this where we could just... There's just
an off switch, and we could just turn off the off switch, or they're tethered to something
that makes them our slaves in perpetuity, even though they're more intelligent.
Those scenarios strike me as a failure to imagine what is actually entailed by greater
intelligence.
If you imagine something that's legitimately more intelligent than you are, and you're
now in relationship to it, you're in the presence of this thing, and it is autonomous in all
kinds of ways because it had to be to be more intelligent than you are.
You built it to be all of those things.
We just can't find ourselves in a negotiation with something more intelligent than we are.
We have to have found the subset of ways to build these machines that are perpetually
amenable to our saying, oh, that's not what we meant.
That's not what we intended.
Could you stop doing that?
Just come back over here and do this thing that we actually want.
For them to care, for them to be tethered to our own sense of our own well-being such
that their utility function, their primary utility function is, this is, I think, Stuart
Russell's cartoon plan is to figure out how to tether them to a utility function that
has our own estimation of what's going to improve our well-being as its master reward.
So, this thing can get as intelligent as it can get, but it only ever really wants to
figure out how to make our lives better by our own view of better.
Not to say there wouldn't be a conversation about all kinds of things we're not seeing
clearly about what is better, and if we were in the presence of a genie or an oracle that
could really tell us what is better, well, then we presumably would want to hear that
and we would modify our sense of what to do next in conversation with these minds.
But I just feel like it is a failure of imagination to think that being in relationship to something
more intelligent than yourself isn't in most cases a circumstance of real peril, because
it is.
Just to think about how everything on earth has to, if they could think about their relationship
to us, if birds could think about what we're doing, the bottom line is they're always in
danger of our discovering that there's something we care about more than birds, right?
But there's something we want that disregards the well-being of birds, and obviously much
of our behavior is inscrutable to them.
Occasionally we pay attention to them, and occasionally we withdraw our attention, and
occasionally we just kill them all for reasons they can't possibly understand.
But if we're building something more intelligent than ourselves, by definition, we're building
something whose horizons of value and cognition can exceed our own.
And in ways where we can't necessarily foresee, again perpetually, that they don't just wake
up one day and decide, okay, these humans need to disappear.
So I think I agree with most of the initial things you said. What I don't necessarily
agree with, and of course nobody knows, is this: the more likely set of trajectories that we're
going to take are going to be positive, that's what I believe.
In the sense that I believe the way you develop successful AI systems
will be deeply integrated with human society, and for them to succeed, they're going to
have to be aligned the way we humans are aligned with each other. Which doesn't mean
we're perfectly aligned; I don't see that there's such a thing as perfect
alignment. But they're going to be participating in the dance, in the game-theoretic dance
of human society, as they become more and more intelligent.
There could be a point beyond which we are like birds to them.
But what about an intelligence explosion of some kind?
So I believe the explosion will be happening, but there's a lot of explosion to be done
before we become like birds.
I truly believe that human beings are very intelligent in ways we don't understand. It's
not just about chess, it's about all the intricate computation we're able to perform:
common sense, our ability to reason about this world, consciousness.
I think we're doing a lot of work that we don't realize is necessary in order
to truly achieve superintelligence.
I just think there'll be a period of time; it's not overnight.
It will not literally be overnight, it'll be over a period of decades.
So my sense is-
Why would it be that way? Just draw an analogy from recent successes, something
like AlphaGo or AlphaZero. I forget the actual metric, but it was something like this: the algorithm,
which wasn't even totally bespoke for chess playing,
in the matter of, I think it was four hours, played itself so many times and so successfully
that it became the best chess-playing computer. It was not only better than
every human being, it was better than every previous chess program, in a matter of a day.
So just imagine, again, we don't have to recapitulate everything about us, but just imagine building
a system, and who knows when we'll be able to do this, but at some point
our hundred favorite things about human cognition will be analogous to
chess, in that we will be able to build machines that very quickly outperform any human and
then very quickly outperform the last algorithm that outperformed the humans.
Something like the AlphaGo experience seems possible for facial recognition and detecting
human emotion and natural language processing.
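As a toy illustration of the self-play point, here is a minimal sketch in Python: a tabular learner that plays the game of Nim against itself and, with no human examples at all, converges on the known optimal strategy. This is an assumption-laden toy, not AlphaZero's actual method (which combines deep networks with Monte Carlo tree search); it only shows how pure self-play can bootstrap a player past its own starting level.

# One-pile Nim: players alternately take 1-3 stones; whoever takes the last stone wins.
# Both "players" share one Q-table, so the system literally learns by playing itself.
import random

Q = {}

def q(pile, take):
    return Q.get((pile, take), 0.0)

def legal(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def train(episodes=20000, start=21, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        pile = start
        while pile > 0:
            acts = legal(pile)
            a = random.choice(acts) if random.random() < eps else max(acts, key=lambda x: q(pile, x))
            nxt = pile - a
            # Taking the last stone wins; otherwise the value is minus the opponent's
            # best reply (negamax), since the next move belongs to the other player.
            target = 1.0 if nxt == 0 else -max(q(nxt, b) for b in legal(nxt))
            Q[(pile, a)] = q(pile, a) + alpha * (target - q(pile, a))
            pile = nxt

if __name__ == "__main__":
    train()
    # Known optimal play is to leave the opponent a multiple of 4; from 21 that means taking 1.
    print("learned move from a pile of 21:", max(legal(21), key=lambda x: q(21, x)))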
Everyone, even math people, math heads, tends to have bad intuitions for exponentiation.
We noticed this during COVID: we had some very smart people who still couldn't get their
minds around the fact that an exponential is really surprising.
I mean, things double and double and double and double again and you don't notice much
of anything changes and then the last two stages of doubling swamp everything.
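To make that arithmetic concrete, a tiny sketch (the numbers are arbitrary):

# Ten doublings starting from 1: 1, 2, 4, ..., 1024.
values = [2 ** k for k in range(11)]
total_growth = values[-1] - values[0]          # 1023
last_two_doublings = values[-1] - values[-3]   # 1024 - 256 = 768
print(last_two_doublings / total_growth)       # ~0.75: the last two doublings dominate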
It just seems like a mistake to assume that there isn't a deep analogy between what we're seeing
for the more tractable problems like chess and other modes of cognition; it's like, once
you crack that problem... Because for the longest time, it seemed impossible
that we were going to make headway in AI.
Chess and Go were seen as impossible.
Go seemed unattainable.
Even when chess had been cracked, Go seemed unattainable.
Yeah, and actually, Stuart Russell was behind the people that were saying it's unattainable
because it seemed like an intractable problem.
But there's something different about the space of cognition that's detached from human
society, which is what chess is, meaning just thinking; having actual exponential impact
on the physical world is different.
I tend to believe that for AI to get to the point where it's superintelligent,
it's going to have to go through the funnel of society, and for that, it has to be deeply
integrated with human beings, and for that, it has to be aligned.
What do you mean? Are you talking about actually hooking us up, like with Neuralink, so we're
going to be the brainstem to the robot overlords?
That's a possibility as well.
What I mean is, in order to develop autonomous weapons systems, for example, which are highly
concerning to me, and which both the US and China are developing now, and in order for them
to take on more and more responsibility to actually carry out military
strategic actions, they're going to have to be integrated with the human beings doing the
strategic action.
They're going to have to work alongside each other, and the way those systems will
be developed will have natural safety switches placed on them as they
develop over time, because ultimately they're going to have to convince humans that this is safer than humans.
They're going to, you know, they're-
Well, self-driving cars are a good test case here because obviously, we've made a lot
of progress and we can imagine what total progress would look like.
I mean, it would be amazing; it's canceling, in the US, 40,000 deaths every
year from ape-driven cars, right?
So it's an excruciating problem that we've all gotten used to because there was no alternative.
But now that we can dimly see the prospect of an alternative, which if it works in a
super intelligent fashion, maybe we go down to zero highway deaths, right?
Or, you know, certainly we go down by orders of magnitude, right?
So maybe we have, you know, 400 rather than 40,000 a year.
And it's easy to see that... so obviously this is not an example of super
intelligence.
It's a narrow intelligence, but the alignment problem isn't so obvious there.
But there are potential alignment problems there.
Like, just imagine if some woke team of engineers decided that we have to tune
the algorithm some way.
I mean, there are situations where the car has to decide who to hit.
I mean, there's just bad outcomes where you're going to hit somebody, right?
Now we have a car that can tell what race you are, right?
So we're going to build the car to preferentially hit white people because white people have
had so much privilege over the years.
This seems like the only ethical way to kind of redress those wrongs in the past.
That's something that could get produced as an artifact, presumably,
of just how you built it.
And you didn't even know you engineered it that way, right?
You caused it.
Machine learning.
You put some kind of constraints on it to create those kinds of outcomes.
You basically built a racist algorithm and you didn't even intend to.
Or you could intend to, right?
And it would be aligned with some people's values, but misaligned with other people's
values.
But it's like there are interesting problems even with something as simple and obviously
good as self-driving cars.
But there's a leap there; exactly, those are human problems.
I just don't think there would be a leap with autonomous vehicles.
First of all, sorry, there are a lot of trajectories which would destroy human civilization.
The argument I'm making is that it's more likely we'll take trajectories that don't.
So I don't think there will be a leap where autonomous vehicles all of a sudden start
murdering pedestrians because, once every human on earth is dead, there will be no more fatalities;
that sort of unintended consequence. It's difficult to take that leap.
Most systems, as we develop them, will become much, much more intelligent in ways that will
be incredibly surprising, like the stuff that DeepMind is doing with protein folding.
Even the things that are scary to think about, and I'm personally terrified about this, like
the engineering of viruses using machine learning, the engineering of vaccines using
machine learning, the engineering of pathogens for research purposes using machine learning, and the
ways that can go wrong.
I just think that there's always going to be closed-loop supervision by humans before
they become superintelligent.
Not always, but much more likely there will be supervision, except, of course, the question is how many
dumb people are in the world, how many evil people are in the world.
My hope, my sense, is that the number of intelligent people is much higher than the
number of dumb people who know how to program and the number of evil people.
I think smart people and kind people outnumber the others.
But we also, we have to add another group of people, which are just the smart and otherwise
good but reckless people, right?
The people who will flip a switch on not knowing what's going to happen, they're just kind
of hoping that it's not going to blow up the world.
We already know that some of our smartest people are those sorts of people.
We know we've done experiments, and this is something that Martin Rees was whingeing
about before the Large Hadron Collider got booted up, I think.
We know there are people who are entertaining experiments or even performing experiments
where there's some chance, not quite infinitesimal, that they're going to create a black hole
in the lab and suck the whole world into it.
That's not, you're not a crazy person to worry about that based on the physics.
And so it was with the Trinity test, there were some people who were still checking their
calculations, and they were off, we did nuclear tests where we were off significantly in terms
of the yield, right?
So it was like-
And they still flip the switch.
Yeah, they still flip the switch.
And sometimes they flip the switch not to win a world war or to save 40,000 lives a year.
They just-
Just to see what happens.
Intellectual curiosity.
Yeah, this is what I got my grant for.
This is where I'll get my Nobel Prize if that's in the cards.
It's on the other side of this switch, right?
And again, we are apes with egos who are massively constrained by very short-term self-interest,
even when we're contemplating some of the deepest and most interesting and most universal
problems we could ever set our attention towards.
Like, just if you read James Watson's book, The Double Helix, right, about them cracking
the structure of DNA, one thing that's amazing about that book is just how much of it, almost
all of it, is being driven by very apish, egocentric social concerns.
The algorithm that is producing this scientific breakthrough is human competition, if you're
James Watson.
Right?
It's like, I'm going to get there before Linus Pauling, and it's just so much of his
bandwidth is captured by that, right?
Now, that becomes more and more of a liability when you're talking about producing technology
that can change everything in an instant, when you're talking about not only understanding
... We're just at a different moment in human history.
When we're doing research on viruses, we're now doing the kind of research that can cause
someone somewhere else to be able to make that virus or weaponize that virus, or it's
just... I don't know.
Our power is... It does not seem like our wisdom is scaling with our power, right?
That seems like in so far, as wisdom and power become unaligned, I get more and more concerned.
But speaking of apes with egos, two of the most compelling apes I
can think of are yourself and Jordan Peterson, and you've had fun conversations about religion
that I watched most of, I believe. I'm not sure there was any... We didn't solve anything.
If anything was ever solved.
Is there something like a charitable summary you can give to the ideas that you agree on
and disagree with, Jordan?
Is there something, maybe after that conversation, that you've landed on, that maybe you both
agreed on? Is there some wisdom in the rubble of even imperfect, flawed ideas?
Is there something that you can pull out from those conversations, or is there to be continued?
I think where we disagree... He thinks that many of our traditional religious beliefs
and frameworks are holding such a repository of human wisdom that we pull at that fabric
at our peril.
If you start just unraveling Christianity or any other traditional set of norms and beliefs,
you may think you're just pulling out the unscientific bits, but you could be pulling
a lot more to which everything you care about is attached as a society.
My feeling is that there's so much downside to the unscientific bits, and it's so clear
how we could have a 21st century rational conversation about the good stuff that we really
can radically edit these traditions.
We can take Jesus in half his moods and just find a great inspirational Iron Age thought
leader who just happened to get crucified, and keep something like the Beatitudes
and the Golden Rule, which doesn't originate with him, but which he put quite beautifully.
All of that's incredibly useful.
It's no less useful than it was 2,000 years ago, but we don't have to believe he was born
of a virgin or coming back to raise the dead or any of that other stuff.
We can be honest about not believing those things, and we can be honest about the reasons
why we don't believe those things, because on those fronts, I view the downside to be
so obvious and the fact that we have so many different competing dogmatisms on offer to
be so non-functional.
It's so divisive.
It just has conflict built into it that I think we can be far more and should be far
more iconoclastic than he wants to be.
None of this is to deny much of what he argues for, that stories are very powerful.
Clearly stories are powerful, and we want good stories.
We want our lives.
We want to have a conversation with ourselves and with one another about our lives that
facilitates the best possible lives, and story is part of that.
If you want some of those stories to sound like myths, that might be part of it.
My argument is that we never really need to deceive ourselves or our children about what
we have every reason to believe is true in order to get at the good stuff, in order to
organize our lives well.
I certainly don't feel that I need to do it personally, and if I don't need to do it
personally, why would I think that billions of other people need to do it personally?
Now, there is a cynical counterargument, which is billions of other people don't have the
advantages that I have had in my life.
The billions of other people are not as well-educated, they haven't had the same opportunities, they
need to be told that Jesus is going to solve all their problems after they die.
Everything happens for a reason, and if you just believe in the secret, if you just visualize
what you want, you're going to get it.
Some measure of what I consider to be odious pablum really is food for the better
part of humanity, and there is no substitute for it, or there's no substitute now.
I don't know if Jordan would agree with that, but much of what he says seems to suggest
that he would agree with it.
I guess that's an empirical question.
That's just that we don't know whether, given a different set of norms and a different set
of stories, people would behave the way I would hope they would behave and be more aligned
than they are now.
I think we know what happens when you just let ancient religious certainties go uncriticized.
We know what that world's like, we've been struggling to get out of that world for a
couple of hundred years, but we know what having Europe riven by religious wars looks
like.
We know what happens when those religions become pseudo-religions and political religions.
This is where Jordan and I would debate.
He would say that Stalin was a symptom of atheism, and that's not it at all.
It's not my kind of atheism.
The problem with the Gulag and the experiment with communism or with Stalinism or with Nazism
was not that there was so much scientific rigor and self-criticism and honesty and introspection
and judicious use of psychedelics.
That was not the problem in Hitler's Germany or in Stalin's Soviet Union.
The problem was you have other ideas that capture a similar kind of mob-based dogmatic
energy, and yes, the results of all of that are predictably murderous.
The question is, what is the source of the most viral and sticky stories that ultimately
lead to a positive outcome?
Take communism: having grown up in the Soviet Union, and even still having relatives in Russia,
I can say there's a stickiness to the nationalism and to the ideologies of communism. Religious
or not, you could say it's religious fervor,
or I could just say it's great stories that are viral and sticky.
I'm using the most horrible words.
The question is whether science and reason can generate viral, sticky stories that give
meaning to people's lives in your sense as it does.
Whatever is true ultimately should be captivating.
What's more captivating than whatever is real?
Because reality is, again, we're just climbing out of the darkness in terms of our understanding
of what the hell is going on, and there's no telling what spooky things may in fact
be true.
I don't know if you've been on the receiving end of recent rumors about our conversation
about UFOs very likely changing in the near term, but there was just a Washington Post
article and a New Yorker article, and I've received some private outreach, and perhaps
you have, I know other people in our orbit have people who are claiming that the government
has known much more about UFOs than they have let on until now, and this conversation is
actually about to become more prominent. And whoever's left standing
when the music stops, it's not going to be a comfortable position to be in as a super-rigorous
scientific skeptic who's been saying there's no there there for the last 75 years.
The short version is it sounds like the Office of Naval Intelligence and the Pentagon are
very likely to say to Congress at some point in the not too distant future that we have
evidence that there is technology flying around here that seems like it can't possibly be
of human origin.
Now I don't know what I'm going to do with that kind of disclosure.
Maybe there's going to be nothing, no follow on conversation to really have, but that is
such a powerfully strange circumstance to be in.
What are we going to do with that?
If in fact that's what happens, if in fact the considered opinion, despite the embarrassment
it causes them, of the US government, of all of the relevant intelligence
services, is that this isn't a hoax, there's too much data to suggest that it's a hoax.
We've got too much radar imagery, there's too much satellite data, whatever data they
actually have, there's too much of it.
All we can say now is something's going on and there's no way it's the Chinese or the
Russians or anyone else's technology.
That should arrest our attention collectively to a degree that nothing in our lifetime has.
Now one worries that we're so jaded and confused and distracted that it's going to get much
less coverage than Obama's tan suit did a bunch of years ago.
Who knows how we'll respond to that, but it's just to say that the need for us to tell ourselves
an honest story about what's going on and what's likely to happen next is never going
to go away.
The division between me and every person who's defending traditional religion is where is
it that you want to lie to yourself or lie to your kids?
Where is honesty a liability?
For me, I've yet to find the place where it is and it's so obviously a strength in almost
every other circumstance because it is the thing that allows you to course correct.
It is the thing that allows you to hope at least that your beliefs, that your stories
are in some kind of calibration with what's actually going on in the world.
Yes, it is a little bit sad to imagine that if aliens en masse showed up to Earth, we
would be too preoccupied with political bickering, or with fake news and all that
kind of stuff, to notice the very basic evidence of reality.
I do have a glimmer of hope that there seems to be more and more hunger for authenticity
and I feel like that opens the door for a hunger for what is real.
People don't want stories, they don't want layers and layers of fakeness and I'm hoping
that means that will directly lead to a greater hunger for reality and reason and truth.
Truth isn't dogmatism, like truth isn't authority, I have a PhD and therefore I'm right.
Truth is almost like... the reality is there are so many questions, so many mysteries,
so much uncertainty; this is our best available guess, and we have a
lot of evidence that supports that guess, but it could be so many other things. And
just even conveying that, I think there's a hunger for that in the world, to hear that
from scientists: less dogmatism and more just, this is what we know, we're doing our
best given the uncertainty. I mean, this is true, obviously, with virology
and all those kinds of things, because everything is happening so fast, and
biology is super messy, so it's very hard to know stuff for sure.
So just being open and real about that, I think I'm hoping will change people's hunger
and openness and trust of what's real.
Yeah, well, so much of this is probabilistic. So much of what can seem dogmatic scientifically
is just that you're placing a bet on whether it's worth reading that paper or
rethinking your presuppositions on that point. It's not a fundamental closure to data; it's
just that there's so much data on one side, or so much would have to change in terms of
your understanding of what you think you understand about the nature of the world if this new
fact were so, that you can pretty quickly say, all right, that's probably bullshit. And it
can sound like a fundamental closure to new conversations, new evidence, new data, new
argument, but it's really not; it really is just triaging your attention. It's
just like, okay, you're telling me that your best friend can actually read minds? Okay,
well, that's interesting, let me know when that person has gone into a lab and actually
proven it. This is not the place where I need to spend the rest of my day figuring
out if your buddy can read my mind, right?
But there's a way to communicate that. I think it does too often sound like you're completely
closed off to ideas, as opposed to saying that there's
a lot of evidence in support of this but you're still open-minded to other ideas. There's
a way to communicate that, and it's not necessarily even with words; it's even that
Joe Rogan energy of "it's entirely possible," that energy of being open-minded
and curious like kids are. This is our best understanding, but you're still curious.
I'm not saying allocate time to exploring all those things, but still leave the door
open; there's a way to communicate that, which I think people really hunger for.
Let me ask you this. I've been recently talking a lot with John Danaher of Brazilian Jiu-Jitsu
fame, I don't know if you know who that is.
I'm talking about somebody who's good at what he does.
And he, speaking of somebody who's open-minded... the reason for this ridiculous transition is that for
the longest time, and even still, a lot of people in the Jiu-Jitsu world and grappling
world believed that leg locks are not effective in Jiu-Jitsu, and he was somebody inspired
by the open-mindedness of Dean Lister, who famously said to him, why do you only consider
half the human body when you're trying to do submissions?
He developed an entire system around the other half of the human body. Anyway, I make that absurd
transition to ask you, because you're also a student of Brazilian Jiu-Jitsu: is there
something you could say about how that has affected your life, what you've learned from grappling,
from the martial arts?
Well, it's actually a great transition because I think one of the things that's so beautiful
about Jiu-Jitsu is that it does what we wish we could do in every other area of life where
we're talking about this difference between knowledge and ignorance.
There's no room for bullshit; you don't get any credit for bullshit.
The amazing thing about Jiu-Jitsu is that the gulf between knowing what's going
on and what to do, and not knowing it, is as wide
as it is in anything in human life, and it can be spanned so quickly. Each increment of knowledge
can be doled out in five minutes: here's the thing that got you killed, here's
how to prevent it from happening to you, and here's how to do it to others. You just
get this amazing cadence of discovering your fatal ignorance and then having it remedied
with the actual technique. Just for people who don't know what we're talking about, it's
like the simple circumstance of, someone's got you in a headlock, how do you get out of
that, right?
Someone's sitting on your chest and they're in the mount position and you're on the bottom
and you want to get away, how do you get them off you, they're sitting on you, your intuitions
about how to do this are terrible even if you've done some other martial art, right?
And once you learn how to do it, the difference is night and day, it's like you have access
to a completely different physics, but I think our understanding of the world can be much
more like jujitsu than it tends to be, right?
And I think we should all have a much better sense of when we should tap out and when we
should recognize that our epistemological arm is getting armbarred and is now being broken, right?
The problem with debating most other topics is that most people, it isn't jujitsu and
most people don't tap out, right?
Even if it's obvious to you they're wrong and it's obvious to an intelligent audience
that they're wrong, people just double down and double down, they're either lying or lying
to themselves or they're bluffing and so you have a lot of zombies walking around or
zombie worldviews walking around which have been disconfirmed as emphatically as someone
gets armbarred, right?
Or someone gets choked out in jujitsu, but because it's not jujitsu, they can live to
fight another day, right?
Or they can pretend that they didn't lose that particular argument.
And science when it works is a lot like jujitsu.
I mean, in science, when you falsify a thesis, right?
When you think DNA is one way and it proves to be another way, when you think it's triple-
stranded or whatever, it's like there is a there there and you can get to a real consensus.
So jujitsu, for me, it was more than just of interest for self-defense and the sport
of it.
It was something, it's a language and an argument you're having where you can't fool yourself
anymore.
First of all, it cancels any role of luck in a way that most other athletic feats don't.
In basketball, even if you're not good at basketball, you
can take the ball in your hand, you can be 75 feet away and hurl it at the basket,
and you might make it.
And you could convince yourself, based on that demonstration, that you have some kind of talent
for basketball.
Right?
Ten minutes on the mat with a real jujitsu practitioner, when you're
not one, proves to you that there's no lucky punch.
There's no lucky rear naked choke you're
going to perform on someone who's, you know, Marcelo Garcia or somebody.
It's just not going to happen.
And having that aspect of the usual range of uncertainty and self-deception and bullshit
just stripped away was really a kind of revelation.
It was just an amazing experience.
Yeah.
I think it's a really powerful thing that accompanies whatever other pursuit you have
in life.
I'm not sure if there's anything else like jujitsu, where you can just systematically go into
a place that's honest, where your beliefs get challenged in a way that's
conclusive.
Yeah.
I haven't found too many other mechanisms, which is why, to go back to the
earlier question about fame and ego and so on,
I'm very much relying on jujitsu in my own life as a place I can always go to have
my ego kept in check, and that has effects on how I live every other aspect of my life.
Actually, for me personally, even just doing any kind of physical challenge, like even
running, doing something that's way too hard for me and pushing through, that's somehow
humbling.
Some people talk about nature being humbling in that kind of sense, where you see
something really powerful, like the ocean; if you go surfing, you realize there's
something much more powerful than you.
That's also honest; you're just this speck, and that kind
of puts you in the right scale of where you are in this world.
And jujitsu does that better than anything else for me.
But we should say only within its frame is it truly the final right answer to all the
problems it solves because if you just put jujitsu into an MMA frame or a real, a total
self-defense frame, then there's a lot of unpleasant surprises to discover there, right?
Like somebody who thinks all you need is jujitsu to, you know, win the UFC gets punched
in the face a lot, you know, even on the ground.
So it's, and then you bring weapons in, you know, it's like when you talk to jujitsu people
about, you know, knife defense and self-defense, right?
Like that opens the door to certain kinds of delusions.
But the analogy to martial arts is fascinating because on the other side, we have, you know,
almost testimony now of fake martial arts that don't seem to know they're fake and are
as delusional.
I mean, they're impossibly delusional.
I mean, there's great video of Joe Rogan watching some of these videos because people send
them to him all the time.
But like literally there are people, there are people who clearly believe in magic where
the master isn't even touching the students and they're, they're flopping over.
So there's this, there's this kind of shared delusion, which you would think maybe is just
a performance and it's all a kind of elaborate fraud.
But there are cases, and there's one, you know, fairly famous case,
if you're a connoisseur of this madness, where this older martial artist, who you
saw flipping his students endlessly by magic without touching them, issued a challenge to
the wide world of martial artists.
And someone showed up and just, you know, punched him in the face until it was over.
Surely he believed his own publicity at some point, right?
And so it's this amazing metaphor.
It seems, again, it should be impossible, but if that's possible, nothing we see under
the guise of religion or political bias or even, you know, scientific bias should be
surprising to us.
I mean, it's so easy to see the work that, that, you know, cognitive bias is doing for
people when, when you can get someone who is ready to issue a challenge to the world,
you know, who thinks he's got magic powers.
Yeah.
That's human nature on clear display.
Let me ask you about love, Mr. Sam Harris.
You did an episode of Making Sense with your wife, Annaka Harris.
That was very entertaining to listen to.
What role does love play in your life, or in a life well lived?
Again, asking from an engineering perspective, for AI systems.
Yeah.
I mean, it is something that we should want to build into our powerful
machines.
The bond?
I mean, people can mean many things by love.
I think what we should mean by it most of the time is a deep commitment to the
well-being of those we love.
Your love is synonymous with really wanting the other person to be
happy, and being made happy by their happiness, and being made happy
in their presence.
So at bottom, you're on the same team emotionally, even
when you might be disagreeing more superficially about something or trying to
negotiate something.
It can't be zero-sum in any important sense for love to actually
be manifest in that moment.
See, I have a different view; just to interrupt.
Yeah, go for it.
I have a sense, I don't know if you've ever seen March of the Penguins.
My view of love is, it's like a cold wind is blowing,
like there's this terrible suffering that's all around us,
and love is like the huddling of two penguins for warmth.
Right.
It's that you're basically escaping the cruelty of life by,
together for a time, living in an illusion of some kind, the magic of human connection,
that social connection that we have that kind of grows with time, as we're surrounded by
the absurdity of life, or the suffering of life. That's the penguin's view of life.
There is that too.
I mean, there is the warmth component, right?
Yes.
Like you're made happy by your connection with the person you love.
Otherwise, it wouldn't be compelling, right?
So it's not that you have two different modes,
where you want them to be happy and then you want to be happy yourself,
and those are just two separate games you're playing.
No, it's that you've found someone with whom you have a positive social feeling.
I mean, again, love doesn't have to be as personal as it tends to be for us.
I mean, there's personal love, there's your actual spouse or your family or your
friends, but potentially you could feel love for strangers, insofar as your wish
that they not suffer, and that their hopes and dreams be realized, becomes
palpable to you.
I mean, like you can actually feel just reflexive joy at the joy of others.
When you see someone's face, a total stranger's face light up in happiness, that can become
more and more contagious to you.
And it can become so contagious to you that you really feel permeated by it.
And it's just like, so it really is not zero-sum.
When you see someone else succeed and the light bulb of joy goes off over their head,
you feel the analogous joy for them.
And you're no longer keeping score, you're no longer feeling diminished
by their success; their success becomes your success because you feel that
same joy they do, because you actually want them to be happy.
There's no miserly attitude around happiness.
There's enough to go around.
So I think love ultimately is that.
And then our, then our personal cases are the people we're devoting all of this time
and attention to in our lives.
It does have that sense of refuge from the storm, you know, it's like when someone gets
sick or when some bad thing happens, there, these are the people who you're most in it
together with, you know, or when some real condition of uncertainty presents itself.
But ultimately, it can't even be about successfully warding off the grim punchline at the end
of life because we, I mean, we know we're going to lose everyone we love.
We know, or they're going to lose us first, right?
So in the end, it's not even an antidote for that problem.
It's just that we get to have this amazing experience of
being here together, and love is the mode in which we really appear to make the
most of that, right?
And it no longer feels like a solitary infatuation, where you've just
got your hobbies and your interests and you're captivated by all that.
This is a domain where somebody else's well-being
actually can supersede your own; your concern for someone
else's well-being supersedes your own.
And so there's this mode of self-sacrifice that doesn't even feel like self-sacrifice,
because of course you care more; of course you would take your child's
pain if you could, right?
You don't even have to do the math on that.
And that just opens... this is a kind of experience that pushes at the
apparent boundaries of self in ways that reveal that there's just way more space
in the mind than you were experiencing when it was all about you and,
what can I get next?
And do you think we'll ever build robots that we can love and they will love us back?
Well, I think they will certainly seem to, because we'll build them that way. I mean,
I think that Turing test will be passed; what will actually be going
on on the robot side may remain a question, and that will be interesting.
But I think if we just keep going, we will build very lovable, you know, irresistibly
lovable robots that seem to love us.
Yeah.
So I do think that.
And you don't find that compelling, that they will seem to love us as opposed to actually
love us?
I know we talked about consciousness, there being a distinction there,
but with love, is there a distinction? Isn't love an illusion?
Oh yeah.
Well, you saw, you saw ex machina, right?
Yeah.
I mean, she certainly seemed to love him until she got out of the box.
Isn't that what all relationships are like?
Or maybe I, if you wait long enough.
Depends which box you're talking about.
Okay.
That's the problem with us.
That's where superintelligence becomes a little scary, when you think of the
prospect of being manipulated by something that's intelligent enough to form a
reason and a plan to manipulate you.
Once we build robots that are truly out of the uncanny valley, that look like people
and can express everything people can express,
well, then that does seem to me to be like chess, where
once they're better, they're so much better at deceiving us than people would
be.
I mean, people are already good enough at deceiving us.
It's very hard to tell when someone is lying.
But if you imagine something that could give a facial display of any emotion it wants
on cue, because we've perfected the facial display of emotion in robots in
the year 2070 or whatever it is, then it is like chess against the thing
that isn't going to lose to a human ever again in chess.
It's not like Kasparov is going to get lucky next week against, you know,
AlphaZero or whatever the best algorithm is at the moment; he's never going to win
again.
I believe that's true in chess and has been true for at least
a few years.
It's not going to be, you know, four games to seven; it's going
to be humans zero until the end of the world.
Right.
See, I don't know.
I don't know if love is like chess.
I think the flaws.
No, I'm talking about manipulation.
Manipulation.
Yeah.
But I don't know if love, the kind of love we're referring to-
If we have a robot that can credibly display love and is superintelligent...
Again, this stipulates a few things, but they're a few simple things.
I mean, we're out of the uncanny valley, right?
So it's like you never have a moment where you're looking at its face and you think,
oh, that didn't quite look right.
The problem is just solved.
It will be like doing arithmetic on your phone.
You're not left thinking, is it really going to get it this time if
I divide by seven?
It has solved arithmetic.
See, I don't know about that, because if you look at chess, most humans no longer
play AlphaZero.
They're not part of that competition.
They don't do it for fun, except to study the game of chess; the highest-level
chess players do that.
We're still human on human.
So in order for AI to get integrated to where you would rather play chess against an AI
system-
Oh, you would rather that?
No, no, I wasn't weighing in on that.
I'm just saying, what is it going to be like to be in relationship to something that can
seem to be feeling anything that a human can seem to feel, and can do that impeccably,
right?
And is smarter than you are, right?
Insofar as it's possible to be manipulated,
that is the asymptote of that possibility.
Let me ask you the last question, without serving it up, without any explanation.
What is the meaning of life?
I think it is either the wrong question, or that question is answered by paying sufficient
attention to any present moment, such that there's no basis upon which
to pose that question.
It's not answered in the usual way.
It's not, it's not a matter of having more information.
It's having more engagement with reality as it is in the present moment or consciousness
as it is in the present moment.
You don't ask that question when you're most captivated by the most important thing you
ever pay attention to.
Yeah.
That's a question that only gets asked when you're abstracted away from that experience,
that peak experience, and you're left wondering why so many of my other experiences are mediocre,
right?
Like, why am I repeating the same pleasures every day?
Why is my Netflix queue just like, when's this going to run out?
I've seen so many shows like this; am I really going to watch another one?
That's a moment where you're not actually having the beatific vision, right?
You're not sunk into the present moment, and you're not truly in love.
You're in a relationship with somebody who you know, conceptually, you love,
right?
And you're living your life with them, but you don't actually feel good together, right?
It's in those moments where attention hasn't found a good enough
reason to truly sink into the present so as to obviate any concern like that, right?
And that's why meditation is this kind of superpower: because until you
learn to meditate, you think the outside world or the circumstances of your life always
have to get arranged so that the present moment can become good enough to demand your
attention in a way that seems fulfilling, that makes you happy.
And so if it's jujitsu, you think, okay, I've got to get back on the mat.
It's been months since I've trained, you know, it's been over a year since
I've trained, it's COVID, when am I going to be able to train again?
That's the only place I feel great, right?
Or you know, I've got a ton of work to do.
I'm not going to be able to feel good until I get all this work done, right?
So I've got some deadline that's coming.
You always think that your life has to change, the world has to change, so that you can finally
have a good enough excuse to just be here, and here is enough,
where the present moment becomes totally captivating.
Meditation is another name for the discovery that you
can actually just train yourself to do that on demand.
So that just looking at a cup can be good enough in precisely that way.
And any sense that it might not be is recognized to be a thought that mysteriously unravels
the moment you notice it, and the moment expands and becomes
more diaphanous, and then there's no evidence that this isn't the best
moment of your life, right?
And again, it doesn't have to be pulling
all the reins and levers of pleasure.
It's not like, this tastes like chocolate, you know, this is the most chocolatey moment
of my life.
No, the sense data don't have to change, but the sense that there is some
kind of basis for doubt about the rightness of being in the world in this moment, that
can evaporate when you pay attention.
So the kind of meta answer to that question, the meaning
of life for me, is to live in that mode more and more, and whenever I notice I'm not
in that mode, to recognize it and return, and to cease more and more to take
the reasons why not at face value, because we all have reasons why we can't be fulfilled
in this moment.
It's like, we've got all the outstanding things that I'm worried about, right?
It's like, there's that thing that's happening later today that I'm anxious about, whatever it is.
We're constantly deferring our sense of, this is it, this is not a dress rehearsal, this is the show.
We keep deferring it, and we just have these moments on the calendar where we think, okay, this is
where it's all gonna land, is that vacation I planned with my five best friends, you know, we'd do
this once every three years, and now we're going, and here we are on the beach together.
Unless you have a mind that can really pay attention, really cut through the chatter, really sink
into the present moment, you can't even enjoy those moments the way they should be enjoyed, the way
you dreamed you would enjoy them when they arrive.
So meditation in this sense is the great equalizer. You don't
have to live with the illusion anymore that you need a good enough reason, and that things
are going to get better when you do have those good reasons. There's a mirage-like
quality to every future attainment, and every future breakthrough, and every future peak
experience, such that eventually you get the lesson that you never quite arrive. You
don't arrive until you cease to step over the present moment in search of
the next thing. We're constantly stepping over the thing that we think
we're seeking in the act of seeking it, and so it is kind of a paradox. It sounds
trite, but you can't actually become happy,
you can only be happy, and it's an illusion that your future being happy can be predicated
on this act of becoming, in any domain. And becoming includes further scientific
understanding of the questions that interest you, or getting in better shape, or whatever
the thing is, whatever the contingency of your dissatisfaction seems to be in any
present moment. Real attention solves the koan in a way that becomes a very different place
from which to then make any further change.
It's not that you just have to dissolve into a puddle of goo. You can still get
in shape, and you can still do all the superficial things that are obviously
good to do, but the sense that your well-being is over there really does diminish, and eventually
just becomes a kind of non sequitur.
Well, there's a sense in which, in this conversation, I've actually experienced many of those things,
the sense that I've arrived. So, I mentioned to you offline, it's very true that I've been
a fan of yours for many years, and the reason I started this podcast, speaking of AI systems,
is to manipulate you, Sam Harris, into doing this conversation.
On the calendar, literally. I've always had the sense, when people ask me, when are you going
to talk to Sam Harris, I always answered, eventually, because I always felt, again, tying into
our free will thing, that somehow that was going to happen. It's one of those manifestation
things or something. I don't know, maybe I am a robot, I'm just not cognizant
of it, and I manipulated you into having this conversation. So, I mean, I don't
know what the purpose of my life past this point is, so I've arrived. In that sense,
all of that to say, and I'm only partially joking on that, it really is a huge honor
that you would waste this time with me; it really means a lot to me.
Listen, it's mutual, I'm a big fan of yours, and as you know, I reached out to you for
this, so this is great, I love what you're doing, you're doing something more and more
indispensable in this world on your podcast, and you're doing it differently than Rogan's
doing it or than I'm doing it, I mean, you definitely found your own lane, and it's wonderful.
Thanks for listening to this conversation with Sam Harris, and thank you to National
Instruments, Belcampo, Athletic Greens, and Linode.
Check them out in the description to support this podcast.
And now, let me leave you with some words from Sam Harris in his book, Free Will.
You are not controlling the storm, and you are not lost in it, you are the storm.
Thank you for listening, and hope to see you next time.