
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.



The following is a conversation with Jaron Lanier,
a computer scientist, visual artist, philosopher,
writer, futurist, musician,
and the founder of the field of virtual reality.
To support this podcast,
please check out our sponsors in the description.
As a side note, you may know
that Jaron is a staunch critic of social media platforms.
He and I agree on many aspects of this,
except perhaps I am more optimistic
about it being possible to build better platforms.
And better artificial intelligence systems
that put long-term interests
and happiness of human beings first.
Let me also say a general comment
about these conversations.
I try to make sure I prepare well,
remove my ego from the picture,
and focus on making the other person shine
as we try to explore the most beautiful
and insightful ideas in their mind.
This can be challenging
when the ideas that are close to my heart
are being criticized.
In those cases, I do offer a little pushback,
but respectfully, and then move on,
trying to have the other person come out
looking wiser in the exchange.
I think there's no such thing as winning
in conversations nor in life.
My goal is to learn and to have fun.
I ask that you don't see my approach
to these conversations as weakness.
It is not.
It is my attempt at showing respect
and love for the other person.
That said, I also often just do a bad job of talking,
but you probably already knew that.
So please give me a pass on that as well.
This is the Lex Fridman Podcast,
and here is my conversation with Jaron Lanier.
You're considered the founding father of virtual reality.
Do you think we will one day spend most
or all of our lives in virtual reality worlds?
I have always found the very most valuable moment
in virtual reality to be the moment
when you take off the headset and your senses are refreshed
and you perceive physicality afresh,
as if you were a newborn baby,
but with a little more experience.
So you can really notice just how incredibly strange
and delicate and peculiar and impossible the real world is.
So the magic is, and perhaps forever,
will be in the physical world?
Well, that's my take on it.
That's just me.
I mean, I think I don't get to tell everybody else
how to think or how to experience virtuality.
And at this point,
there have been multiple generations of younger people
who've come along and liberated me
from having to worry about these things.
But I should say also, even in what some,
well, I called it mixed reality back in the day.
These days, it's called augmented reality,
but with something like a HoloLens,
even then, one of my favorite things is to augment a forest,
not because I think the forest needs augmentation,
but when you look at the augmentation next to a real tree,
the real tree just pops out as being astounding.
It's interactive, it's changing slightly all the time
if you pay attention,
and it's hard to pay attention to that
but when you compare it to virtuality,
all of a sudden you do.
And even in practical applications,
my favorite early application of virtuality,
which we prototyped going back to the 80s
when I was working with Dr. Joe Rosen at Stanford Med
near where we are now,
we made the first surgical simulator.
And to go from the fake anatomy of the simulation,
which is incredibly valuable for many things,
for designing procedures, for training,
for all kinds of things,
then to go to the real person,
boy, it's really something like,
surgeons really get woken up by that transition.
It's very cool.
So I think the transition is actually more valuable
than the simulation.
That's fascinating.
I never really thought about that.
It's almost, it's like traveling elsewhere
in the physical space can help you appreciate
how much you value your home once you return.
Well, that's how I take it.
I mean, once again,
people have different attitudes towards it.
All are welcome.
What do you think is the difference
between the virtual world and the physical meat space world
that you are still drawn,
for you personally, still drawn to the physical world?
Like there's clearly a distinction then.
Is there some fundamental distinction
or is it the peculiarities of the current set of technology?
In terms of the kind of virtual reality that we have now,
it's made of software and software is terrible stuff.
Software is always the slave of its own history,
its own legacy.
It's always infinitely arbitrarily messy and arbitrary.
Working with it brings out
a certain kind of nerdy personality in people,
or at least in me,
which I'm not that fond of.
And there are all kinds of things about software I don't like.
And so that's different from the physical world.
It's not something we understand as you just pointed out.
On the other hand,
I'm a little mystified when people ask me,
well, do you think the universe is a computer?
And I have to say, well,
I mean, what on earth could you possibly mean
if you say it isn't a computer?
If it isn't a computer,
it wouldn't follow principles consistently
and it wouldn't be intelligible
because what else is a computer ultimately?
I mean, we have physics,
we have technology,
so we can program it.
So, I mean, of course it's some kind of computer,
but I think trying to understand it as a Turing machine
is probably a foolish approach.
Right, that's the question,
whether this computer we call the universe
performs the kind of computation that can be modeled
as a universal Turing machine,
or is it something much more fancy?
So fancy, in fact, that it may be
beyond our cognitive capabilities to understand.
Turing machines are kind of,
I call them teases, in a way,
because if you have an infinitely smart programmer
with an infinite amount of time,
an infinite amount of memory,
and an infinite clock speed,
then they're universal,
but that cannot exist,
so they're not universal in practice.
And they actually are, in practice,
a very particular sort of machine
within the constraints,
within the conservation principles of any reality
that's worth being in, probably.
And so, I think universality of a particular model
is probably a deceptive way to think,
even though at some sort of limit,
of course, something like that's gotta be true
at some sort of high enough limit,
but it's just not accessible to us, so what's the point?
Well, to me, the question of whether we're living
inside a computer or a simulation
is interesting in the following way.
There's a technical question here.
How difficult is it to build a machine
not that simulates the universe,
but that makes it sufficiently realistic
that we wouldn't know the difference,
or better yet, sufficiently realistic
that we would kind of know the difference,
but we would prefer to stay in the virtual world anyway?
I wanna give you a few different answers.
I wanna give you the one that I think
has the most practical importance
to human beings right now,
which is that there's a kind of an assertion
sort of built into the way the question's usually asked
that I think is false,
which is a suggestion that people have a fixed level
of ability to perceive reality in a given way.
And actually, people are always learning, evolving,
forming themselves.
We're fluid too.
We're also programmable, self-programmable,
changing, adapting.
And so my favorite way to get at this
is to talk about the history of other media.
So for instance, there was a peer-reviewed paper
that showed that an early wire recorder
playing back an opera singer behind a curtain
was indistinguishable from a real opera singer.
And so now, of course, to us,
it would not only be distinguishable,
but it would be very blatant
because the recording would be horrible.
But to the people at the time,
without the experience of it, it seemed plausible.
There was an early demonstration
of extremely crude video teleconferencing
between New York and DC in the 30s,
I think, that people viewed
as being absolutely realistic and indistinguishable,
which to us would be horrible.
And there are many other examples.
Another one, one of my favorite ones
is in the Civil War era.
There were itinerant photographers
who collected photographs of people
who just looked kind of like a few archetypes.
So you could buy a photo of somebody
who looked kind of like your loved one
to remind you of that person
because actually photographing them was inconceivable
and hiring a painter was too expensive
and you didn't have any way for the painter
to represent them remotely anyway.
How would they even know what they looked like?
So these are all great examples
of how in the early days of different media,
we perceived the media as being really great,
but then we evolved through the experience of the media.
This gets back to what I was saying.
Maybe the greatest gift of photography
is that we can see the flaws in a photograph
and appreciate reality more.
Maybe the greatest gift of audio recording
is that we can distinguish that opera singer now
from that recording of the opera singer
on the horrible wire recorder.
So we shouldn't limit ourselves
by some assumption of stasis that's incorrect.
So that's my first answer,
which is I think the most important one.
Now, of course, somebody might come back and say,
oh, but technology can go so far.
There must be some point at which it would surpass.
That's a different question.
I think that's also an interesting question,
but I think the answer I just gave you
is actually the more important answer
to the more important question.
That's profound, yeah.
But can you, the second question,
which you're now making me realize is way different.
Is it possible to create worlds
in which people would want to stay
instead of the real world?
Well, like en masse,
like large numbers of people.
What I hope is, as I said before,
I hope that the experience of virtual worlds
helps people appreciate this physical world
we have and feel tender towards it
and keep it from getting too fucked up.
That's my hope.
Do you see all technology in that way?
So basically technology helps us appreciate
the more sort of technology-free aspect of life.
Well, media technology.
You know, I mean, you can stretch that.
I mean, you can, let me say,
I could definitely play McLuhan
and turn this into a general theory.
It's totally doable.
The program you just described is totally doable.
In fact, I will psychically predict
that if you did the research,
you could find 20 PhD theses that do that already.
I don't know, but they might exist.
But I don't know how much value there is
in pushing a particular idea that far.
Claiming that reality isn't a computer,
in some sense, seems incoherent to me
because we can program it.
We have technology.
It has, it seems to obey physical laws.
What more do you want from it to be a computer?
I mean, it's a computer of some kind.
We don't know exactly what kind.
We might not know how to think about it.
We're working on it, but...
Sorry to interrupt, but you're absolutely right.
Like that's my fascination with the AI as well.
Is it helps, in the case of AI,
I see it as a set of techniques
that help us understand ourselves, understand us humans.
In the same way, virtual reality,
and you're putting it brilliantly,
it's a way to help us understand reality.
Sure.
Appreciate and open our eyes more richly to reality.
That's certainly how I see it.
And I wish people who become incredibly fascinated,
who go down the rabbit hole of the different fascinations
with whether we're in a simulation or not,
or, you know, there's a whole world of variations on that,
I wish they'd step back and think about their own motivations
and exactly what they mean.
You know what?
And I think the danger with these things is...
So if you say, is the universe some kind of computer broadly?
It has to be, because it's not coherent to say that it isn't.
On the other hand, to say that that means, you know,
anything about what kind of computer,
that's something very different.
And the same thing is true for the brain,
the same thing is true for anything
where you might use computational metaphors.
Like, we have to have a bit of modesty about where we stand.
And the problem I have with these framings of computation
as these ultimate cosmic questions
is that it has a way of getting people to pretend
they know more than they do.
Can you maybe...
This is a therapy session.
So I can analyze me for a second.
I really like the Elder Scrolls series.
It's a role-playing game.
Skyrim, for example.
Why do I enjoy so deeply just walking around that world?
And then there's people you could talk to,
and you can just like...
It's an escape, but, you know, my life is awesome.
I'm truly happy.
But I also am happy with the music that's playing
and the mountains and carrying around a sword.
And just that...
I don't know what that is.
It's very pleasant, though, to go there.
And I miss it sometimes.
I think it's wonderful to love artistic creations.
It's wonderful to love contact with other people.
It's wonderful to love play and ongoing,
evolving meaning and patterns with other people.
I think it's a good thing.
You know, I...
I'm not, like, anti-tech,
and I'm certainly not anti-digital tech.
I'm anti, as everybody knows by now.
I think the, you know, manipulative economy
of social media is making everybody nuts and all that.
So I'm anti that stuff.
But the core of it...
Of course, I worked for many, many years
on trying to make that stuff happen,
because I think it can be beautiful.
Like, I don't...
Like, why not?
You know, and by the way,
there's a thing about humans,
which is we're problematic.
Any kind of social interaction with other people
is going to have its problems.
People are political and tricky.
And, like, I love classical music,
but when you actually go to a classical music thing,
and it turns out, oh, actually,
this is like a backroom power deal kind of place
and a big status ritual as well.
And that's kind of not as fun.
That's part of the package.
And the thing is it's always going to be.
There's always going to be a mix of things.
I don't think the search for purity
is going to get you anywhere.
So I'm not worried about that.
I worry about the really bad cases
where we're making ourselves crazy or cruel enough
that we might not survive.
And I think, you know, the social media criticism
rises to that level.
But I'm glad you enjoyed it.
I think it's great.
And I like that you basically say
that every experience has both beauty and darkness
as in with classical music.
I also play classical piano, so I appreciate it very much.
But it's interesting.
I mean, even the darkest, Man's Search
for Meaning with Viktor Frankl in the concentration camps,
even there, there's opportunity to discover beauty.
And so it's, that's the interesting thing about humans
is the capacity to discover beauty
in the darkest moments.
But there's always the dark parts, too.
Well, I mean, our situation is structurally difficult.
We are structurally different.
No, it is. It's true.
We perceive socially, we depend on each other
for our sense of place and perception of the world.
I mean, we're dependent on each other.
And yet there's also a degree in which, inevitably,
we let each other down.
We are set up to be competitive as well as supportive.
I mean, it's our fundamental situation
is complicated and challenging.
And I wouldn't have it any other way.
OK, let's talk about one of the most challenging things.
One of the things I unfortunately am very afraid of
being human, allegedly, you wrote an essay
on death and consciousness, in which you write, quote:
Certainly, the fear of death has been one of the greatest
driving forces in the history of thought
and in the formation of the character of civilization.
And yet it is under-acknowledged.
The great book on the subject, The Denial of Death
by Ernest Becker deserves a reconsideration.
I'm Russian, so I have to ask you about this.
What's the role of death in life?
See, you would have enjoyed coming to our house
because my wife is Russian.
And we also have a piano of such spectacular qualities.
You wouldn't.
You would have freaked out.
But anyway, we'll let all that go.
So the context in which I remember that essay sort of,
this was from maybe the 90s or something.
And I used to publish in a journal
called The Journal of Consciousness Studies
because I was interested in these endless debates
about consciousness and science,
which certainly continue today.
And I was interested in how the fear of death
and the denial of death played into different philosophical
approaches to consciousness.
Because I think on the one hand,
the sort of sentimental school of dualism,
meaning the feeling that there's something
apart from the physical brain, some kind of soul
or something else, is obviously motivated in a sense
by a hope that whatever that is will survive death and continue.
And that's a very core aspect of a lot of the world religions,
not all of them, not really, but most of them.
The thing I noticed is that the opposite of those,
which might be the sort of hardcore, no,
the brain's a computer and that's it,
in a sense, we're motivated in the same way
with a remarkably similar chain of arguments,
which is, no, the brain's a computer
and I'm going to figure it out in my lifetime
and upload it, upload myself and I'll live forever.
That's interesting.
Yeah, that's like the implied thought, right?
Yeah, and so it's kind of this, in a funny way,
it's the same thing.
It's peculiar to notice that these people
who would appear to be opposites in character
and cultural references and in their ideas
actually are remarkably similar.
And to an incredible degree,
the sort of hardcore computationalist idea
about the brain has turned into medieval Christianity
altogether, like there's the people who are afraid
that if you have the wrong thought,
you'll piss off the super AIs of the future
who will come back and zap you and all that stuff.
It's really turned into medieval Christianity all over again.
So Ernest Becker's idea that death,
the fear of death, is the worm at the core,
which is like, that's the core motivator
of everything we see humans have created.
The question is if that fear of mortality
is somehow core, like a prerequisite to consciousness.
You just moved across this vast cultural chasm
that separates me from most of my colleagues in a way.
And I can't answer what you just said on the level
without this huge deconstruction.
Yes.
Should I do it?
Yes, what's the chasm?
Okay.
Let us travel across this vast.
Okay, I don't believe in AI.
I don't think there's any AI.
There's just algorithms, we make them, we control them.
Now, they're tools, they're not creatures.
Now, this is something that rubs a lot of people
the wrong way.
And don't I know it?
When I was young, my main mentor was Marvin Minsky,
who's the principal author of the computer
as creature rhetoric that we still use.
He was the first person to have the idea at all,
but he certainly populated the AI culture
with most of its tropes, I would say,
because a lot of the stuff people will say,
oh, did you hear this new idea about AI?
And I'm like, yeah, I heard it in 1978.
Sure, yeah, I remember that.
So Marvin was really the person.
And Marvin and I used to argue all the time
about this stuff, because I always rejected it.
And of all of his, I wasn't formally his student,
but I worked for him as a researcher,
but of all of his students and student-like people
of his young adoptees, I think I was the one
who argued with him about this stuff in particular,
and he loved it.
Yeah, I would have loved to hear that conversation.
It was fun.
Did you ever converge to a place?
Oh, no, no, so the very last time I saw him,
he was quite frail, and I was in Boston,
and I was going to the old house in Brookline,
his amazing house, and one of our mutual friends said,
hey, listen, Marvin's so frail.
Don't do the argument with him.
Don't argue about AI, you know?
And so I said, but Marvin loves that.
And so I showed up, and he's like,
he was frail, and he looked up and he said,
are you ready to argue?
He's such an amazing person for that.
So it's hard to summarize this
because it's decades of stuff.
The first thing to say is that nobody can claim
absolute knowledge about whether somebody
or something else is conscious or not.
This is all a matter of faith.
And in fact, I think the whole idea of faith
needs to be updated.
So it's not about God,
but it's just about stuff in the universe.
We have faith in each other being conscious.
And then I used to frame this as a thing
called the circle of empathy in my old papers.
And then it turned into a thing
for the animal rights movement too.
I noticed Peter Singer using it.
I don't know if it was coincident or,
but anyway, there's this idea
that you draw a circle around yourself
and the stuff inside is more like you,
might be conscious,
might be deserving of your empathy,
of your consideration,
and the stuff outside the circle isn't.
And outside the circle might be a rock or I don't know.
And that circle is fundamentally based on faith.
Well, your faith in what is and what isn't?
The thing about this circle is it can't be pure faith.
It's also a pragmatic decision.
And this is where things get complicated.
If you try to make it too big,
you suffer from incompetence.
If you say, I don't wanna kill a bacteria,
I will not brush my teeth.
I don't know, like, what do you do?
Like, you know, like there's a competence question
where you do have to draw the line.
People who make it too small become cruel.
People are so clannish and political
and so worried about themselves ending up on the bottom
of society that they are always ready to gang up
on some designated group.
And so there's always these people
who are trying,
we're always trying to shove somebody out of the circle.
And so-
So aren't you shoving AI outside the circle?
Well, give me a second.
All right.
So there's a pragmatic consideration here.
And so the biggest questions are probably fetuses
and animals lately, but AI is getting there.
Now with AI, I think,
and I've had this discussion so many times,
people say, but aren't you afraid if you exclude AI,
you'd be cruel to some consciousness?
And then I would say, well, if you include AI,
you make yourself, you exclude yourself
from being able to be a good engineer or designer.
And so you're facing incompetence immediately.
So like, I really think we need to subordinate algorithms
and be much more skeptical of them.
Your intuition, you speak about this brilliantly
with social media, how things can go wrong.
Isn't it possible to design systems
that show compassion, not to manipulate you,
but give you control and make your life better
if you so choose to, like grow together with systems.
In the way we grow with dogs and cats, with pets,
with significant others, in that way,
we grow to become better people.
I don't understand why that's fundamentally not possible.
You're saying oftentimes you get into trouble
by thinking you know what's good for people.
Well, look, there's this question
of what frame we're speaking in.
Do you know who Alan Watts was?
So Alan Watts once said, morality is like gravity
that in some absolute cosmic sense,
there can't be morality because at some point
it all becomes relative and who are we anyway?
Like morality is relative to us tiny creatures.
But here on earth, we're with each other.
This is our frame and morality is a very real thing.
Same thing with gravity.
At some point you get into interstellar space
and you might not feel much of it,
but here we are on earth.
And I think in the same sense,
I think this identification with a frame that's quite remote
cannot be separated from a feeling of wanting to feel
sort of separate from and superior to other people
or something like that.
There's an impulse behind it that I really have to reject.
And we're just not competent yet
to talk about these kinds of absolutes.
Okay, so I agree with you that a lot of technologies
sort of lack this basic respect,
understanding and love for humanity.
There's a separation there.
The thing I'd like to push back against,
it's not that you disagree,
but I believe you can create technologies
and you can create a new kind of technologist engineer
that does build systems that respect humanity,
not just respect, but admire humanity
that have empathy for common humans, have compassion.
So I mean, no, no, no, I think,
yeah, I mean, I think musical instruments
are a great example of that.
Musical instruments or technologies
that help people connect in fantastic ways.
And that's a great example.
My invention or design during the pandemic period
was this thing called Together Mode
where people see themselves seated sort of in a classroom
or a theater instead of in squares.
And it allows them to semi-consciously perform to each other
as if they have proper eye contact,
as if they're paying attention to each other nonverbally
and weirdly that turns out to work.
And so it promotes empathy so far as I can tell.
I hope it is of some use to somebody.
The AI idea isn't really new.
I would say it was born
with Adam Smith's Invisible Hand
with this idea that we build this algorithmic thing
and it gets a bit beyond us
and then we think it must be smarter than us.
And the thing about the Invisible Hand
is absolutely everybody has some line they draw
where they say, no, no, no, we're gonna take control
of this thing.
They might have different lines,
they might care about different things,
but everybody ultimately became a Keynesian
because it just didn't work.
It really wasn't that smart.
It was sometimes smart and sometimes it failed.
And so if you really,
people who really, really, really wanna believe
the Invisible Hand is infinitely smart,
screw up their economies terribly.
You have to recognize the economy as a subservient tool.
Everybody does when it's to their advantage.
They might not when it's not to their advantage.
That's kind of an interesting game that happens.
But the thing is, it's just like that with our algorithms.
Like you can have a sort of a Chicago economic philosophy
about your computer and say, no, no, no, no,
my thing's come alive, it's smarter than anything.
I think that there is a deep loneliness within all of us.
This is what we seek.
We seek love from each other.
I think AI can help us connect deeper.
Like this is what you criticize social media for.
I think there's much better ways of doing social media
that doesn't lead to manipulation.
But instead leads to deeper connection between humans,
leads to you becoming a better human being.
And what that requires is some agency on the part of AI
to be almost like a therapist.
I mean, a companion.
It's not telling you what's right.
It's not guiding you as if it's an all-knowing thing.
It's just another companion that you can leave at any time.
You have complete transparency control over.
There's a lot of mechanisms that you can have
that are counter to how current social media operates
that I think is subservient to humans.
Or no, deeply respects human beings
and empathetic to their experience
and all those kinds of things.
I think it's possible to create AI systems like that.
And I think they, I mean, that's a technical discussion
of whether they need to have
something that looks like AI versus algorithms.
Something that has identity.
Something that has a personality.
All those kinds of things.
AI systems, and you've spoken extensively
how AI systems manipulate you within social networks.
And that's the biggest problem.
Isn't necessarily that there's advertisement
that social networks present you with advertisements
that then get you to buy stuff.
That's not the biggest problem.
The biggest problem is they then manipulate you.
You're, they alter like your human nature
to get you to buy stuff or to get you to do whatever
the advertiser wants.
Well, maybe you can correct me.
Yeah, I don't see it quite that way,
but we can work with that as an approximation.
Sure, so my,
I think the actual thing is even sort of more ridiculous
and stupider than that, but that's okay.
Let's, let's.
My question is, let's not use the word AI,
but how do we fix it?
Oh, fixing social media.
That diverts us into this whole other field in my view,
which is economics,
which I always thought was really boring,
but we have no choice but to turn it to economists
if we want to fix this problem,
because it's all about incentives.
But I've been around this thing since it started.
And I've been in the meetings
where the social media companies sell themselves
to the people who put the most money into them,
which are usually the big advertising holding companies
and whatnot.
And there's this, there's this idea
that I think is kind of a fiction.
And maybe it's even been recognized as that by everybody
that the algorithm will get really good
at getting people to buy something.
Cause I think people have looked at their returns
and looked at what happens
and everybody recognizes it's not exactly right.
It's more like a cognitive access blackmail payment
at this point.
Like just to be connected, you're paying the money.
It's not so much that the persuasion algorithms.
So Stanford renamed its program,
but it used to be called Engaged Persuade.
The Engaged part works.
The Persuade part is iffy.
But the thing is that once people are engaged,
in order for you to exist as a business,
in order for you to be known at all,
you have to put money into the-
Oh, that's dark.
So it doesn't work, but they have to-
But they're still, it's a giant,
it's a giant cognitive access blackmail scheme at this point.
So because the science behind the Persuade part,
it's not entirely, it's not entirely a failure,
but it's not what the, there's,
we play make believe that it works more than it does.
The damage doesn't come.
Honestly, as I've said in my books,
I'm not anti-advertising.
I actually think advertising can be demeaning
and annoying and banal and ridiculous
and take up a lot of our time with stupid stuff.
Like there's a lot of ways to criticize it.
Advertising, that's accurate.
And it can also lie and all kinds of things.
However, if I look at the biggest picture,
I think advertising, at least as it was understood
before social media, helped bring people into modernity
in a way that actually did benefit people overall.
And you might say, am I contradicting myself
because I was saying you shouldn't manipulate people?
Yeah, I am, probably here.
I mean, I'm not, I'm not pretending
to have this perfect, airtight worldview
without some contradictions.
I think there's a bit of a contradiction there.
So, you know.
Well, looking at the long arc of history,
advertisement has in some parts benefited society
because it funded some efforts that perhaps-
Yeah, I mean, I think like there's a thing
where sometimes I think it's actually been of some use.
Now, let's, where the damage comes
is a different thing though.
Social media, algorithms and social media
have to work on feedback loops
where they present you with stimulus
and they have to see if you respond to the stimulus.
Now, the problem is that the measurement mechanism
for telling if you respond
in the engagement feedback loop is very, very crude.
It's things like whether you click more or occasionally
if you're staring at the screen more
if there's a forward facing camera that's activated
but typically there isn't.
So you have this incredibly crude back channel of information.
And so it's crude enough that it only catches
sort of the more dramatic responses from you.
And those are the fight or flight responses.
Those are the things where you get scared
or pissed off or aggressive or horny.
You know, these are these ancient,
the sort of what are sometimes called
the lizard brain circuits or whatever.
You know, these fast response,
old, old, old evolutionary business circuits that we have
that are helpful in survival once in a while
but are not us at our best.
They're not who we wanna be.
They're not how we relate to each other.
They're this old business.
But it's, so then just when you're engaged
using those intrinsically totally aside
from whatever the topic is,
you start to get incrementally just a little bit
more paranoid, xenophobic, aggressive.
You know, you get a little stupid
and like you become a jerk.
And it happens slowly.
It's not like everybody is like instantly transformed
but it does kind of happen progressively
where people who get hooked kind of get drawn more and more
into this pattern of being at their worst.
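A minimal Python sketch of the crude engagement feedback loop described above; the catalog, signal names, and weights are invented for illustration and are not any platform's actual code.

    import random

    # Hypothetical catalog: each item has an "arousal" level, i.e. how strongly
    # it triggers the fast fight-or-flight responses described above.
    CATALOG = [
        {"id": "gardening tips", "arousal": 0.1},
        {"id": "science explainer", "arousal": 0.3},
        {"id": "outrage headline", "arousal": 0.9},
    ]

    def crude_engagement_signal(item):
        # The only feedback the loop can see: clicks and dwell time,
        # which track arousal, not whether the person is better off.
        clicked = random.random() < 0.2 + 0.6 * item["arousal"]
        dwell = max(0.0, random.gauss(20 + 60 * item["arousal"], 10))
        return (1.0 if clicked else 0.0) + dwell / 100.0

    def run_feedback_loop(rounds=1000):
        # Show whichever item currently scores best (plus a little noise),
        # then reinforce it with the crude signal it produced.
        scores = {item["id"]: 0.0 for item in CATALOG}
        for _ in range(rounds):
            shown = max(CATALOG, key=lambda i: scores[i["id"]] + random.random())
            scores[shown["id"]] += crude_engagement_signal(shown)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(run_feedback_loop())  # high-arousal content drifts to the top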
Would you say that people are able to
when they get hooked in this way,
look back at themselves from 30 days ago
and say, I am less happy with who I am now
or I'm not happy with who I am now
versus who I was 30 days ago.
Are they able to self reflect
when you take yourself outside of the lizard brain?
Sometimes.
I wrote a book suggesting people take a break
from their social media to see what happens
and maybe even the title of the book
was just the arguments to delete your account.
Yeah, 10 arguments.
Although I always said, I don't know that you should.
I can give you the arguments, it's up to you.
I'm always very clear about that.
But I don't have a social media account obviously
and it's not that easy for people to reach me.
They have to search out an old fashioned email address
on a super crappy antiquated website.
Like it's actually a bit, I don't make it easy.
And even with that, I get this huge flood of mail
from people who say, oh, I quit my social media
and I'm doing so much better.
I can't believe how bad it was.
But the thing is, what's for me a huge flood of mail
would be an imperceptible trickle from the perspective
of Facebook, right?
And so I think it's rare for somebody to look at themselves
and say, oh boy, I just screwed myself over.
It's a really hard thing to ask of somebody.
None of us find that easy, right?
Well, the reason I asked this is,
is it possible to design social media systems
that optimize for some longer term metrics
of you being happy with yourself?
Well, see, I don't think you should try
to engineer personal growth or happiness.
I think what you should do is design a system
that's just respectful of the people
and subordinates itself to the people
and doesn't have perverse incentives.
And then at least there's a chance
of something decent happening.
You'll have to recommend stuff, right?
So you're saying like, be respectful.
What does that actually mean engineering-wise?
Yeah, curation.
People have to be responsible.
Algorithms shouldn't be recommending.
Algorithms don't understand enough to recommend.
Algorithms are crap in this era.
I mean, I'm sorry, they are.
And I'm not saying this
as somebody is a critic from the outside.
I'm in the middle of it.
I know what they can do.
I know the math.
I know what the corpora are.
I know the best ones.
Our office is funding GPT-3 and all these things
that are at the edge of what's possible.
And they do not have yet.
I mean, it still is statistical emergent pseudo semantics.
It doesn't actually have deep representation
emerging of anything.
It's just not like, I mean that,
I'm speaking the truth here and you know it.
Well, let me push back on this.
There's several truths here.
So you're speaking to the way
certain companies operate currently.
I don't think it's outside the realm
of what's technically feasible to do.
There's just not incentive,
like companies are not, why fix this thing?
I am aware that, for example,
the YouTube search and discovery
has been very helpful to me.
And there's a huge number of,
there's so many videos
that it's nice to have a little bit of help.
Have you done-
But I'm still in control.
Let me ask you something.
Have you done the experiment
of letting YouTube recommend videos to you
either starting from an absolutely anonymous random place
where it doesn't know who you are
or from knowing who you or somebody else is
and then going 15 or 20 hops?
Have you ever done that
and just let it go top video recommend
and then just go 20 hops?
No, I have not.
I've done that many times now.
I have, because of how large YouTube is
and how widely it's used,
it's very hard to get to enough scale
to get a statistically solid result on this.
I've done it with high school kids,
with dozens of kids doing it at a time.
Every time I've done an experiment,
the majority of times,
after about 17 or 18 hops,
you end up in really weird,
paranoid, bizarre territory.
Because ultimately,
that is the stuff the algorithm rewards
the most because of the feedback creepiness
I was just talking about.
So I'm not saying that the video
never recommends something cool.
I'm saying that its fundamental core
is one that promotes a paranoid style,
that promotes increasing irritability,
that promotes xenophobia,
promotes fear, anger,
promotes selfishness,
promotes separation between people.
And the thing is,
it's very hard to do this work solidly.
Many have repeated this experiment
and yet, it still is kind of anecdotal.
I'd like to do a large citizen science thing
sometime and do it,
but then I think the problem with that
is YouTube would detect it and then change it.
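The hop experiment described here is essentially a fixed-length walk over a recommender graph. A small Python sketch follows; top_recommendation and the toy graph are hypothetical stand-ins, since no real API is assumed.

    def top_recommendation(video, recommender):
        # Hypothetical: return the recommender's #1 "up next" item.
        # In a real study this would be collected by hand or by volunteers.
        return recommender[video][0]

    def hop_experiment(start_video, recommender, hops=20):
        # Follow the top recommendation for a fixed number of hops
        # and record the trajectory.
        trajectory = [start_video]
        current = start_video
        for _ in range(hops):
            current = top_recommendation(current, recommender)
            trajectory.append(current)
        return trajectory

    # Toy recommender graph mapping each video to its ranked recommendations.
    toy_graph = {
        "cooking basics": ["knife skills", "cooking basics"],
        "knife skills": ["extreme survival", "cooking basics"],
        "extreme survival": ["conspiracy deep dive", "knife skills"],
        "conspiracy deep dive": ["conspiracy deep dive", "extreme survival"],
    }
    print(hop_experiment("cooking basics", toy_graph, hops=5))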
Yes, I love that kind of stuff in Twitter.
So Jack Dorsey has spoken
about doing healthy conversations on Twitter
or optimizing for healthy conversations.
What that requires within Twitter
are most likely citizen experiments
of what does healthy conversations
actually look like
and how do you incentivize those healthy conversations?
You're describing what often happens
and what is currently happening.
What I'd like to argue is it's possible
to strive for healthy conversations,
not in a dogmatic way of saying,
I know what healthy conversations are
and I will tell you.
I think one way to do this
is to try to look around at social,
maybe not things that are officially social media,
but things where people are together online
and see which ones have more healthy conversations,
even if it's hard to be completely objective
in that measurement,
you can kind of at least crudely agree.
You could do subjective annotation
of this like have a large crowdsource.
Yeah, one that I've been really interested in is GitHub.
Because it could change,
I'm not saying it'll always be,
but for the most part,
GitHub has had a relatively quite low poison quotient
and I think there's a few things about GitHub
that are interesting.
One thing about it is that people have a stake in it.
It's not just empty status games.
There's actual code or there's actual stuff being done.
And I think as soon as you have a real world
stake in something,
you have a motivation to not screw up that thing.
And I think that that's often missing,
that there's no incentive for the person
to really preserve something
if they get a little bit of attention
from dumping on somebody's TikTok or something,
that they don't pay any price for it.
But you have to kind of get decent with people
when you have a shared stake, a little secret.
So GitHub does a bit of that.
So GitHub is wonderful, yes.
But I'm tempted to play the Jaron back at you,
which is that, so GitHub currently is amazing.
But the thing is, if you have a stake,
then if it's a social media platform,
they can use the fact that you have a stake
to manipulate you because you want to preserve the stake.
Right, well, this gets us into the economics.
So there's this thing called data dignity
that I've been studying for a long time.
I wrote a book about an earlier version of it
called, Who Owns the Future?
And the basic idea of it is that,
once again, this is a 30 year conversation.
It's a fascinating topic.
Let me do the fastest version of this I can do.
The fastest way I know how to do this
is to compare two futures, all right?
So future one is then the normative one,
the one we're building right now,
and future two is gonna be data dignity, okay?
And I'm gonna use a particular population.
I live on the hill in Berkeley.
And one of the features about the hill
is that as the climate changes, we might burn down
and we'll lose our houses or die or something.
Like it's dangerous, you know, and it didn't used to be.
And so who keeps us alive?
Well, the city does.
The city does some things.
The electric company kind of sort of,
maybe hopefully better, individual people who own property,
take care of their property, that's all nice.
But there's this other middle layer,
which is fascinating to me,
which is that the groundskeepers
who work up and down that hill,
many of whom are not legally here,
many of whom don't speak English,
cooperate with each other to make sure trees don't touch
to transfer fire easily from lot to lot.
They have this whole little web that's keeping us safe.
I didn't know about this at first.
I just started talking to them
because they were out there during the pandemic.
And so I tried to just see who are these people?
Who are these people who are keeping us alive?
Now, I want to talk about the two different fates
for those people under future one and future two.
Future one, some weird like kindergarten paint job van
with all these, like, cameras and weird things drives up,
observes what the gardeners and groundskeepers are doing.
A few years later, some amazing robots
that can shimmy up trees and all this show up
all those people are out of work.
And there are these robots doing the thing
and the robots are good and they can scale to more land
and they're actually good.
But then there are all these people out of work
and these people have lost dignity.
They don't know what they're going to do.
And then somebody will say,
well, they go on basic income, whatever they become
wards of the state.
My problem with that solution is every time in history
that you've had some centralized thing
that's doling out the benefits, things get seized
by people because it's too centralized.
That's happened to every communist experiment I can find.
So I think that turns into a poor future
that will be unstable.
I don't think people will feel good in it.
I think it'll be a political disaster
with a sequence of people seizing this central source
of the basic income.
And you'll say, oh, no, an algorithm can do it.
Then people will seize the algorithm.
They'll seize control.
Unless the algorithm is decentralized
and it's impossible to seize the control.
Yeah, but 60-something people
own a quarter of all the Bitcoin,
like the things that we think are decentralized
are not decentralized.
So let's go to future two.
Future two, the gardeners see that van
with all the cameras and the kindergarten paint job.
And they say, the groundskeepers, they say,
hey, the robots are coming.
We're going to form a data union.
And amazingly, California has a little baby data union law
emerging in the books.
Yes.
And so they say, we're going to form a data union.
And not only are we going to sell our data to this place,
but we're going to make it better than it would have been
if they were just grabbing it without our cooperation.
And we're going to improve it.
We're going to make the robots more effective.
We're going to make them better.
And we're going to be proud of it.
We're going to become a new class of experts that are respected.
And then here's the interesting.
There's two things that are different about that world
from future one.
One thing, of course, the people have more pride.
They have more sense of ownership of agency.
But what the robots do changes, instead of just
like this functional, like we'll figure out
how to keep the neighborhood from burning down,
you have this whole creative community that
wasn't there before thinking, well, how can we make
these robots better so we can keep on earning money?
There'll be waves of creative groundskeeping with spiral
pumpkin patches, and waves of cultural things.
There'll be new ideas like, wow, I
wonder if we could do something about climate change
mitigation with how we do this.
What about fresh water?
Can we make the food healthier?
What about all of a sudden there'll
be this whole creative community on the case?
And isn't it nicer to have a high-tech future
with more creative classes than one
with more dependent classes?
Isn't that a better future?
But future one and future two have the same robots
and the same algorithms.
There's no technological difference.
There's only a human difference.
And that second future two, that's data dignity.
The economy that you're, I mean, the game theory
here is on the humans.
And then the technology is just the tools
that enable more possibilities.
I mean, I think you can believe in AI and be in future two.
I just think it's a little harder.
You have to do more contortions.
It's possible.
So in the case of social media, what
does data dignity look like?
Is it people getting paid for their data?
Yeah, I think what should happen is in the future
there should be massive data unions for people putting
content into the system.
And those data unions should smooth out the results
a little bit so it's not winner-take-all.
But at the same time, and people have to pay for it too.
They have to pay for Facebook the way
they pay for Netflix with an allowance for the poor.
There has to be a way out too.
But the thing is, people do pay for Netflix.
It's a going concern.
People pay for Xbox and PlayStation.
There's enough people who pay for stuff
they want that this could happen too.
It's just that this precedent started
that moved in the wrong direction.
And then what has to happen, the economy's a measuring device.
If it's an honest measuring device,
the outcomes for people form a normal distribution,
a bell curve.
And then so there should be a few people who do really well,
a lot of people who do OK.
And then we should have an expanding economy
reflecting more and more creativity and expertise
flowing through the network.
And that expanding economy moves the result just a bit
forward so more people are getting money out of it
than are putting money into it.
So it gradually expands the economy and lifts all boats.
And the society has to support the lower wing of the bell
curve too, but not universal basic income.
It has to be for the, because if it's an honest economy,
there will be that lower wing.
And we have to support those people.
There has to be a safety net.
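One way to read the claim about outcomes is as a bell-shaped distribution of earnings whose center rises as the economy expands; the Python sketch below is purely illustrative, not a model of any real platform or economy.

    import random

    def simulate_outcomes(n_people=100_000, mean=1.0, spread=0.4, growth=0.0):
        # Draw per-person earnings from a bell curve whose center rises
        # with growth; clip at zero since earnings can't be negative.
        earnings = sorted(max(0.0, random.gauss(mean * (1 + growth), spread))
                          for _ in range(n_people))
        return {
            "bottom 10%": earnings[n_people // 10],
            "median": earnings[n_people // 2],
            "top 1%": earnings[int(n_people * 0.99)],
        }

    # With growth the whole curve shifts up, but a lower wing remains,
    # which is the part a safety net still has to support.
    print(simulate_outcomes(growth=0.0))
    print(simulate_outcomes(growth=0.25))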
But see what I believe, I'm not going to talk about AI,
but I will say that I think there'll
be more and more algorithms that are useful.
And so I don't think everybody is
going to be supplying data to groundskeeping robots,
nor do I think everybody's going to make their living
with TikTok videos.
I think in both cases, there'll be a rather small contingent
that do well enough at either of those things.
But I think there might be many, many, many, many of those niches
that start to evolve as there are more and more algorithms, more
and more robots.
And it's that large number that will
create the economic potential for a very large part of society
to become members of new creative classes.
Do you think it's possible to create a social network that
competes with Twitter and Facebook that's
large and centralized in this way?
Not centralized, sort of large, large.
All right, so I've got to tell you
how to get from where we are to anything kind of in the zone
of what I'm talking about is challenging.
I know some of the people who run, like I know Jack Dorsey,
and I view Jack as somebody who's actually, I think he's
really striving and searching and trying
to find a way to make it better.
But it's kind of like, it's very hard to do it while in flight.
And he's under enormous business pressure, too.
So Jack Dorsey, to me, is a fascinating study,
because I think his mind is in a lot of good places.
He's a good human being, but there's
a big titanic ship that's already moving in one direction.
It's hard to know what to do with that.
I think that's the story of Twitter.
One of the things that I observe is
that if you just want to look at the human side,
meaning how are people being changed?
How do they feel?
What does the culture like?
Almost all of the social media platforms that get big
have an initial honeymoon period where they're actually
kind of sweet and cute.
If you look at the early years of Twitter,
it was really sweet and cute.
But also look at Snap, TikTok.
And then what happens is, as they scale,
and the algorithms become more influential
instead of just the early people,
when it gets big enough that it's the algorithm running it,
then you start to see the rise of the paranoid style,
and then they start to get dark.
And we've seen that shift in TikTok rather recently.
But I feel like that scaling reveals the flaws
within the incentives.
I feel like I'm torturing you.
No, it's not torturing.
No, because I have hope for the world with humans.
And I have hope for a lot of things that humans
create, including technology.
And I feel it is possible to create social media platforms
that incentivize different things than the current.
I think the current incentivization
is around the dumbest possible thing that
was invented 20 years ago, however long.
And it just works, and so nobody's changing it.
I just think that there could be a lot of innovation for more.
See, you kind of push back this idea
that we can't know what long-term growth or happiness is.
If you give control to people to define
what their long-term happiness and goals are,
then that optimization can happen for each
of those individual people.
Well, I mean, imagine a future where probably a lot of people
would love to make their living doing TikTok dance videos,
but people recognize generally that's
kind of hard to get into.
Nonetheless, dance crews have an experience
that's very similar to programmers working together
on GitHub.
So the future is like a cross between TikTok and GitHub.
And they get together, and they have rights.
They're negotiating for returns.
They join different artist societies
in order to soften the blow of the randomness of who
gets the network effect benefit, because nobody can know that.
And I think an individual person might join 1,000 different data
unions in the course of their lives, or maybe even 10,000.
I don't know.
But the point is that we'll have these very hedged distributed
portfolios of different data unions we're part of.
And some of them might just trickle in a little money
for nonsense stuff where we're contributing to health studies
or something.
But I think people will find their way.
They'll find their way to the right GitHub-like community
in which they find their value in the context of supplying inputs
and data and taste and correctives and all of this
into the algorithms and the robots of the future.
And that is a way to resist the lizard brain-based funding
mechanisms.
It's an alternate economic system
that rewards productivity, creativity, value
as perceived by others.
It's a genuine market.
It's not doled out from the center.
There's not some communist person deciding who's valuable.
It's the actual market.
And the money is made by supporting
that instead of just grabbing people's attention
in the cheapest possible way, which is definitely
how you get the lizard brain.
Yeah.
OK, so we're finally at the agreement.
But I just think that.
So yeah, I'll tell you how I think to fix social media.
There's a few things.
There's a few things.
So one, I think people should have complete control
over their data and transparency of what that data is
and how it's being used if they do hand over the control.
Another thing they should be able to delete, walk away
with their data at any moment.
Easy, with a single click of a button, maybe two buttons.
I don't know.
Just easily walk away with their data.
The other is control of the algorithm, individualized
control of the algorithm for them.
So each one has their own algorithm.
Each person has their own algorithm.
They get to be the decider of what they see in this world.
And to me, that's, I guess, fundamentally decentralized
in terms of the key decisions being made.
But if that's made transparent, I
feel like people will choose that system over Twitter
of today, over Facebook of today,
when they have the ability to walk away, to control their data,
and to control the kinds of things they see.
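A sketch of what that kind of user-held control could look like as a plain data structure; the field names here are invented for illustration, not drawn from any existing platform.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FeedPreferences:
        # Per-user ranking policy: the person, not the platform, owns this.
        boost_topics: List[str] = field(default_factory=list)
        mute_topics: List[str] = field(default_factory=list)
        chronological_only: bool = False  # "raw waterfall of the internet" mode

    @dataclass
    class DataGrant:
        # Explicit, revocable consent for one specific use of the user's data.
        purpose: str
        granted: bool

    @dataclass
    class UserAccount:
        user_id: str
        preferences: FeedPreferences = field(default_factory=FeedPreferences)
        grants: Dict[str, DataGrant] = field(default_factory=dict)

        def export_data(self) -> dict:
            # One-click export: walk away with everything at any time.
            return {
                "user_id": self.user_id,
                "preferences": vars(self.preferences),
                "grants": {k: vars(v) for k, v in self.grants.items()},
            }

        def revoke_all(self) -> None:
            # Single action that withdraws every data grant.
            for grant in self.grants.values():
                grant.granted = False

    account = UserAccount("lex")
    account.grants["ad_targeting"] = DataGrant("ad targeting", granted=False)
    account.preferences.boost_topics.append("classical piano")
    print(account.export_data())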
Now, let's walk away from the term AI.
You're right.
In this case, you have full control
of the algorithms that help you if you want to use their help.
But you can also say adieu to those algorithms
and just consume the raw, beautiful waterfall
of the internet.
I think that, to me, that would not only fix social media,
but I think it would make a lot more money.
So I would like to challenge the idea,
I know you're not presenting that,
that the only way to make a ton of money
is to operate like Facebook does.
I think you can make more money by giving people control.
Yeah, I mean, I certainly believe
that we're definitely in the territory of wholehearted
agreement here.
I do want to caution against one thing, which
is making a future that benefits programmers versus people.
Like this idea that people are in control of their data.
So years ago, I co-founded an advisory board for the EU
with a guy named Giovanni Buttarelli, who passed away.
It's one of the reasons I wanted to mention it.
A remarkable guy who'd been, he was originally
a prosecutor who was throwing mafiosi in jail in Sicily.
So he was like this intense guy who was like,
I've dealt with death threats.
Mark Zuckerberg doesn't scare me, whatever.
So we worked on this path of saying,
let's make it all about transparency and consent.
And it was one of the threads that
led to this huge data privacy and protection framework
in Europe called the GDPR.
And so therefore, we've been able to have
empirical feedback on how that goes.
And the problem is that most people actually
get stymied by the complexity of that kind of management.
They have trouble and reasonably so.
I don't, I'm like a techie.
I can go in and I can figure out what's going on.
But most people really do.
And so there's a problem that it differentially
benefits those who kind of have a technical mindset
and can go in and sort of have a feeling for how this stuff works.
I kind of still want to come back to incentives.
And so if the incentive for whoever's,
if the commercial incentive is to help the creative people
of the future make more money because you get a cut of it,
that's how you grow an economy.
Not the programmers.
Well, some of them will be programmers.
It's not anti-programmer.
I'm just saying that it's not only programmers.
So I mean, I definitely, so yeah, you
have to make sure the incentives are right.
I mean, I like control is an interface problem
to where you have to create something that's
compelling to everybody, to the creatives, to the public.
There's, I don't know, creative commons,
like the licensing.
There's a bunch of legal speak, just in general,
the whole legal profession.
It's nice when it can be simplified in the way
that you can truly simply understand.
Everybody can simply understand the basics.
In the same way, it should be very simple
to understand how the data is being used
and what data is being used for people.
But then you're arguing that in order for that to happen,
you have to have the incentives aligned.
I mean, a lot of the reason that money works
is actually information hiding and information loss.
Like one of the things about money
is that a particular dollar you have might have passed through your enemy's
hands and you don't know it.
But also, I mean, this is what Adam Smith,
if you want to give the most charitable interpretation
possible to the invisible hand, is what he was saying,
is that there's this whole complicated thing.
And not only do you not need to know about it,
the truth is you'd never be able to follow it if you tried.
And it's like, let the economic incentives
solve for this whole thing.
And that, in a sense, every transaction
is like a neuron in a neural net.
If he'd had that metaphor, he would have used it.
And let the whole thing settle to a solution.
And don't worry about it.
I think this idea of having incentives that
reduce complexity for people can be made to work.
And that's an example of an algorithm
that could be manipulative or not going back
into your question before about can you
do it in a way that's not manipulative.
And I would say, if you just have this vision
of a GitHub plus TikTok combined, is it possible?
I think it is.
I really think it is.
I'm not going to be able to unsee that idea of creatives
on TikTok collaborating in the same way that people
on GitHub collaborate.
Why not?
I like that kind of version.
Why not?
I like it.
I love it.
I just, like, right now, by the way,
I'm the father of teenage daughters,
so it's all about TikTok, right?
So when people use TikTok, there's
a lot of, it's kind of funny, I was going to say,
cattiness, but I was just using the cat as this exemplar
of what we're coming up with.
I contradict myself.
But anyway, there's all this cattiness
where people are like, this person's like, yeah, yeah.
And I just, what about people getting together
and saying, OK, we're going to work on this move.
We're going to get a bit of, can we get a better musician?
And they do that.
But that's the part that's kind of off the books right now.
That should be right there.
That should be the center.
That's the really best part.
Well, that's where the invention of Git, period,
the versioning, is brilliant.
And so some of the things you're talking about,
technology, algorithms, tools can empower.
And that's the thing, for humans to connect, to collaborate,
and so on.
Can we upset more people a little bit?
You ready?
Maybe, we'd have to try.
No, no, can we ask you to elaborate?
Because my intuition was that you
would be a supporter of something like cryptocurrency
and Bitcoin, because it fundamentally
emphasizes decentralization.
What do you, so can you elaborate?
Yeah, OK, look.
Your thoughts on Bitcoin.
It's kind of funny.
I wrote, I've been advocating some kind of digital currency
for a long time.
And when Bitcoin came out and the original paper on blockchain,
my heart kind of sank.
Because I thought, oh my god, we're
applying all of this fancy thought
and all these very careful distributed security measures
to recreate the gold standard.
Like it's just so retro, it's so dysfunctional,
it's so useless from an economic point of view.
So that's one thing, and then the other thing
is that using computational inefficiency at a boundless scale
as your form of security is a crime against the atmosphere.
Obviously, a lot of people know that now.
But we knew that at the start.
Like the thing is, when the first paper came out,
I remember a lot of people saying, oh my god,
as this thing scales,
it's a carbon disaster.
And I'm just mystified.
But that's a different question than when you asked.
Can you have a cryptographic currency
or at least some kind of digital currency that's
of a benefit?
And absolutely.
And there are people who are trying to be thoughtful about this.
If you haven't, you should interview
Vitalik Buterin sometime.
Yeah, I've interviewed him twice.
OK.
So there are people in the community
who are trying to be thoughtful and trying to figure out
how to do this better.
It has nice properties, though, right?
So one of the nice properties is that it's not government centralized,
so it's hard to control.
And then the other one, to fix some of the issues
that you're referring to, I'm sort of playing devil's advocate
here, is there's lightning network.
There are ideas for how you build stuff on top of Bitcoin, similar
to gold, that allow you to have this kind of vibrant economy
that operates not on the blockchain,
but outside the blockchain.
And you use this Bitcoin for checking the security
of those transactions.
So Bitcoin's not new.
It's been around for a while.
I've been watching it closely.
I've not seen one example of it creating economic growth.
There was this obsession with the idea
that government was the problem.
That idea that government's the problem, let's say,
government earned that wrath, honestly.
Because if you look at some of the things
that governments have done in recent decades,
it's not a pretty story.
Like after a very small number of people in the US government
decided to bomb and landmine Southeast Asia,
it's hard to come back and say, oh,
government's a great thing.
But then the problem is that this resistance to government
is basically resistance to politics.
It's a way of saying, if I can get rich,
nobody should bother me.
It's a way of not having obligations to others.
And that ultimately is a very suspect motivation.
But does that mean that the impulse,
that the government should not overreach its power is flawed?
Well, I mean, what I want to ask you to do
is to replace the word government with politics.
Like our politics is people having to deal with each other.
My theory about freedom is that the only authentic form
of freedom is perpetual annoyance.
So annoyance means you're actually dealing with people,
because people are annoying.
Perpetual means that that annoyance is survivable,
so it doesn't destroy us all.
So if you have perpetual annoyance, then you have freedom.
And that's politics.
That's politics.
If you don't have perpetual annoyance,
something's gone very wrong.
And you suppress those people.
It is only temporary.
It's going to come back and be horrible.
You should seek perpetual annoyance.
I'll invite you to a Berkeley City Council meeting
so you can know what that feels like, what perpetual annoyance
feels like.
But anyway, so the test of freedom
is that you're annoyed by other people.
If you're not, you're not free.
If you're not, you're trapped in some temporary illusion
that's going to fall apart.
Now, this quest to avoid government
is really a quest to avoid that political feeling.
But you have to have it.
You have to deal with it.
And it sucks, but that's the human situation.
That's the human condition.
And this idea that we're going to have this abstract thing
that protects us from having to deal with each other
is always an illusion.
The idea, and I apologize.
I overstretched the use of the word government.
The idea is there should be some punishment from the people
when a bureaucracy, when a set of people,
or a particular leader, like in an authoritarian regime, which
more than half the world currently lives under,
if they become, they stop representing the people.
It stops being like a Berkeley meeting
and starts being more like a dictatorial kind of situation.
And so the point is it's nice to give people the populace
in a decentralized way power to resist that kind of government
becoming over authoritarian.
Yeah, but see, this idea that the problem is always
the government being powerful is false.
The problem can also be criminal gangs.
The problem can also be weird cults.
The problem can be abusive clergy.
The problem can be infrastructure that fails.
The problem can be poisoned water.
The problem can be failed electric grids.
The problem can be a crappy education system
that makes the whole society less and less able to create value.
There are all these other problems
that are different from an overbearing government.
You have to keep some sense of perspective
and not be obsessed with only one kind of problem
because then the others will pop up.
But empirically speaking, some problems are bigger than others.
So like some groups of people, like governments or gangs
or companies, lead to problems more than others.
Are you a US citizen?
Yes.
Has the government ever really been a problem for you?
Well, OK.
So first of all, I grew up in the Soviet Union.
And actually, yeah, my wife did too.
So I have seen it. And has the government bothered me?
I would say that that's a really complicated question,
especially because the United States is such,
it's a special place compared to a lot of other countries.
My wife's family were refuseniks.
And so we have like a very, and her dad was sent to the gulag.
For what it's worth, on my father's side,
all but a few were killed in a pogrom, a post-Soviet pogrom,
in Ukraine.
So I would say, because you did a little eloquent trick
of language, where you switched to the United States
to talk about government.
So I believe, unlike my friend Michael Malice, who's
an anarchist, I believe government
can do a lot of good in the world.
That is exactly what you're saying, which is politics.
The thing that Bitcoin folks and cryptocurrency folks argue
is that one of the big ways that government can control
the populace is a centralized bank, like controlling the money.
That was the case in the Soviet Union, too.
Inflation can really make poor people suffer.
And so what they argue is this is one way
to go around that power that government
has of controlling the monetary system.
So that's a way to resist.
That's not actually saying government bad.
That's saying some of the ways that central banks get
into trouble can be resisted.
So let me ask you on balance today in the real world
in terms of actual facts.
Do you think cryptocurrencies are doing more
to prop up corrupt, murderous, horrible regimes
or to resist those regimes?
Where do you think the balance is right now?
I know exactly having talked to a lot of cryptocurrency folks
what they would tell me, right?
It's hard.
I'm asking it as a real question.
There's no way to know the answer.
There's no way to know the answer perfectly.
However, I've got to say, if you look at people
who've been able to decode blockchains,
and they do leak a lot of data, they're not as secure
as is widely thought, there are a lot of unknown Bitcoin
whales from pretty early, and they're huge.
And if you ask, who are these people,
there's evidence that a lot of them are quite not
the people you'd want to support, let's say.
And I just don't, like I think empirically this idea
that there's some intrinsic way that bad governments will
be disempowered and people will be able to resist them more
than new villains or even villainous governments
will be empowered.
There's no basis for that assertion.
It's just kind of circumstantial.
And I think in general, Bitcoin ownership is one thing,
but Bitcoin transactions have tended
to support criminality more than productivity.
Of course, they would argue that was the story of its early days,
that now more and more Bitcoin is being
used for legitimate transactions.
But that's the difference.
I didn't say for legitimate transactions.
I said for economic growth, for creativity.
Like, I think what's happening is
people are using it a little bit for buying,
I don't know, maybe some of these companies
make it available for this and that by a Tesla with it
or something.
Investing in a startup, hard, it might have happened a little
bit, but it's not an engine of productivity, creativity,
and economic growth.
Whereas old-fashioned currency still is.
And anyway, I think something, I'm
pro the idea of digital currencies.
I am anti the idea of economics wiping out politics
as a result.
I think they have to exist in some balance
to avoid the worst dysfunctions of each.
In some ways, there are parallels to our discussion
of algorithms: with cryptocurrency, you're pro the idea,
but it can be used to manipulate,
it can be used poorly by the aforementioned humans.
Well, I think that you can make better designs
and worse designs.
And the thing about cryptocurrency that's so interesting
is how many of us are responsible for the poor designs
because we're all so hooked on that Horatio Alger story
on like, I'm going to be the one who gets the viral benefit.
Way back when all this stuff was starting,
I remember it would have been in the 80s,
somebody had the idea of using viral as a metaphor for network
effect.
And the whole point was to talk about how bad network
effect was, that it always created distortions that
ruined the usefulness of economic incentives that
created dangerous distortions.
But then somehow, even after the pandemic,
we think of viral as this good thing
because we imagine ourselves as the virus, right?
We want to be on the beneficiary side of it.
But of course, you're not likely to be.
There is a sense because money is involved,
people are not reasoning clearly always
because they want to be part of that first viral wave that
makes them rich.
And that blinds people from their basic morality.
I had an interesting conversation.
I sort of feel like I should respect some people's privacy.
But some of the initial people who started Bitcoin,
I remember having an argument about like,
it's intrinsically a Ponzi scheme,
like the early people have more than the later people.
And the further down the chain you get,
the more you're subject to gambling-like dynamics,
where it's more and more random and more and more
subject to weird network effects and whatnot,
unless you're a very small player, perhaps,
and you're just buying something.
But even then, you'll be subject to fluctuations
because the whole thing is just kind of like,
as it fluctuates, it's going to wave around
the little people more.
And I remember the conversation turned to gambling
because gambling is a pretty large economic sector.
And it's always struck me as being non-productive.
Like somebody goes to Las Vegas and they lose money.
And so one argument is, well, they got entertainment.
They paid for entertainment as they lost money,
so that's fine.
And Las Vegas dresses up the losing of money
in an entertaining way, so why not?
It's like going to a show.
So that's one argument.
The argument that was made to me was different from that.
It's that, no, what they're doing
is they're getting a chance to experience hope.
And a lot of people don't get that chance.
And so that's really worth it, even if they're going to lose.
They have that moment of hope.
And they need to be able to experience that.
And it's a very interesting argument.
That's so heartbreaking, because I've seen it.
But I've seen that.
I have that a little bit of a sense.
I've talked to some young people who
invest in cryptocurrency.
And what I see is this hope.
This is the first thing that gave them hope.
And that's so heartbreaking to me, that this is where
they've gotten hope, that so much is invested in it.
It's like hope from somehow becoming rich,
as opposed to something, to me, I apologize.
But money is, in the long term, not
going to be a source of that deep meaning.
It's good to have enough money, but it should not
be the source of hope.
And it's heartbreaking to me for how many people
it is the source of hope.
Yeah.
You've just described the psychology of virality
or the psychology of trying to base a civilization
on semi-random occurrences of network effect peaks.
And it doesn't really work.
I mean, I think we need to get away from that.
We need to soften those peaks, except Microsoft, which
deserves every penny.
But in every other case.
Well, you mentioned GitHub.
I think what Microsoft did with GitHub was brilliant.
I was very happy.
OK, if I can give not a critical, but on Microsoft,
because they recently purchased Bethesda.
So Elder Scrolls is in their hands.
I'm watching you, Microsoft, to not screw up my favorite game.
Yeah, look, I'm not speaking for Microsoft.
I have an explicit arrangement with them
where I don't speak for them.
Obviously, that should be very clear.
I do not speak for them.
I am not saying I like them.
I think Satya's amazing.
The term data dignity was coined by Satya.
So we have, it's kind of extraordinary.
But Microsoft is a giant thing.
It's going to screw up this or that.
I don't know.
It's kind of interesting.
I've had a few occasions in my life
to see how things work from the inside of some big thing.
And it's always just people kind of, I don't know.
There's always like coordination problems.
And there's always human problems.
Oh, god.
There's some good people.
There's some bad people.
I hope Microsoft doesn't screw up your game.
And I hope they bring Clippy back.
You should never kill Clippy.
Bring Clippy back.
Oh, Clippy.
But Clippy promotes the myth of AI.
Well, that's why I think you're wrong.
How about if we, all right, could we bring back Bob
instead of Clippy?
Which one was Bob?
Oh, Bob was another thing.
Bob was this other screen character
who was supposed to be the voice of AI.
Cortana, Cortana, would Cortana do it for you?
No, Cortana is too corporate.
I like it.
Exactly.
There's a woman in Seattle who's
like the model for Cortana.
Did Cortana's voice?
The voice?
There was like.
No, the voice is great.
We had her as a, she used to walk around
and if you were wearing a HoloLens for a bit,
I don't think that's happening anymore.
I think, I don't think you should
turn software into a creature.
Well, you and I, you and I, you and I,
well, get a dog, get a dog.
A dog, yeah.
Yeah, you're a.
A hedgehog.
A hedgehog.
Yeah.
You co-authored a paper,
with Lee Smolin, whom we mentioned, titled
The Autodidactic Universe,
which describes our universe as one that
learns its own physical laws.
That's a trippy and beautiful and powerful idea.
What are, what would you say are
the key ideas in this paper?
Okay.
Well, I should say that paper reflected work
from last year and the project,
the program has moved quite a lot.
So it's a little,
there's a lot of stuff that's not published
that I'm quite excited about.
So I have to kind of keep my frame
in that, in that last year's things.
I have to try to be a little careful about that.
We can think about it in a few different ways.
The core of the paper, the technical core of it
is a triple correspondence.
One part of it was already established
and then another part is in the process.
The part that was established was, of course,
understanding different theories of physics
as matrix models.
The part that was fresher
is understanding those as a machine learning system
so that we could move fluidly
between these different ways of describing systems.
And the reason to want to do that
is to just have more tools and more options
because, well,
theoretical physics is really hard
and a lot of programs have kind of
run into a state where they feel a little stalled.
I guess I can,
I want to be delicate about this
because I'm not a physicist.
I'm the computer scientist collaborating.
So I don't mean to diss anybody.
So this is almost like gives a framework
for generating new ideas in physics.
As we start to publish more about where it's gone,
I think you'll start to see there's tools
and ways of thinking about theories
that I think open up some new paths
that will be of interest.
There's the technical core of it,
which is this idea of a correspondence
to give you more facility.
But then there's also the storytelling part of it.
And this is something Lee loves stories and I do.
And the idea here is that
a typical way of thinking about physics
is that there's some kind of starting condition
and then there's some principle
by which the starting condition evolves.
And the question is like, why the starting condition?
Like the starting condition has to be kind of,
it has to be fine-tuned,
and all these things about it have to be kind of perfect.
And so we were thinking, well, look,
what if we could push the storytelling
about where the universe comes from much further back
by starting with really simple things that evolve
and then through that evolution,
explain how things got to be,
how they are through very simple principles, right?
And so we've been exploring a variety of ways
to push the start of the storytelling
further and further back,
which, and it's an interesting,
it's really kind of interesting
because, like, for all of that, Lee is sometimes considered
to have a radical quality in the physics world,
but he still is like, no,
the kind of time we're talking about, in which evolution happens,
is the same time we're in now,
and we're talking about something that starts and continues.
And I'm like, well, what if there's some other kind of time
that's time-like and sounds like metaphysics
but there's an ambiguity, like it has to start
from something and it's kind of interesting.
So there's this, a lot of the math can be thought of either
way, which is kind of interesting.
So it pushes so far back that basically all the things
we take for granted in physics start becoming emergent.
It's emergent.
I really want to emphasize this is all super baby steps.
I don't want to over claim.
It's like, I think a lot of the things we're doing,
we're approaching some old problems in a pretty fresh way
informed.
There's been a zillion papers about how you can think of
the universe as a big neural net or how you can think of
different ideas in physics as being quite similar to
or even equivalent to some of the ideas in machine learning.
And that actually works out crazy well.
Like, I mean, that is actually kind of eerie
when you look at it, like there's probably
two or three dozen papers that have this quality
and some of them are just crazy good.
And it's very interesting.
What we're trying to do is take those kinds of observations
and turn them into an actionable framework where you can
then start to do things with landscapes of theories
that you couldn't do before and that sort of thing.
So in that context, or maybe beyond,
how do you explain us humans?
How unlikely are we, this intelligent civilization?
Or is there a lot of others or are we alone in this universe?
Yeah.
You seem to appreciate humans very much.
I've grown fond of us.
We're okay.
We have our nice qualities.
I like that.
I mean, we're kind of weird.
We sprout this hair on our heads and then we're,
I don't know, we're sort of a weird animal.
That's the feature, not a bug, I think, the weirdness.
I hope so.
I think if I'm just going to answer you in terms of truth,
the first thing I'd say is we're not in a privileged enough
position, at least as yet, to really know much about who we
are, how we are, what we're really like in the context of
something larger, what that context is, what we're trying to do,
all that stuff we might learn more about in the future,
our descendants might learn more,
but we don't really know very much,
which you can either view as frustrating or charming
like that first year of TikTok or something.
All roads lead back to TikTok.
I like it.
Well, lately. But in terms of, there's another level at
which I can think about it, where I sometimes think
that if you are just quiet and you do
something that gets you in touch with the way reality
happens, and for me, it's playing music,
sometimes it seems like you can feel a bit of how the
universe is, and it feels like there's a lot more going on
in it, and there is a lot more life and a lot more stuff
happening and a lot more stuff flowing through.
I don't know, I'm not speaking as a scientist now,
this is kind of a more, my artists side talking,
and it's, so I feel like I'm suddenly in multiple
personalities with you, but.
Well, Kerouac, Jack Kerouac said that music is
the only truth.
What do you think? It sounds like you might agree, at least in part.
There's a passage in Kerouac's book, Doctor Sax,
where somebody tries to just explain the whole situation
with reality and people in like a paragraph,
and I couldn't reproduce it for you here,
but it's like, yeah, like there are these bulbous things
that walk around and they make these sounds,
you can sort of understand them, but only kind of,
and then there's like, and it's just like this amazing,
like just really quick, like if some spirit being
or something was gonna show up in our reality
and hadn't knew nothing about it,
it's like a little basic intro of like,
okay, here's what's going on here,
an incredible passage.
Yeah, yeah.
It's like the one or two sentence summary
in The Hitchhiker's Guide to the Galaxy, right,
of what this is.
Mostly harmless.
Mostly harmless.
Yeah.
But do you think there's truth to that,
that music somehow connects to something
that words cannot?
Yeah, music is something that just towers above me.
I don't, I don't, I don't feel like I have an overview of it,
it's just the reverse, I don't fully understand it,
because on one level it's simple, like you can say,
oh, it's a thing people evolved to coordinate our brains
on a pattern level or something like that.
There's all these things you can say about music,
which are, you know, some of that's probably true.
It's also, there's kind of like this,
this is the mystery of meaning,
like there's a way that just instead
of just being pure abstraction,
music can have like this kind of substantiality to it
that is philosophically impossible.
I don't know what to do with it.
Yeah.
The amount of understanding I feel I have
when I hear the right song at the right time
is not comparable to anything I can read on Wikipedia.
Anything I can understand, read through in language.
There's, the music does connect us to something.
There's this thing there, yeah.
There's, there's, there's some kind of a thing in it.
And I've never, ever, I've come across a lot of explanations
from all kinds of interesting people,
like that it's some kind of a flow language
between people or between people and how they perceive
and that kind of thing.
There's, and that sort of explanation is fine,
but it's not, it's not quite it either.
Yeah.
There's something about music that makes me believe
that panpsychism could possibly be true,
which is that everything in the universe is conscious.
It makes me think,
makes me be humble in how much or how little
I understand about the functions of our universe
that everything might be conscious.
Most people interested in theoretical physics
eventually land in panpsychism,
but I'm not one of them.
I, I still think there's this pragmatic imperative
to treat people as special.
So I will proudly be a dualist about people and cats.
People and cats.
Yeah, I'm not quite sure where to draw the line
or why the line's there or anything like that.
But I don't think I should be required to,
because all the same questions are equally mysterious with no line.
So I don't, I'm not, I don't feel disadvantaged by that.
So I shall remain a dualist,
but if you listen to anyone trying to explain
where consciousness is in a dualistic sense,
either believing in souls or some special thing
in the brain or something,
you pretty much say, screw this,
I'm going to be a panpsychist.
Fair enough, well put.
Are there moments in your life that happened
that were defining, in a way that you hope others,
your daughters, might think about?
Well, listen, I gotta say,
the moments that defined me were not the good ones.
The moments that defined me were often horrible.
I, I've had successes, you know,
but if you ask what defined me,
my mother's death,
being under the World Trade Center and the attack,
the things that,
the things that have had an effect on me were the most
were sort of real world terrible things,
which I don't wish on young people at all.
And this is,
this is the thing that's hard about giving advice
to young people that they have to learn their own lessons.
And lessons don't come easily.
And a world which avoids hard lessons
will be a stupid world, you know?
And I don't know what to do with it.
That's a, that's a little bundle of truth
that has a bit of a fatalistic quality to it.
But I don't, I don't,
this is like what I'm saying that, you know,
freedom equals eternal annoyance.
Like you can't, like there's a degree to which honest advice
is not that pleasant to give.
And I don't want young people to have to know
about everything.
I think-
You don't want to wish hardship on them.
Yeah, I think they,
they deserve to have a little grace period
of naivety that's pleasant.
I mean, I do, you know, if it's possible,
if it's,
these things are, this is like, this is tricky stuff.
I mean, if you, if you,
okay, so let me, let me try a little bit
on this advice thing.
I think one thing,
and any serious broad advice will have been given
a thousand times before for a thousand years.
So this, I'm not gonna,
I'm not going to claim originality,
but I think trying to find a way
to really pay attention to what you're feeling
fundamentally, what your sense of the world is,
what your intuition is,
if you feel like an intuitive person,
what you're,
like, like to try to escape the constant sway
of social perception or manipulation, whatever you wish,
not to escape it entirely, that would be horrible,
but to find, to find cover from it once in a while,
to find a sense of being anchored in that,
to believe in experience as a real thing.
Believing in experience as a real thing is very dualistic.
That goes in, that goes with my philosophy of dualism.
I believe there's something magical.
And I, instead of squirting the magic dust on the programs,
I think experience is something real and something apart
and something mystical and something.
Your own personal experience that you just have.
And then you're saying,
silence the rest of the world enough to hear that,
like whatever that magic dust is,
and that experience.
Find what is there.
And I think that's one thing.
Another thing is to recognize that kindness requires genius,
that it's actually really hard,
that facile kindness is not kindness
and that it'll take you a while to have the skills.
To have kind impulses, to want to be kind,
that you can have right away.
To be effectively kind is hard.
To be effectively kind, yeah.
It takes skill, it takes hard lessons.
You'll never be perfect at it.
To the degree you get anywhere with it,
it's the most rewarding thing ever.
Let's see, what else would I say?
I would say when you're young,
you can be very overwhelmed
by social and interpersonal emotions.
You'll have broken hearts and jealousies.
You'll feel socially down the ladder
instead of up the ladder.
It feels horrible when that happens.
All of these things.
And you have to remember
what a fragile crust all that stuff is.
And it's hard because right when it's happening,
it's just so intense.
And if I was actually giving this advice to my daughter,
she'd already be out of the room.
So this is for some hypothetical teenager
that doesn't really exist, that really wants to sit
and listen to my wisdom.
Or for your daughter 10 years from now.
Maybe.
Can I ask you a difficult question?
Yeah, sure.
You talked about losing your mom.
Yeah.
Do you miss her?
Yeah, I mean, I still connect with her through music.
She was a young prodigy piano player in Vienna
and she survived the concentration camp
and then died in a car accident here in the US.
What music makes you think of her?
Is there a song that connects you?
Well, she was in Vienna.
So she had the whole Viennese music thing going
which is this incredible school
of absolute skill and romance bundled together
and wonderful on the piano, especially.
I learned to play some of the Beethoven sonatas for her
and I played them in this exaggerated, drippy way.
I remember when I was a kid and...
Exaggerated meaning too full of emotion?
Yeah, like just like...
Is that the only way to play Beethoven?
I mean, I didn't know there's any other way.
That's a reasonable question.
I mean, the fashion these days
is to be slightly Apollonian even with Beethoven
but one imagines that actual Beethoven playing
might have been different.
I don't know.
I've gotten to play a few instruments he played
and try to see if I could feel anything
about how it might have been for him.
I don't know, really.
I was always against the clinical precision
of classical music.
I thought a great piano player should be in pain,
like, you know, like emotionally,
like truly feel the music and make it messy, sort of.
Sure.
Maybe play classical music the way, I don't know,
blues pianists play blues.
Like...
It seems like they actually got happier
and I'm not sure Beethoven got happier.
I think it's a different,
I think it's a different kind of concept
of the place of music.
I think the blues,
the whole African-American tradition
was initially surviving awful, awful circumstances.
You could say, you know, there were some of that
in the concentration camps and all that too.
And it's not that Beethoven's circumstances were brilliant,
but he kind of also...
I don't know, this is hard.
Like, I mean, it would seem to be his misery
was somewhat self-imposed, maybe, through,
I don't know, it's kind of interesting.
Like, I've known some people who loathed Beethoven.
Like, the composer, late composer, Pauline Oliveros,
this wonderful modernist composer,
I played in her band for a while and she was like,
oh, Beethoven, like, that's the worst music ever.
It's like all ego. It completely,
I mean, it turns emotion into your enemy.
And it's ultimately all about your own self-importance,
which has to be at the expense of others,
but what else could it be?
And blah, blah, blah.
So she had, I shouldn't say, I don't mean it to be disrespectful,
but I'm just saying like her position on Beethoven
was very negative and very unimpressed,
which is really interesting for me.
The manner of the music.
I think, I don't know.
I mean, she's not here to speak for herself,
so it's a little hard for me to answer that question.
But it was interesting,
because I'd always thought of Beethoven,
it's like, whoa, you know, this is like Beethoven,
it's like really the dude, you know,
and she's like, eh, you know, Beethoven, Schmeidovin,
you know, it's like not really happening.
Yeah, still, even though it's cliche,
I like playing, personally, just for myself,
the Moonlight Sonata. I mean, I just,
Moonlight's amazing, you know, I, you know,
you're talking about comparing the blues
and that sensibility from Europe
is so different in so many ways.
One of the musicians I play with is Jon Batiste,
who has the band on Colbert's show,
and he'll sit there playing jazz
and suddenly go into Moonlight, he loves Moonlight.
And what's kind of interesting is he's found a way
to do Beethoven, and he, by the way,
he can really do Beethoven, like he went through
Juilliard. And one time he was at my house,
and he said, hey, do you have the book of Beethoven
sonatas? I say, yeah. He wanted to find one he hadn't played,
and then he sight-read through the whole damn thing
perfectly, and I'm like, oh God, I just can't get out of here.
I can't even deal with this, but anyway.
But anyway, the thing is, he has this way of,
with the same persona and the same philosophy,
moving from the blues into Beethoven
that's really, really fascinating to me.
It's like, I don't want to say he plays it
as if it were jazz, but he kind of does.
It's kind of, really, while he was sight reading, he talks
like Beethoven's talking to him, like he's like,
oh yeah, here he's doing this, I can't do John,
but you know, it's like, it's really interesting,
like it's very different, like for me,
I was introduced to Beethoven as like almost
like this God-like figure, and I presume Pauline was too,
and that was really kind of a presence I had to deal with,
and for him, it's just like,
it's like he's playing James P. Johnson or something.
It's like another musician who did something
and they're talking, and it's very cool to be around.
It's very kind of freeing to see someone
have that relationship.
I would love to hear him play Beethoven.
That sounds amazing.
He's great.
We talked about Ernest Becker,
and how much value he puts on our mortality
and our denial of our mortality.
Do you think about your mortality?
Do you think about your own death?
You know, what's funny is, I used to not be able to,
but as you get older, you just know people who die,
and there's all these things that just becomes familiar
and more of a, more ordinary, which is what it is.
But are you afraid?
Sure, although less so.
And it's not like I didn't have some kind of insight
or revelation to become less afraid.
I think I just, like I say, it's kind of familiarity.
It's just knowing people who've died,
and I really believe in the future.
I have this optimism that people
or this whole thing of life on earth,
this whole thing we're part of,
I don't know where to draw that circle,
but this thing is going somewhere
and has some kind of value.
And you can't both believe in the future
and wanna live forever.
You have to make room for it.
You know, like you have to,
that optimism has to also come with its own like humility.
You have to make yourself small to believe in the future.
And so it actually, in a funny way, comforts me.
Wow, that's powerful.
And optimism requires you to kind of step down after a time.
Yeah, I mean, that said, life seems kind of short,
but you know, whatever.
I've tried to find, I can't find the complaint department.
You know, I really want to bring this up,
but the customer service number never answers.
The email bounces.
One way.
Do you think there's meaning to it, to life?
Well, see, meaning's a funny word.
Like we say all these things as if we know what they mean,
but meaning, we don't know what we mean when we say meaning.
Like we obviously do not.
And it's a funny little mystical thing.
I think it ultimately connects to that sense of experience
that dualists tend to believe in.
Because there's the why, like if you look up at the stars
and you experience that awe-inspiring like joy,
whatever, when you look up to the stars,
I don't know, like for me,
that kind of makes me feel joyful,
maybe a little bit melancholy,
just some weird soup of feelings.
And ultimately, the question is like,
why are we here in this vast universe?
That question, why?
Have you been able in some way,
maybe through music, answer it for yourself?
My impulse is to feel like
it's not quite the right question to ask,
but I feel like going down that path
is just too tedious for the moment.
And I don't want to do it, but...
The wrong question.
Well, just because, you know,
I don't know what meaning is,
and I think I do know that sense of awe.
I grew up in southern New Mexico,
and the stars were so vivid.
I've had some weird misfortunes,
but I've had some weird luck also.
One of our near neighbors
was the head of optics research at White Sands,
and when he was young, he discovered Pluto.
His name was Clyde Tombaugh.
And he taught me how to make telescopes,
like grinding mirrors and stuff.
And my dad had also made telescopes when he was a kid,
but Clyde had, like,
backyard telescopes that would put to shame a lot of...
I mean, he really, he did his telescopes, you know?
And so I remember he'd let me go and play with him
and just, like, looking at a globular cluster,
and you're seeing the actual photons,
and with a good telescope, it's really like this object.
Like, you can really tell this isn't coming through
some intervening information structure.
This is, like, the actual photons,
and it's really a three-dimensional object.
And you have even a feeling for the vastness of it,
and it's...
I don't know, so I definitely...
I was very, very fortunate
to have a connection to this guy that way
when I was a kid.
To have had that experience.
Again, the emphasis on experience.
I...
It's kind of funny.
Like, I feel like sometimes, like, I've taken...
When she was younger, I took my daughter and her friends
to, like, a telescope.
There are a few around here that kids can go and use,
and they would, like, look at Jupiter's moons or something.
I think, like, Galilean moons.
And I don't know if they quite had that,
because it's, like, too...
It's been just too normalized,
and I think maybe...
When I was growing up, screens weren't that common yet,
and maybe it's, like, too confusable with the screen.
I don't know.
You know, somebody brought up in conversation to me
somewhere, I don't remember who,
but they kind of posited this idea that
if humans, early humans, weren't able to see the stars,
like, if Earth's atmosphere was such that it was cloudy,
that we would not develop human civilization.
There's something about being able to look up
and see a vast universe is, like,
that's fundamental to the development of human civilization.
I thought that was a curious kind of thought.
That reminds me of that old Isaac Asimov story,
where, you know, there's this planet where they finally get to see
what's in the sky once in a while,
and it turns out they're in the middle of a globular cluster
and there are these stars.
I forget what happens exactly. God, that's from when I was the same age
as the kids, I don't really remember.
But, yeah, I don't know.
It might be right.
I'm just thinking of all the civilizations that grew up under clouds.
I mean, like, the Vikings needed a special
diffracting piece of mica to navigate
because they could never see the sun.
They had this thing called a sunstone that they found from this one cave.
Do you know about that?
So, they were in this, like, they were trying to navigate
boats, you know, in the North Atlantic
without being able to see the sun because it was cloudy,
and so they used a chunk of mica
to diffract it in order to be able to align
where the sun really was because they couldn't tell by eye
and that's how they navigated.
So, I'm just saying, there are a lot of civilizations that are pretty impressive
that had to deal with a lot of clouds.
The Amazonians invented our agriculture,
and they were probably under clouds a lot.
I don't know. I don't know.
To me, personally, the question of the meaning of life
becomes most vibrant, most apparent
when you look up at the stars
because it makes me feel very small.
We are small.
But then you ask, it still feels like we're special.
And then the natural question is like,
well, if we are special as I think we are,
why the heck are we here in this vast universe?
That ultimately is the question of the meaning of life.
I mean, look, there's a confusion sometimes
in trying to set up a question
or a thought experiment or something
that's defined in terms of a context
to explain something where there is no larger context,
and that's a category error.
If we want to do it in physics,
or in computer science,
it's hard to talk about the universe as a Turing machine
because a Turing machine has an external clock
and an observer and input and output.
There's a larger context implied
in order for it to be defined at all.
And so if you're talking about the universe,
you can't talk about it coherently as a Turing machine.
Quantum mechanics is like that.
Quantum mechanics has an external clock
and has some kind of external context
depending on your interpretation.
That's either the observer or whatever.
And they're similar that way.
So maybe Turing machines and quantum mechanics
can be better friends or something
because they have a similar setup.
But the thing is if you have something that's defined
in terms of an outer context,
you can't talk about ultimates with it
because obviously it's not suited for that.
So there are some ideas that are their own context.
Relativity is its own context.
It's different.
It's hard to unify.
And I think the same thing is true
when we talk about these types of questions.
Meaning is in a context.
And to talk about ultimate meaning
is therefore a category error.
It's not a resolvable way of thinking.
It might be a way of thinking that is experientially
or aesthetically valuable
because it is awesome in the sense of awe-inspiring.
But to try to treat it analytically is not sensible.
Maybe that's what music and poetry are for.
Yeah, maybe.
I think music actually does escape any particular context.
It feels to me, but I'm not sure about that.
That's, once again, crazy artist talking, not scientist.
Well, you do both masterfully.
Like I said, I'm a big fan of everything you've done,
of you as a human being.
I appreciate the fun argument we had today
that I'm sure will continue for 30 years,
as it did with Marvin Minsky.
Honestly, I deeply appreciate that you spent
your really valuable time with me today.
Thank you so much.
Thanks for listening to this conversation with Jaron Lanier.
To support this podcast, please check out our sponsors
in the description.
And now, let me leave you with some words
from Jaron Lanier himself.
A real friendship ought to introduce each person
to unexpected weirdness in the other.
Thank you for listening and hope to see you next time.