
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.


Evolutionarily, if we see a lion running at us,
we didn't have time to calculate the lion's kinetic energy,
and is it optimal to go this way or that way,
you just react.
And physically, our bodies are well-attuned
to actually make the right decisions.
But when you're playing a game like poker,
this is not something that you ever evolved to do,
and yet you're in that same fight or flight response.
And so that's a really important skill
to be able to develop to basically learn how to meditate
in the moment and calm yourself
so that you can think clearly.
The following is a conversation with Liv Boeree,
formerly one of the best poker players in the world,
trained as an astrophysicist
and is now a philanthropist and an educator
on topics of game theory, physics, complexity, and life.
This is the Lex Fridman podcast.
To support it, please check out our sponsors
in the description.
And now, dear friends, here's Liv Boeree.
What role do you think luck plays in poker and in life?
You can pick whichever one you want,
poker or life, or both.
The longer you play, the less influence luck has.
Like with all things, the bigger your sample size,
the more the quality of your decisions
or your strategies matter.
So to answer that question, yeah, in poker, it really depends.
If you and I sat and played 10 hands right now,
I might only win 52% of the time, 53% maybe.
But if we played 10,000 hands,
then I'll probably win like over 98, 99% of the time.
So it's a question of sample sizes.
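Her sample-size point can be checked with a short Monte Carlo sketch. The 53% per-hand edge and the 10 versus 10,000 hand counts come from the conversation; the function name, trial counts, and the simplifying assumption that each hand is an independent coin flip with a fixed edge are illustrative, not how poker variance actually works hand to hand:

```python
import random

def win_rate(edge, hands, trials=500, seed=0):
    """Fraction of simulated sessions in which the better player,
    who wins each hand independently with probability `edge`,
    comes out ahead over the whole session."""
    rng = random.Random(seed)
    ahead = 0
    for _ in range(trials):
        wins = sum(rng.random() < edge for _ in range(hands))
        if wins > hands - wins:  # net winner of the session
            ahead += 1
    return ahead / trials

# A 53% per-hand edge is barely visible over 10 hands,
# but is close to a guarantee over 10,000.
short_run = win_rate(0.53, 10)               # close to a coin flip
long_run = win_rate(0.53, 10_000, trials=300)  # near certainty
```

The point of the sketch is just the convergence: the per-hand edge is tiny, but over thousands of hands the better player's win probability approaches one.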
And what are you figuring out over time?
The betting strategy that this individual uses,
or does it literally not matter,
against any individual over time?
Against any individual over time, the better player wins,
because they're making better decisions.
So what does that mean to make a better decision?
Well, to get into the real nitty-gritty already,
basically poker is the game of math.
There are these strategies, you're familiar
with Nash equilibria, that term, right?
So there are these game theory optimal strategies
that you can adopt.
And the closer you play to them,
the less exploitable you are.
So because I've studied the game a bunch,
although admittedly not for a few years,
but back when I was playing all the time,
I would study these game theory optimal solutions
and try and then adopt those strategies when I go and play.
So I'd play against you and I would do that.
And because the objective when you're playing
game theory optimal, it's actually,
it's a loss minimization thing that you're trying to do.
Your best bet is to try and play a sort of similar style.
You also need to try and adopt this loss minimization.
But because I've been playing much longer than you,
I'll be better at that.
So first of all, you're not taking advantage
of my mistakes, but then on top of that,
I'll be better at recognizing
when you are playing suboptimally
and then deviating from this game theory optimal strategy
to exploit your bad plays.
Can you define game theory and Nash Equilibria?
Can we try to sneak up to it in a bunch of ways?
Like, what's a game theory framework of analyzing poker,
analyzing any kind of situation?
So game theory is just basically the study
of decisions within a competitive situation.
I mean, it's technically a branch of economics,
but it also applies to like wider decision theory.
And usually when you see it,
it's these like little payoff matrices and so on,
that's how it's depicted.
But it's essentially just like study of strategies
under different competitive situations.
And as it happens, certain games, in fact, many, many games
have these things called Nash Equilibria.
And what that means is when you're in a Nash Equilibrium,
basically it is not, there is no strategy
that you can take that would be more beneficial
than the one you're currently taking,
assuming your opponent is also doing the same thing.
So it would be a bad idea,
if we're both playing in a game theory optimal strategy,
if either of us deviate from that,
now we're putting ourselves at a disadvantage.
Rock, paper, scissors is actually a really great example of this.
Like if we were to start playing rock, paper, scissors,
you know, you know nothing about me
and we're gonna play for all our money.
Let's play 10 rounds of it.
What would your sort of optimal strategy be?
Do you think?
What would you do?
Let's see.
I would probably try to be as random as possible.
Exactly.
You wanna, because you don't know anything about me.
You don't want to give anything away about yourself.
So ideally you'd have like a little dice
or some, you know, perfect randomizer
that makes you randomize 33% of the time
each of the three different things.
And in response to that,
well actually I can kind of do anything,
but I would probably just randomize back too.
But actually it wouldn't matter
because I know that you're playing randomly.
So that would be us in a Nash Equilibrium,
where we're both playing this like unexploitable strategy.
However, if after a while you then notice
that I'm playing rock a little bit more often
than I should.
Yeah, you're the kind of person that would do that,
wouldn't you?
Sure, yes, yes, yes.
I'm more of a scissors girl.
But anyway.
You are?
No, I'm a, as I said, randomizer.
So you notice I'm throwing rock too much
or something like that.
Now you'd be making a mistake
by continuing playing this game theory optimal strategy,
well the previous one,
because you are now, I'm making a mistake
and you're not deviating and exploiting my mistake.
So you'd wanna start throwing paper a bit more often
in whatever you figure is the right sort of percentage
of the time that I'm throwing rock too often.
So that's basically an example of where,
what game theory optimal strategy is
in terms of loss minimization,
but it's not always the maximally profitable thing
if your opponent is doing stupid stuff,
which, in that example, I was.
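The rock-paper-scissors argument can be made concrete in a few lines. The rock-heavy frequencies below are made up for illustration: the uniform random strategy earns exactly zero against any opponent, which is what makes it unexploitable but not maximally profitable, while a deviation toward paper profits against someone who throws rock too often:

```python
# Rock-paper-scissors payoffs from the first player's perspective:
# +1 for a win, -1 for a loss, 0 for a tie.
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def expected_value(mine, theirs):
    """Per-round EV when both players mix independently."""
    return sum(mine[a] * theirs[b] * payoff(a, b)
               for a in MOVES for b in MOVES)

uniform = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
rock_heavy = {"rock": 0.5, "paper": 0.25, "scissors": 0.25}  # the leak
all_paper = {"rock": 0.0, "paper": 1.0, "scissors": 0.0}

ev_gto = expected_value(uniform, rock_heavy)        # 0.0: safe, but no profit
ev_exploit = expected_value(all_paper, rock_heavy)  # 0.25 per round
```

Note the trade-off the conversation describes: the exploitative all-paper line profits 0.25 per round against the leak, but it is itself exploitable if the opponent adjusts.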
So that's kind of then how it works in poker,
but it's a lot more complex.
And the way poker players typically,
nowadays they study, the games change so much
and I think we should talk about how it sort of evolved.
But nowadays, like the top pros
basically spend all their time in between sessions
running these simulators using like software
where they do basically Monte Carlo simulations
sort of doing billions of fictitious self-play hands.
You input a fictitious hand scenario,
like, oh, what do I do with Jack-9 suited
on a King-10-4 board with two spades,
and against this bet size?
So you'd input that, press play,
it'll run its billions of fake hands
and then it'll converge upon
what the game theory optimal strategies are.
And then you wanna try and memorize what these are.
Basically they're like ratios of how often,
what types of hands you want to bluff
and what percentage of the time.
So then there's this additional layer
of inbuilt randomization built in.
Yeah, those kinds of simulations incorporate
all the betting strategies and everything else like that.
So as opposed to some kind of very crude mathematical model
of what's the probability you win
just based on the quality of the card,
it's including everything else too.
The game theory of it.
Yes, yeah, essentially.
And what's interesting is that nowadays,
if you want to be a top pro
and you go and play in these really like
the super high stakes tournaments or tough cash games,
if you don't know this stuff,
you're gonna get eaten alive in the long run.
But of course you could get lucky over the short run
and that's where this like luck factor comes in
because luck is both a blessing and a curse.
If luck didn't, you know,
if there wasn't this random element
and there wasn't the ability for worst players
to win sometimes, then poker would fall apart.
You know, the same reason people don't play chess
professionally for money against you.
You don't see people going and hustling chess,
like not knowing, trying to make a living from it
because you know there's very little luck in chess
but there's quite a lot of luck in poker.
Have you seen A Beautiful Mind, that movie?
Years ago.
Well, what do you think about the game theoretic formulation
of what is it the hot blonde at the bar?
Do you remember?
Oh yeah.
The way they'd illustrated it is
they're trying to pick up a girl at a bar
and there's multiple girls.
They're like friend, it's like a friend group
when you're trying to approach.
I don't remember the details, but I remember.
Don't you like then speak to her friends?
Yeah, yeah, yeah.
Just like that, faint disinterest.
I mean, it's classic pickup artist stuff.
Yeah.
You want to.
And they were trying to correlate that somehow,
that being an optimal strategy game theoretically.
Why, what, like I don't think, I remember.
I can't imagine that they were,
I mean, there's probably an optimal strategy.
Is it, does that mean that there's a natural Nash equilibrium
of like picking up girls?
Do you know the marriage problem?
It's optimal stopping.
Yes.
So where it's an optimal dating strategy
where you, do you remember?
Yeah, I think it's like something like you know,
you've got like a set of a hundred people
that you're going to look through.
And after how many do you, now after that,
after going on this many dates out of a hundred,
at what point do you then go,
okay, the next best person I see, is that the right one?
And I think it's like something like 37%.
It's one over E, whatever that is.
Right, which I think is 37%.
Yeah.
I'm going to fact check that.
Yeah.
Yeah.
So, but it's funny under those strict constraints,
then yes, after that many people,
as long as you have a fixed size pool,
then you just pick the next person
that is better than anyone you've seen before.
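The 37% figure they're recalling is the classic secretary (optimal stopping) problem: look at the first n/e candidates without committing, then take the first one better than everything seen so far. A quick simulation, using the textbook setup rather than anything from the conversation, recovers both the threshold and the roughly 1/e success rate:

```python
import math
import random

def secretary_success(n, threshold, trials=20_000, seed=0):
    """Probability of ending up with the single best of n candidates
    when we reject the first `threshold`, then take the first candidate
    better than everyone seen so far."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank n - 1 is the best candidate
        rng.shuffle(ranks)
        best_seen = max(ranks[:threshold], default=-1)
        picked = next((r for r in ranks[threshold:] if r > best_seen), None)
        if picked == n - 1:
            wins += 1
    return wins / trials

n = 100
k = round(n / math.e)             # ~37: stop "just looking" after 37 of 100
p_best = secretary_success(n, k)  # ~0.37, the 1/e they mention
```

So the fact-check passes: 1/e is about 0.368, and both the optimal cutoff and the success probability come out near 37%.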
Yeah.
Have you tried this?
Have you incorporated it?
I'm not one of those people.
And we're going to discuss this.
And what do you mean those people?
I try not to optimize stuff.
I try to listen to the heart.
I don't think, I like,
my mind immediately is attracted to optimizing everything.
And I think that if you really give in
to that kind of addiction,
that you lose the joy of the small things,
the minutiae of life, I think.
I don't know.
I'm concerned about the addictive nature
of my personality in that regard.
In some ways, while I think, on average,
people under-quantify things or under-optimize,
there are some people who over-optimize.
You know, it's like with all these things,
it's a balancing act.
I've been on dating apps, but I've never used them.
I'm sure they have data on this
because they probably have
the optimal stopping control problem.
Because there are a lot of people that use,
like, dating apps and are on there for a long time.
So the interesting aspect is like, all right,
how long before you stop looking,
before it actually starts affecting your mind
negatively such that you see dating as a kind of,
a game.
A kind of game versus an actual process
of finding somebody that's gonna make you happy
for the rest of your life.
That's really interesting.
They have the data.
I wish they would be able to release that data.
And I do want to-
It's OkCupid, right?
I think they ran a huge, huge study on all of their-
Yeah, they're more data-driven.
I think the OkCupid folks are.
I think there's a lot of opportunity for dating apps
and you know, even bigger than dating apps,
people connecting on the internet.
I just hope they're more data-driven
and it doesn't seem that way.
I think like, I've always thought that
Goodreads should be a dating app.
Like the-
I've never used it.
Goodreads is just like books that you've read.
And it allows you to comment on the books you've read
and what the books you're currently reading.
But it's a giant social network of people reading books.
And that seems to be a much better database of interests.
Of course, it constrains you to the books you're reading,
but that really reveals so much more about the person.
Allows you to discover shared interests
because books are kind of a window
into the way you see the world.
Also like the kind of places people you're curious about,
the kind of ideas you're curious about.
Are you romantic?
Are you cold calculating rationalist?
Are you into Ayn Rand?
Or are you into Bernie Sanders?
Are you into whatever?
Right.
And I feel like that reveals so much more
than like a person trying to look hot
from a certain angle in the Tinder profile.
Well, and it would also be a really great filter
in the first place for people.
It selects for people who read books
and are willing to go and rate them
and give feedback on them and so on.
So that's already a really strong filter
or probably the type of people you'd be looking for.
Well, you'd at least be able to fake reading books.
I mean, the thing about books,
you don't really need to read it.
You can just look at the cliff notes.
Yeah, game the dating app by feigning intellectualism.
Can I admit something very horrible about myself?
Go on.
There are, you know,
I don't know how many skeletons in my closet,
but this is one of them.
I've never actually read Shakespeare.
I've only read cliff notes.
And I got a five in the AP English exam.
Wow.
And I-
Which book?
Which books have I read?
Well, yeah, which was the exam on which book?
Oh, no, they include a lot of them.
Oh.
But Hamlet, I don't even know
if I read Romeo and Juliet, Macbeth,
I don't remember, but I don't understand it.
It's like really cryptic.
It's hard.
It's really, I don't,
and it's not that pleasant to read.
It's like ancient speak.
I don't understand it.
Anyway, maybe I was too dumb.
Man, I'm still too dumb, but I dig-
But you got a five, which is-
Yeah, yeah.
I don't know how the U.S. grading system works.
Oh, no.
So AP English is a,
there are these kind of advanced versions
of courses in high school.
And you take a test that is like a broad test
for that subject and includes a lot.
It wasn't obviously just Shakespeare.
I think a lot of it was also writing, written.
You have like AP physics, AP computer science,
AP biology, AP chemistry,
and then AP English or AP literature.
I forget what it was,
but I think Shakespeare was a part of that.
But I-
And you, the point is you gamified it?
Gamified.
Well, in its entirety, I was into getting A's.
I saw it as a game.
I don't think any,
I don't think all the learning I've done
has been outside of school.
The deepest learning I've done has been outside of school,
with a few exceptions, especially in grad school,
like deep computer science courses.
But that was still outside of school
because it was outside of getting, sorry,
it was outside of getting the A for the course.
The best stuff I've ever done is when you read the chapter
and you do many of the problems at the end of the chapter,
which is usually not what's required for the course,
like the hardest stuff.
In fact, textbooks are freaking incredible.
If you go back now and you look at like biology textbook
or any of the computer science textbooks
on algorithms and data structures,
those things are incredible.
They have the best summary of a subject,
plus they have practice problems of increasing difficulty
that allow you to truly master the basic,
like the fundamental ideas behind that.
That was, I got through my entire physics degree
with one textbook that was just this really comprehensive one
that they told us at the beginning of the first year,
buy this, but you're gonna have to buy 15 other books
for all your supplementary courses.
And I was like, every time I would just check
to see whether this book covered it and it did.
And I think I only bought like two or three extra
and thank God, because they're so super expensive textbooks.
It's a whole racket they've got going on.
Yeah, they are, they could just,
you get the right one, it's just like a manual for,
but what's interesting though,
is this is the tyranny of having exams and metrics.
The tyranny of exams and metrics, yes.
I loved them because I loved, I'm very competitive
and I liked finding ways to gamify things
and then like sort of dust off my shoulders
after it's when I get a good grade
or be annoyed at myself when I didn't.
But yeah, you're absolutely right in that the actual,
how much of that physics knowledge I've retained?
Like I've, I learned how to cram and study
and please an examiner,
but did that give me the deep lasting knowledge
that I needed?
I mean, yes and no,
but really like nothing makes you learn a topic better
than when you actually then have to teach it yourself.
You know, like I'm trying to wrap my teeth around
this like game theory, Moloch stuff right now.
And there's no exam at the end of it that I can gamify.
There's no way to gamify and sort of like
shortcut my way through it.
I have to understand it so deeply
from like deep foundational levels to then to build upon it
and then try and explain it to other people.
And like, you know,
you're about to go and do some lectures, right?
You can't sort of just like,
you presumably can't rely on the knowledge
that you got through
when you were studying for an exam to re-teach that.
Yeah, and especially high level lectures,
especially the kind of stuff you do on YouTube,
you're not just regurgitating material.
You have to think through what is the core idea here.
And when you do the lectures live, especially,
you have to, there's no second takes.
That is the luxury you get
if you're recording a video for YouTube or something like that.
But it definitely is a luxury you shouldn't lean on.
I've gotten to interact with a few YouTubers
that lean on that too much.
And you realize, oh, you've gamified this system
because you're not really thinking deeply about stuff.
Through the edit, both written and spoken,
you're crafting an amazing video,
but you yourself as a human being
have not really deeply understood it.
So live teaching, or at least recording video
with very few takes is a different beast.
And I think it's the most honest way of doing it,
like as few takes as possible.
That's what I'm nervous about this.
Don't go back to like, let's do that.
Don't fuck this up, Liv.
The tyranny of exams.
I do think people talk about high school and college
as a time to do drugs and drink and have fun
and all this kind of stuff.
But looking back, of course I did a lot of those things.
No, yes, but it's also a time when you get to
like read textbooks or read books
or learn with all the time in the world.
Like you don't have these responsibilities
of like, you know, laundry
and having to sort of pay for mortgage
or all that kind of stuff, pay taxes,
all this kind of stuff.
In most cases, there's just so much time
in the day for learning.
And you don't realize at the time,
because at the time it seems like a chore,
like why the hell is there so much homework?
But you never get a chance to do this kind of learning,
this kind of homework ever again in life,
unless later in life you really make a big effort out of it.
You basically, your knowledge gets solidified.
You don't get to have fun and learn.
Learning is really fulfilling and really fun
if you're that kind of person.
Like some people like knowledge is not something
that they think is fun.
But if that's the kind of thing that you think is fun,
that's the time to have fun and do the drugs
and drink and all that kind of stuff.
But the learning, just going back to those textbooks,
the hours spent with the textbooks is really, really rewarding.
Do people even use textbooks anymore?
Yeah.
Do you think?
Because kids these days, with their TikTok and their—
Well, not even that, but it's just like so much information,
really high quality information.
It's now in digital format online.
Yeah, but they're not, they are using that,
but college is still very, there's a curriculum.
I mean, so much of school is about rigorous study
of a subject and still on YouTube, that's not there.
Right.
YouTube has, Grant Sanderson talks about this.
He's this math—
3Blue1Brown.
Yeah, 3Blue1Brown.
He says like, I'm not a math teacher.
I just take really cool concepts and I inspire people.
But if you want to really learn calculus,
if you want to really learn linear algebra,
you should do the textbook.
You should do that.
And there's still the textbook industrial complex
that like charges like $200 for a textbook
and somehow, I don't know, it's ridiculous.
Well, they're like, oh, sorry.
New edition, edition 14.6.
Sorry, you can't use 14.5 anymore.
It's like, what's different?
We've got one paragraph different.
So we mentioned offline, Daniel Negreanu.
I'm gonna get a chance to talk to him on this podcast.
And he's somebody that I found fascinating
in terms of the way he thinks about poker,
verbalizes the way he thinks about poker,
the way he plays poker.
So, and he's still pretty damn good.
He's been good for a long time.
So you mentioned that people are running
these kinds of simulations and the game of poker has changed.
Do you think he's adapting in this way?
Like the top pros,
do they have to adapt in this way?
Or is there still like over the years,
you basically develop this gut feeling about,
like you get to be like good the way like alpha zero is good.
You look at the board
and somehow from the fog comes out the right answer.
Like this is likely what they have.
This is likely the best way to move.
And you don't really,
you can't really put a finger on exactly why,
but it's just comes from your gut feeling or no.
Yes and no.
So gut feelings are definitely very important.
You know, we've got our two modes,
or you can distill it down to two modes of decision making, right?
You've got your sort of logical, linear voice in your head, system two,
as it's often called, and your system one, your gut intuition.
And historically in poker,
the very best players were playing almost entirely by their gut.
You know, often they'd do some kind of inspired play
and you'd ask them why they do it
and they wouldn't really be able to explain it.
And that's not so much because their process was unintelligible,
but it was more just because no one had the language
with which to describe what optimal strategies were
because no one really understood how poker worked.
This was before, you know, we had analysis software,
you know, no one was writing,
you know, I guess some people would write down their hands
in a little notebook,
but there was no way to assimilate all this data and analyze it.
But then, you know, when computers became cheaper
and software started emerging and then obviously online poker,
where it would like automatically save your hand histories,
now all of a sudden you kind of had this body of data
that you could run analysis on.
And so that's when people started to see, you know,
these mathematical solutions and so what that meant
is the role of intuition essentially became smaller.
And it went more into as we talked before
about, you know, this game theory optimal style,
but also as I said, like game theory optimal
is about loss minimization and being unexploitable.
But if you're playing against people who aren't,
because no person, no human being can play perfectly
game theory optimal in poker, not even the best AIs,
they're still like, they're 99.99% of the way there
or whatever, but it's kind of like speed of light,
you can't reach it perfectly.
So there's still a role for intuition?
Yes, so when, yeah, when you're playing this unexploitable style,
when your opponents start doing something, you know,
suboptimal that you want to exploit,
well, now that's where not only your like logical brain
will need to be thinking, well, okay, I know I have this,
I'm in the sort of top end of my range here with this,
with this hand.
So that means I need to be calling x% of the time
and I put them on this range, et cetera.
But then sometimes you'll have this gut feeling
that will tell you, you know, you know what, this time,
I know, I know mathematically I'm meant to call now,
you know, I've got, I'm in the sort of top end of my range
and this is the odds I'm getting.
So the math says I should call,
but there's something in your gut saying,
they've got it this time, they've got it,
like they're beating you, maybe your hand is worse.
So then the real art, this is where the last remaining art
in poker, the fuzziness is like, do you listen to your gut?
How do you quantify the strength of it
or can you even quantify the strength of it?
And I think that's what Daniel has.
I mean, I can't speak for how much he's studying
with the simulators and that kind of thing.
I think he has, like he must be to still be keeping up,
but he has an incredible intuition for just,
he's seen so many hands of poker in the flesh,
he's seen so many people the way they behave
when the chips are, you know, when the money's on the line
and he's got him staring you down in the eye,
you know, he's intimidating.
He's got this like kind of X factor vibe
that he, you know, gives out.
And he talks a lot, which is an interactive element,
which is he's getting stuff from other people.
Yes.
Yeah.
And just like the subtlety,
so he's like, he's probing constantly.
Yeah, he's probing and he's getting this extra layer
of information that others can't.
Now that said though, he's good online as well.
You know, I don't know how, again,
would he be beating the top cash game players online?
Probably not, no.
But when he's in person
and he's got that additional layer of information,
he can not only extract it,
but he knows what to do with it still so well.
There's one player who I would say is the exception
to all of this.
And he's one of my favorite people to talk about
in terms of, I think he might have cracked the simulation.
It's Phil Hellmuth.
He...
In more ways than one, he's cracked the simulation,
I think.
He somehow to this day is still,
and I love you Phil, not in any way knocking you,
he's still winning so much
at the World Series of Poker specifically.
He's now won 16 bracelets.
The next nearest person I think has won 10.
And he is consistently year in, year out,
going deep or winning these huge field tournaments,
you know, with like 2,000 people,
which statistically he should not be doing.
And...
And yet, you watch some of the plays he makes
and they make no sense, like mathematically,
they are so far from game theory optimal.
And the thing is, if you went and stuck him
in one of these high stakes cash games
with a bunch of like, GTO people,
he's going to get ripped apart.
But there's something that he has that when he's in the halls
of the World Series of Poker specifically,
amongst sort of amateurish players,
he gets them to do crazy shit like that.
And, but my little pet theory is that also, with the cards,
he's like a wizard and he gets the cards
to do what he needs them to.
Because he just expects to win
and he expects to receive, you know, to flop a set
with a frequency far beyond what the real percentages are.
And I don't even know if he knows what the real percentages are.
He doesn't need to because he gets that.
I think he has found the cheat code, because when I've seen him play,
he seems to be like annoyed
that the long shot thing didn't happen.
Yes.
He's like annoyed and it's almost like everybody else is stupid
because he was obviously going to win with this.
He's meant to win if that silly thing hadn't happened.
And it's like, you don't understand,
the silly thing happens 99% of the time.
And it's a 1%, not the other way round,
but genuinely for his lived experience at the World Series,
only at the World Series of Poker, it is like that.
So I don't blame him for feeling that way, but he does.
He has this X-factor, and the poker community has tried for years
to rip him down saying like, you know, he doesn't, he's no good,
but he's clearly good because he's still winning.
There's something going on, whether that's he's figured out
how to mess with the fabric of reality
and how cards, you know, a randomly shuffled deck of cards come out.
I don't know what it is, but he's doing it right still.
Who do you think is the greatest of all time?
Would you put Hellmuth?
No, he's definitely, he seems like the kind of person
when mentioned he would actually watch this,
so you might want to be careful.
Well, as I said, I love Phil and I have,
I would say this to his face.
I'm not saying anything, I don't, he's got, he truly,
I mean, he is one of the greatest.
I don't know if he's the greatest,
he's certainly the greatest at the World Series of Poker
and he is the greatest at,
despite the game switching into a pure game,
almost an entire game of math,
he has managed to keep the magic alive
and this like, just through sheer force of will,
making the game work for him.
And that is incredible.
And I think it's something that should be studied
because it's an example.
Yeah, there might be some actual game theoretic wisdom.
There might be something to be said about optimality
from studying him.
What do you mean by optimality?
Meaning, or rather game design perhaps.
Meaning, if what he's doing is working,
maybe poker is more complicated
than we're currently modeling it as.
Or there's an extra layer,
and I don't mean to get too weird and wooey,
but, or there's an extra layer of ability
to manipulate the things the way you want them to go
that we don't understand yet.
Do you think Phil Hellmuth understands this?
Is he just generally...
Hashtag positivity.
He wrote a book on positivity.
He has?
He did?
Not like a trolling book?
No.
Like serious?
Straight up, yeah.
Phil Hellmuth wrote a book about positivity.
Yes.
Okay, not ironic.
I think, and I think it's about sort of manifesting
what you want and getting the outcomes that you want
by believing so much in yourself
and in your ability to win, like eyes on the prize.
And I mean, it's working.
The man's delivered.
But where do you put Phil Ivey and all those kinds of people?
I mean, I'm too...
I've been, to be honest, too much out of the scene
for the last few years to really...
I mean, Phil Ivey's clearly got, again,
he's got that X factor.
He's so incredibly intimidating to play against.
I've only played against him a couple of times,
but when he looks you in the eye
and you're trying to run a bluff on him,
oof, no one's made me sweat harder than Phil Ivey.
My bluff got through, actually.
That was actually one of the most thrilling moments
I've ever had in poker. It was in Monte Carlo,
in a high roller.
I can't remember exactly what the hand was,
but I three-bet and then just barreled all the way through.
And he just put his laser eyes into me
and I felt like he was just scouring my soul.
And I was just like, hold it together, live, hold it together.
You knew your hand was weaker.
Yeah, I mean, I was bluffing.
There's a chance I was bluffing with the best hand,
but I'm pretty sure my hand was worse.
And he folded.
I was truly one of the deep highlights of my career.
Did you show the cards or did you muck?
You should never show in-game.
Because especially as I felt like I was one of the worst players
at the table in that tournament.
So giving that information,
unless I had a really solid plan that I was now like advertising
or look, I'm capable of bluffing Phil Ivey, but like why?
And it's much more valuable to take advantage of the impression
that they have of me, which is like, I'm a scared girl
playing a high roller for the first time.
Keep that going, you know.
Interesting.
But isn't there layers to this like psychological warfare
that the scared girl might be way smart
and then like to flip the tables?
Do you think about that kind of stuff?
Definitely.
It's better not to reveal information.
I mean, generally speaking, you want to not reveal information.
You know, the goal of poker is to be as deceptive
as possible about your own strategies
while eliciting as much as possible out of your opponent about theirs.
So giving them free information,
particularly if they're people who you consider very good players,
any information I give them is going into their little database
and I assume it's going to be calculated and used well.
So I have to be really confident that my like meta gaming
that I'm going to then do, or they've seen this so therefore that,
I'm going to be on the right level.
So it's better just to keep that little secret to myself in the moment.
So how much is bluffing part of the game?
Huge amount.
So yeah, I mean, maybe actually, let me ask like, what did it feel like
with Phil Ivey or anyone else when it's a high stake,
when it's a big, it's a big bluff.
So a lot of money on the table and maybe, I mean,
what defines a big bluff, maybe a lot of money on the table
but also some uncertainty in your mind and heart, like self-doubt.
Well, maybe I miscalculated what's going on here,
what the bets said, all that kind of stuff.
Like, what does that feel like?
I mean, it's, I imagine comparable to, you know, running a,
I mean, any kind of big bluff where you have a lot of something
that you care about on the line, you know?
If you're bluffing in a courtroom,
not that I'd ever want to do that or, you know,
something equatable to that, it's, it's incredible.
You know, in that scenario, you know,
I think it was the first time I'd ever played a 25K.
I'd won my way into this 25K tournament.
So that was the buy-in, 25,000 euros.
And I had satellited my way in because it was much bigger
than I would ever, ever normally play.
And, you know, I hadn't, I wasn't that experienced at the time
and now I was sitting there against all the big boys,
you know, the Negreanus, the Phil Iveys and so on.
And then to like, you know, each time you put the bets out,
you know, you put another bet out, your card.
Yeah, I was on a, what's called a semi-bluff.
So there were some cards that could come that would make
my hand very, very strong and therefore win,
but most of the time those cards don't come.
So that is a semi-bluff because you're representing,
are you representing that you already have something?
So I think in this scenario, I had a flush draw,
so I had two clubs, two clubs came out on the flop
and then I'm hoping that on the turn and the river, one will come.
So I have some future equity.
I could hit a club and then I'll have the best hand,
in which case, great.
And so I can keep betting and I'll want them to call,
but I've also got the other way of winning the hand
where if my card doesn't come, I can keep betting
and get them to fold their hand.
And I'm pretty sure that's what the scenario was.
So I had some future equity, but it's still,
most of the time I don't hit that club.
And so I would rather him just fold
because the pot is now getting bigger and bigger.
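Her flush-draw reasoning can be made concrete with a quick calculation. This is just a sketch; the pot, bet, and fold-probability numbers are invented for illustration, not from the actual hand.

```python
from math import comb

# Flush draw after the flop: 9 clubs left among 47 unseen cards.
outs, unseen = 9, 47

# Chance that at least one club arrives on the turn or river.
miss_both = comb(unseen - outs, 2) / comb(unseen, 2)
hit_by_river = 1 - miss_both
print(f"hit flush by river: {hit_by_river:.1%}")  # roughly 35%

# Simplified semi-bluff EV: win the pot when they fold now, or
# (if called) win when the draw comes in, lose the bet when it misses.
def semi_bluff_ev(pot, bet, fold_prob, hit_prob):
    when_called = hit_prob * (pot + bet) - (1 - hit_prob) * bet
    return fold_prob * pot + (1 - fold_prob) * when_called

# Hypothetical spot: 100 in the pot, a 75 bet, opponent folds 40% of the time.
print(round(semi_bluff_ev(100, 75, 0.4, hit_by_river), 1))
```

The two ways of winning she describes, folds now plus the draw later, is why a semi-bluff can be profitable even though the club usually misses.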
And in the end, like I jam all in on the river,
that's my entire tournament on the line.
As far as I'm aware, this might be the one time
I ever get to play a big 25K.
This was the first time I played one.
So it felt like the most momentous thing.
And this is also when I was trying to build myself up,
build my name, a name for myself in poker.
Get respect.
A loss could destroy everything for you.
It felt like it in the moment.
I mean, it literally does feel like a form of life and death.
Your body physiologically is having that flight or fight response.
What are you doing with your body?
What are you doing with your face?
Are you just like, what are you thinking about?
More like a mixture of, okay, what are the cards?
So in theory, I'm thinking about like,
okay, what are cards that make my hand look stronger?
Which cards hit my perceived range from his perspective?
Which cards don't?
What's the right amount of bet size to, you know,
maximize my fold equity in this situation?
You know, that's the logical stuff that I should be thinking about.
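The fold-equity logic she alludes to has a simple break-even form: a pure bluff of size b into a pot of p profits whenever the opponent folds more than b / (p + b) of the time. A quick sketch, with illustrative bet sizes:

```python
# A pure bluff risks `bet` to win `pot`; it breaks even when the
# opponent folds bet / (pot + bet) of the time.
def breakeven_fold_freq(pot, bet):
    return bet / (pot + bet)

# Half-pot, pot-sized, and 2x-pot bluffs into a pot of 100.
for bet in (50, 100, 200):
    need = breakeven_fold_freq(100, bet)
    print(f"bet {bet} into 100 -> needs folds {need:.0%} of the time")
```

Larger bets generate more fold equity but need folds more often to break even, which is the trade-off behind her "right amount of bet size" question.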
But I think in reality, because I was so scared,
because there's this, at least for me,
there's a certain threshold of like nervousness or stress
beyond which the like logical brain shuts off.
And now it just gets into this like,
it just like, it feels like a game of wits, basically.
It's like of nerve.
Can you hold your resolve?
And it certainly got to that, like, by the river.
I think by that point, I was like,
I don't even know if this is a good bluff anymore,
but fuck it, let's do it.
Your mind is almost numb from the intensity of that feeling.
I call it the white noise.
And it happens in all kinds of decision making, I think.
Anything that's really, really stressful.
I can imagine someone in like an important job interview,
if it's like a job they've always wanted,
and they're getting grilled, you know, like Bridgewater style,
where they ask these really hard mathematical questions.
It's a really learned skill to be able to like,
subdue your flight or fight response,
you know, I think get from the sympathetic
into the parasympathetic so you can actually, you know,
engage that voice in your head
and do those slow logical calculations.
Because evolutionarily, you know,
if we see a lion running at us,
we didn't have time to sort of calculate the lion's kinetic energy
and, you know, is it optimal to go this way
or that way, you just react.
And physically, our bodies are well-attuned
to actually make right decisions.
But when you're playing a game like poker,
this is not something that you ever, you know, evolved to do.
And yet you're in that same flight or fight response.
And so that's a really important skill to be able to develop
to basically learn how to like,
meditate in the moment and calm yourself
so that you can think clearly.
But as you were searching for a comparable thing,
it's interesting because you just made me realize
that bluffing is like an incredibly high-stakes form of lying.
You're lying.
And I don't think you can...
Telling a story.
No, it's straight up lying.
In the context of a game, it's not a negative kind of lying.
But it is. Yeah, exactly.
You're representing something that you don't have.
And I was thinking like, how often in life
do we have such high-stakes of lying?
Because I was thinking,
certainly in high-level military strategy,
I was thinking when Hitler was lying to Stalin
about his plans to invade the Soviet Union.
And so you're talking to a person like your friends
and you're fighting against the enemy,
whatever the formulation of the enemy is.
But meanwhile, the whole time,
you're building up troops on the border.
That's extremely...
Wait, wait, so Hitler and Stalin were pretending to be friends?
My history knowledge is terrible.
Yeah, that they were...
Yeah, man.
And it worked because Stalin,
until the troops crossed the border
and invaded in Operation Barbarossa,
where this storm of Nazi troops
invaded large parts of the Soviet Union,
and hence one of the biggest wars in human history began,
Stalin for sure was...
He just thought that this was never going to be...
that Hitler is not crazy enough to invade the Soviet Union.
And it makes...
geopolitically makes total sense to be collaborators.
And ideologically,
even though there's a tension between communism and fascism,
or national socialism, however you formulate it,
it still feels like this is the right way to battle the West.
Right, they were more ideologically aligned.
They, in theory, had a common enemy, which is the West.
So it made total sense.
And in terms of negotiations and the way things were communicated,
it seemed to Stalin that for sure
that they would remain at least for a while
peaceful collaborators.
And everybody, because of that,
in the Soviet Union believed that it was a huge shock
when Kiev was invaded.
And you hear echoes of that when I traveled to Ukraine,
sort of the shock of the invasion.
It's not just the invasion on one particular border,
but the invasion of the capital city.
And just like, holy shit,
especially at that time when you thought World War I,
you realized that that was the war to end all wars.
You would never have this kind of war.
And holy shit, this person is mad enough
to try to take on this monster in the Soviet Union.
So it's no longer going to be a war of hundreds of thousands dead.
It'll be a war of tens of millions dead.
Yeah, but that's a very large scale kind of lie.
But I'm sure there's in politics and geopolitics
that kind of lying happening all the time.
And a lot of people pay financially
and with their lives for that kind of lying.
But in our personal lives, I don't know how often we...
Maybe we...
I think people do.
I mean, think of spouses cheating on their partners, right?
And then having to lie, like, where were you last night?
Stuff like that.
Holy shit, that's tough, yeah.
That's, I think...
I mean, unfortunately, that stuff happens all the time, right?
Or having multiple families, that one is great.
When each family doesn't know about the other one
and maintaining that life.
There's probably a sense of excitement about that too.
It seems unnecessary, yeah.
Why?
Well, just lying, like, you know.
The truth finds a way of coming out, you know.
Yes, but hence that's the thrill.
Yeah, perhaps.
Yeah, people.
I mean, and that's why I think, actually, like poker.
What's so interesting about poker is
most of the best players I know,
there are always exceptions, you know, there are always bad eggs.
But actually poker players are very honest people.
I would say they are more honest than the average,
you know, if you just took random population sample.
Because, A, you know, I think humans like to have that.
Most people like to have some kind of, you know, mysterious,
you know, an opportunity to do something a little edgy.
So we get to sort of scratch that itch of being edgy at the poker table
where it's like, it's part of the game.
Everyone knows what they're in for and that's allowed.
And you get to like really get that out of your system.
And then also like poker players learned that, you know,
I would play in a huge game against some of my friends,
even my partner Igor, where we will be, you know,
absolutely going at each other's throats,
trying to draw blood in terms of winning money off each other
and like getting under each other's skin,
winding each other up, doing the craftiest moves we can.
But then once the game's done, you know,
the winners and the losers will go off and get a drink together
and have a fun time and like talk about it
in this like weird academic way afterwards.
Because that, and that's why games are so great
because you get to like live out
this competitive urge that, you know, most people have.
What's it feel like to lose?
Like we talked about bluffing when it worked out.
What about when you, when you go broke?
So like in a game, I'm, you know,
unfortunately I've never gone broke. You mean like full life?
Full life? No.
I know plenty of people who have.
And I don't think Igor would mind me saying,
he went, you know, he went broke once in poker,
you know, early on when we were together.
I feel like you haven't lived unless you've gone broke.
Oh, yeah.
In some sense. Right.
Well, I mean, I'm happy.
I've sort of lived through it vicariously through him
when he did it at the time.
But yeah, what's it like to lose?
Well, it depends.
So it depends on the amount.
It depends what percentage of your net worth you've just lost.
It depends on your brain chemistry.
It really, you know, varies from person to person.
You have a very cold calculating way of thinking about this.
So it depends what percentage.
Well, it does, it really does, right?
Yeah, it's true.
But that's, I mean, that's another thing poker trains you to do.
You see, you see everything in percentages.
Or you see everything in like ROI or expected hourlies
or cost benefit, et cetera.
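The units she describes, ROI and expected hourlies, are straightforward to compute. The bankroll numbers below are entirely hypothetical, just to show the arithmetic:

```python
# ROI: profit relative to total buy-ins; hourly: profit per hour played.
def roi(total_cashes, total_buyins):
    return (total_cashes - total_buyins) / total_buyins

def expected_hourly(total_profit, hours_played):
    return total_profit / hours_played

# Made-up career slice: 50k in buy-ins, 65k in cashes, 400 hours at the table.
buyins, cashes, hours = 50_000, 65_000, 400
print(f"ROI: {roi(cashes, buyins):.0%}")                         # 30%
print(f"hourly: {expected_hourly(cashes - buyins, hours):.2f}")  # 37.50
```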
You know, so that's one of the things I've tried to do
is calibrate the strength of my emotional response
to the win or loss that I've received.
Because it's no good if you like, you know,
you have a huge emotional dramatic response to a tiny loss.
Or on the flip side, you have a huge win
and you're so dead inside that you don't even feel it.
Well, that's, you know, that's a shame.
I want my emotions to calibrate with reality as much as possible.
So, yeah, what's it like to lose?
I mean, I've had times where I've lost, you know,
busted out of a tournament that I thought I was going to win in,
you know, especially if I got really unlucky or,
or I make a dumb play where I've gone away and like,
you know, kicked, kicked the wall, punched a wall.
I like nearly broke my hand one time.
Like I'm a lot less competitive than I used to be.
Like I was like pathologically competitive in my like late teens, early 20s.
I just had to win at everything.
And I think that sort of slowly waned as I've gotten older.
According to you, yeah.
According to me.
I don't know if others would say the same, right?
Um, I feel like ultra competitive people,
like I've heard Joe Rogan say this to me.
It's like, he's a lot less competitive than he used to be.
I don't know about that.
Oh, I believe it.
No, I totally believe it.
Like, because as you get, you can still be like,
I care about winning.
Like when, you know, I play a game with my buddies online
or, you know, whatever it is,
Polytopia is my current obsession.
Like, when I,
Thank you for passing on your obsession to me.
Are you playing now?
Yeah, I'm playing now.
We've got to have a game.
But I'm terrible and I enjoy playing terribly.
I don't want to have a game because that's going to pull me
into your monster of, of like competitive play.
It's an important skill.
I'm, I'm enjoy playing on the, I can't.
You just do that.
You just do the points thing of, you know, against the bots.
Yeah, against the bots.
And I can't even do the, there's like a hard one
and there's a very hard one.
That's crazy.
Yeah.
That's crazy.
I can't, I don't even enjoy the hard one.
The crazy I really don't enjoy.
Cause it's intense.
You have to constantly try to win
as opposed to enjoy building a little world.
Yeah.
No, no, there's no time for exploration in Polytopia.
You got to get.
Well, when, once you graduate from the crazies,
then you can come play the, the.
Graduate from the crazies.
Yeah.
So in order to be able to play a decent game against like,
you know, our group, um, you'll need to be,
you'll need to be consistently winning like 90% of games
against 15 crazy bots.
Yeah.
And you'll be able to, like, there'll be,
I could, I could teach you it within a day, honestly.
Um, how to beat the crazies, how to beat the crazies.
And then, and then you'll be ready for the big leagues.
Generalizes, uh, to more than just Polytopia, but okay.
Uh, why were we talking about Polytopia?
Uh, losing hurts.
Losing hurts.
Oh yeah.
Yes.
Um, I think it's more that, at least for me,
I still care about play, about winning when I choose to play
something.
It's just that I don't see the world as,
as zero-sum as I used to, you know, um,
I think as you, as one gets older and wiser,
you start to see the world more as a positive-sum thing,
or at least you're more aware of the externalities of,
of scenarios of competitive interactions.
Um, and so, yeah, I just like, I'm more,
and I'm more aware of my own, you know, like,
if I have a really strong emotional response to losing
and that makes me then feel shitty for the rest of the day
and then I beat myself up mentally for it,
like I'm now more aware that that,
that's an unnecessary negative externality.
So I'm like, okay, I need to find a way to turn this down,
you know, dial this down a bit.
Was poker the thing that has,
if you think back at your life
and think about some of the lower points of your life,
like the darker places you've got in your mind,
did it have to do something with poker?
Like what, did losing spark the,
um, the descent into darkness or was it something else?
Um, I think my darkest points in poker were when
I was wanting to quit and move on to other things,
but I felt like I hadn't ticked all the boxes I wanted to tick.
Like I wanted to be the winningest female player,
but it's by itself a bad goal.
Um, you know, that was one of my initial goals
and I was like, well, I haven't, you know,
and I wanted to win a WPT event.
I've won one of these, I've won one of these,
but I want one of those as well.
And that sort of, again, like it's a drive
of like over-optimization to random metrics
that I decided were important,
um, without much wisdom at the time,
but then like carried on.
Um, that made me continue chasing it longer
than I still actually had the passion to chase it for.
And I don't, I don't have any regrets
that, you know, I played for as long as I did
because who knows, you know, I wouldn't be sitting here.
I wouldn't be living this incredible life
that I'm living now.
Um, this is, this is the height of your life right now.
This is it, peak experience, absolute pinnacle
here in your, in your robot land.
Yeah.
With your creepy light.
Um, no, it is.
I mean, I wouldn't change a thing about my life right now
and I feel very blessed to say that.
Um, so, but the dark times
were in the sort of like 2016 to 18,
even sooner really where I was like,
I had stopped loving the game
and I was going through the motions
and I would,
and then I was like, you know,
I would take the losses harder than I needed to
because I'm like, oh, it's another one.
And it was, I was aware that like,
I felt like my life was ticking away
and I was like, is this going to be what's on my tombstone?
Oh yeah, she played the game of, you know,
this zero sum game of poker,
slightly more optimally than her next opponent.
Like, cool, great legacy.
You know, so I just wanted, you know,
there was something in me that knew I needed to be doing
something more directly impactful
and just meaningful.
It was just like a search for meaning
and I think it's a thing a lot of poker players,
even a lot of, I imagine any games players
who sort of love intellectual pursuits.
Um, you know, I think you should ask
Magnus Carlsen this question.
Yeah, walking away from chess, right?
Yeah, like, it must be so hard for him.
You know, he's been on the top for so long
and it's like, well, now what?
He's got this incredible brain, like,
what to put it to.
Um, and yeah, it's...
It's this weird moment where, I've just
spoken with people that won multiple gold medals
at the Olympics, and the depression hits hard
after you win.
The comedown, the crash.
Because it's a kind of a goodbye, saying goodbye to that person
to all the dreams you had that you thought
would give meaning to your life.
But in fact, life is full of constant pursuits
of meaning.
It doesn't...
You don't, like, arrive and figure it all out
and there's endless bliss.
No, it continues going on and on.
You constantly have to figure out how to rediscover yourself.
And so for you, like, that struggle to say goodbye to poker,
you have to, like, find the next...
There's always a bigger game.
That's the thing.
That's my motto.
It's like, what's the next game?
And more importantly,
because obviously, game usually implies zero-sum.
Like, what's the game which is, like, OmniWin?
Look, what it was.
OmniWin.
Why is OmniWin so important?
Because if everyone plays zero-sum games,
that's a fast track to either completely stagnating
as a civilization, or, actually far more likely,
extincting ourselves.
You know, like, the playing field is finite.
You know, nuclear powers are playing, you know,
a game of poker with their chips of nuclear weapons, right?
And the stakes have gotten so large
that if anyone makes a single bet, you know,
fires some weapons, the playing field breaks.
I made a video on this.
Like, you know, the playing field is finite
and if we keep playing these adversarial zero-sum games,
thinking that we, you know,
in order for us to win, someone else has to lose.
Or if we lose that, you know, someone else wins,
that will extinct us.
It's just a matter of when.
What do you think about that mutually assured destruction?
That very simple, almost to the point of caricature, game theory idea
that does seem to be at the core of why we haven't blown
each other up yet with nuclear weapons.
Do you think there's some truth to that?
This kind of stabilizing force of mutually assured destruction?
And do you think that's going to hold up through the 21st century?
I mean, it has held, yes.
There's definitely truth to it that it was a, you know,
it's a Nash equilibrium.
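The Nash equilibrium she's pointing at can be illustrated with a toy payoff matrix for mutually assured destruction. The payoffs are invented solely to show the equilibrium check, not a serious model of deterrence:

```python
from itertools import product

# Toy two-player game: each side Restrains (R) or Strikes (S).
# With assured retaliation, a first strike only adds to the striker's ruin.
R, S = 0, 1
payoffs = {  # (row player's payoff, column player's payoff)
    (R, R): (0, 0),        # uneasy peace
    (R, S): (-100, -101),  # struck; the striker ends up slightly worse off
    (S, R): (-101, -100),
    (S, S): (-101, -101),
}

def is_nash(cell):
    a, b = cell
    mine, theirs = payoffs[cell]
    # Nash: neither player gains by unilaterally switching strategy.
    row_ok = all(payoffs[(alt, b)][0] <= mine for alt in (R, S))
    col_ok = all(payoffs[(a, alt)][1] <= theirs for alt in (R, S))
    return row_ok and col_ok

equilibria = [c for c in product((R, S), repeat=2) if is_nash(c)]
print(equilibria)  # [(0, 0)] -> mutual restraint is the only equilibrium
```

In this toy matrix, mutual restraint is the unique Nash equilibrium: neither side can do better by striking first, which is the stabilizing logic behind deterrence.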
Yeah, are you surprised it held this long?
Isn't it crazy?
It is crazy when you factor in all the, like,
near miss accidental firings.
That makes me wonder, like, you know,
you're familiar with the, like, quantum suicide thought experiment
where it's basically like, you have a, you know,
like a Russian roulette type scenario hooked up
to some kind of quantum event, you know, a particle splitting
or a pair of particles splitting.
And if it, you know, if it goes A, then the gun doesn't go off
and if it goes B, then it does go off and it kills you.
Because you can only ever be in the universe, you know,
assuming like the Everett branch, you know, multiverse theory,
you'll always only end up in the branch
where, you know, option A keeps coming in.
But you run that experiment enough times,
it starts getting pretty damn unlikely, you know, the tree of outcomes gets huge.
There's a million different scenarios in it,
but you'll always find yourself in the one where it didn't go off.
And so from that perspective, you are essentially immortal
because you will only find yourself
in the set of observers that make it down that path.
So it's, it's kind of.
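The observer-selection effect she describes can be simulated: unconditionally, surviving many rounds is vanishingly rare, yet every surviving observer remembers only safe outcomes. A minimal sketch with arbitrary parameters:

```python
import random

random.seed(0)  # reproducible run

# 'Quantum Russian roulette': each round survives with probability 1/2.
def survives(rounds, p_safe=0.5):
    return all(random.random() < p_safe for _ in range(rounds))

trials, rounds = 100_000, 10
survivors = sum(survives(rounds) for _ in range(trials))

# Unconditionally, surviving 10 rounds happens ~0.1% of the time;
# but every survivor's memory contains only safe outcomes.
print(f"survival rate: {survivors / trials:.4f} (theory: {0.5 ** rounds:.4f})")
```

The selection effect is that the surviving histories are the only ones with anyone left to do the remembering, so the world looks far safer from the inside than the unconditional odds suggest.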
But that doesn't mean, that doesn't, that doesn't mean
you're still not going to be fucked at some point in your life.
No, of course not.
I'm not, I'm not advocating like that we're all immortal
because of this.
It's just like a fun thought experiment.
And the point is it like raises this thing
of like these things called observer selection effects,
which Nick Bostrom talks about a lot
and I think people should go read.
It's really powerful, but I think that logic could be overextended.
I'm not sure exactly how it can be.
I just feel like you can, you can overgeneralize
that logic somehow.
Well, no, I mean, it leads you into like solipsism,
which is a very dangerous mindset.
Again, if everyone like falls into solipsism of like,
well, I'll be fine.
That's a great way of creating a very, you know,
self-terminating environment.
But my point is, is that with the nuclear weapons thing,
there have been at least, I think it's 12 or 11
near misses of like just stupid things.
Like there was moonrise over Norway
and it made weird reflections of some glaciers in the mountains,
which set off, I think, the alarms of NORAD radar,
and that put them on high alert, nearly ready to shoot.
And it was only because the head of the Russian military
happened to be at the UN in New York at the time,
that they go like, well, wait a second, why would,
why would they fire now when their guy is there?
And it was only that lucky happenstance,
which doesn't happen very often,
where they didn't then escalate it into firing.
And there's a bunch of these different ones.
Stanislav Petrov is, like, the person
who should be the most famous person on earth,
because he's probably, on expectation,
saved the most human lives of anyone,
like billions of people, by ignoring Russian orders to fire
because he felt in his gut that actually this was a false alarm
and it turned out to be, you know. A very hard thing to do.
And there's so many of those scenarios
that I can't help but wonder at this point
that we aren't having this kind of like selection effect thing going on,
because you look back and you're like, geez,
that's a lot of near misses.
But of course, we don't know the actual probabilities
that each one would have ended up in nuclear war.
Maybe they were not that likely.
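Her point about repeated near misses compounds in a simple way: even a small per-incident escalation probability adds up over a dozen incidents. The per-incident probabilities below are hypothetical, and the independence assumption is a simplification:

```python
# Probability that at least one of n near misses escalates,
# assuming (unrealistically) independent incidents.
def p_at_least_one(p_each, n_events):
    return 1 - (1 - p_each) ** n_events

# Hypothetical per-incident escalation probabilities.
for p in (0.01, 0.05, 0.10):
    print(f"p={p:.0%} per incident, 12 incidents -> {p_at_least_one(p, 12):.0%} overall")
```

Even at 5% per incident, twelve incidents give roughly even odds of at least one escalation, which is why "maybe they were not that likely" offers only limited comfort.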
But still, the point is,
it's a very dark, stupid game that we're playing.
And it is an absolute moral imperative, if you ask me,
to get as many people thinking about ways
to make this, like, very precarious situation more stable,
because we're in a Nash equilibrium,
but it's not like we're in the bottom of a pit.
You know, if you would like map it topographically,
it's not like a stable ball at the bottom of a thing.
We're in an unstable equilibrium, because we're on the top of a hill
with the ball balanced on top.
And just any little nudge could send it flying down
and, you know, nuclear war pops off and hellfire and bad times.
On the positive side, life on earth will probably still continue.
And another intelligent civilization might still pop up.
Maybe.
Several millennia after.
Pick your X-risk.
Depends on the X-risk.
Nuclear war, sure.
That's one of the perhaps less bad ones.
Green goo through synthetic biology, very bad.
That'll, you know, destroy all, you know, organic matter through,
you know, it's basically like a biological paperclip maximizer,
also bad.
Or an AI-type, you know, mass extinction thing as well would also be bad.
Shh, they're listening.
There's a robot right behind you.
Okay, wait.
So, let me ask you about this from a game theory perspective.
Do you think we're living in a simulation?
Do you think we're living inside a video game created by somebody else?
Well, I think, well, so what was the second part of the question?
Do I think we're living in a simulation and?
A simulation that is observed by somebody for purpose of entertainment.
So like a video game.
Are we listening?
Are we?
Because there's a, it's like Phil Hellmuth type of situation, right?
Like, there's a creepy level of like, this is kind of fun and interesting.
Like there's a lot of interesting stuff going on.
I mean, that could be somehow integrated into the evolutionary process
where the way we perceive and.
Are you asking me if I believe in God?
Sounds like it.
Kind of, but God seems to be not optimizing, in the different formulations of God that
we conceive of.
He or she doesn't seem to be optimizing for, like, personal entertainment.
Maybe the older gods did, but, you know, just basically like a teenager in
their mom's basement watching, creating a fun universe to observe.
What kind of crazy shit might happen?
Okay.
So to try and answer this.
Do I think there is some kind of extraneous intelligence to like our, you know,
classic measurable universe that we, you know, can measure with, you know,
through current physics and instruments?
I think so.
Yes.
Partly because I've had just small little bits of evidence in my own, in my own life,
which have made me question like, so I was a diehard atheist.
Even five years ago, you know, I got into like the rationality community,
big fan of LessWrong, which continues to be an incredible resource.
But I've just started to have too many little snippets of experience, which don't make
sense with the current sort of purely materialistic explanation of how reality works.
It's not just like a humbling practical realization that we don't know how reality works.
Isn't that just a reminder to yourself?
Yeah.
No, it's a reminder of epistemic humility because I fell too hard.
You know, same, same as people like, I think, you know, many people who are just like,
my religion is the way, this is the correct way.
This is the word.
This is the law.
You are immoral if you don't follow this blah, blah, blah.
I think they are lacking epistemic humility.
They're a little too, too much hubris there.
But similarly, I think the sort of the Richard Dawkins brand of atheism is too rigid as well
and doesn't, you know, there's a way to try and navigate these questions, which still honors
the scientific method, which I still think is our best sort of realm of like reasonable
inquiry, you know, a method of inquiry.
So an example, I have two kind of notable examples that like really rattled my cage.
The first one was actually in 2010, early on in, quite early on in my poker career.
And I remember the Icelandic volcano that erupted that like shut down kind of all Atlantic
airspace.
And it meant I got stuck down in the south of France.
I was there for something else.
And I couldn't get home.
And someone said, well, there's a big poker tournament happening in Italy.
Maybe do you want to go?
I was like, oh, right, sure, like let's, you know, got a train across, found a way to get there.
And the buy-in was 5,000 euros, which was much bigger than my bankroll would normally allow.
And so I played a feeder tournament, won my way in, kind of like I did with the Monte Carlo
big one.
So then I won my way, you know, from 500 euros into 5,000 euros to play this thing.
And on day one of then the big tournament, which turned out to have, it was the biggest
tournament ever held in Europe at the time.
Like 1,200 people, absolutely huge.
And I remember they dimmed the lights before, you know, the normal shuffle up and deal to
tell everyone to start playing.
And they played Chemical Brothers, Hey Boy, Hey Girl, which I don't know why it's notable,
but it was just like a really, it was a song I always liked.
It was like one of these like pump me up songs.
And I was sitting there thinking, oh yeah, it's exciting.
I'm playing this really big tournament.
And out of nowhere, just suddenly this voice in my head just, it sounded like my own sort
of, you know, when you think in your mind, you hear a voice kind of, right?
At least I do.
And so it sounded like my own voice.
And it said, you are going to win this tournament.
And it was so powerful that I got this like wave of like, you know, just sort of goosebumps
down my body.
And I even, I remember looking around being like, did anyone else hear that?
And obviously people are in their phones like no one else heard it.
And I was like, okay, six days later, I win the fucking tournament out of 1,200 people.
And I don't know how to explain it.
Okay.
Yes.
Maybe I have that feeling before every time I play.
And it's just that I happen to, you know, because I won the tournament, I retroactively
remembered it.
Or the, or the feeling gave you a kind of, you know, Phil Hellmuthian...
Well, exactly.
Like it gave you a confidence, a deep confidence.
And it did.
It definitely did.
Like I remember then feeling this like sort of, well, although I remember then on day
one, I then went and lost half my stack quite early on.
And I remember thinking like, oh, that was bullshit.
You know, what kind of premonition is this?
Yes.
Thinking, oh, I'm out.
But, you know, I managed to like keep it together and recover.
And then, and then just went like pretty perfectly from then on.
And either way, it definitely instilled me with this confidence.
And I don't want to put, I don't, I can't put an explanation like, you know, was it
some, you know, huge external supernatural thing driving me?
Or was it just my own self-confidence that then made me make the right decisions?
I don't know.
And I don't, I'm not going to put a frame on it.
And I think
I know a good explanation.
So we're a bunch of NPCs living in this world, created by somebody, in the simulation.
And then people, not people, creatures from outside of the simulation sort of can tune
in and play your character.
And that feeling you got is somebody just like, they got to play a poker tournament through
you.
Honestly, it felt like that.
It did actually feel a little bit like that.
But it's been 12 years now.
I've retold this story many times.
Like, I don't even know how much I can trust my memory.
You're just an NPC retelling the same story.
Because they just played the tournament and left.
Yeah.
They're like, oh, that was fun.
Cool.
Yeah.
Cool.
Next.
And now you're, for the rest of your life, left as a boring NPC retelling this
greatness.
Well, what was interesting was that after that, I didn't win a major tournament for quite
a long time.
And it left.
That was actually another sort of dark period, because I had the incredible highs of winning
that, which, you know, just on a material level were insane, winning the money.
I was on the front page of newspapers because it was like this girl that came out of nowhere
and won this big thing.
And so again, like sort of chasing that feeling was, was difficult.
But then on top of that, there was this feeling of almost being touched by something
bigger that was, like...
And also maybe did you have a sense that I might be somebody special?
Like this kind of, I think that's the confidence thing that maybe you could do something special
in this world after all kind of feeling.
I definitely, I mean, this is the thing I think everybody wrestles with to an extent,
right?
Like we are truly the protagonists in our own lives.
And so it's a natural bias, human bias to feel, to feel special.
And I think in some ways we are special, every single person is special, because for you,
the world literally does revolve around you.
That's the thing, in some respect.
But of course, if you then zoom out and take the amalgam of everyone's experiences, then
no, it doesn't.
So there is this shared sort of objective reality, but sorry, this objective reality
that is shared, but then there's also this subjective reality, which is truly unique
to you.
And I think both of those things coexist and it's not like one is correct and one
isn't.
And again, anyone who's like, uh, oh no, your lived experience is everything versus
your lived experience is nothing.
No, it's, it's, it's a blend between these two things.
They can exist concurrently.
But there's a certain kind of sense that at least I've had my whole life and I think
a lot of people have this is like, well, I'm just like this little person.
Surely I can't be one of those people that do the big thing.
Right.
There's all these big people doing big things.
There's big actors and actresses, big musicians.
There's big business owners and all that kind of stuff.
Scientists and so on.
I, you know, I have my own subjective experience that I enjoy and so on, but there's like a
different layer.
Like, um, surely I can't do those great things.
I mean, one of the things, just having interacted with a lot of great people, I realized,
oh, they're just the same humans as me.
And that realization I think is really empowering and like to remind yourself.
What are they?
Are they?
Well, depends on some.
Yeah.
They're like a bag of insecurities and peculiar sort of like their own little weirdnesses and
so on.
I should say also, they have the capacity for brilliance, but they're not generically
brilliant.
Like, you know, we, we tend to say this person or that person is brilliant, but really, no,
they're just like sitting there and thinking through stuff, just like the rest of us.
Right.
I think they're in the habit of thinking through stuff seriously.
And they've built up a habit of not allowing their mind to get trapped in a bunch
of bullshit and the minutiae of day-to-day life.
They really think big ideas, but those big ideas, it's like allowing yourself the freedom
to think big, to realize that you can be the one that actually solves this particular
big problem.
First identify a big problem that you care about, then think, I can actually be the one
that solves this problem, and allow yourself to believe that.
And I think sometimes you do need to have like that shock go through your body and a
voice tells you, you're going to win this tournament.
Well, exactly.
And whether it was, it's this idea of useful fictions.
So again, going through the classic rationalist training of LessWrong, where it's like,
you want your map, you know, the image you have of the world in your head, to as accurately
as possible match up with how the world actually is.
You want the map and the territory to perfectly align; you want it to be as accurate
a representation as possible.
I don't know if I fully subscribed to that anymore.
Having now had these moments of like feeling of something either bigger or just actually
just being overconfident.
Like, there is value in overconfidence sometimes, I think.
If you would, you know, take Magnus Carlsen, right?
I'm sure from a young age he knew he was very talented, but I wouldn't be surprised
if he also had something in him to, well, actually, maybe he's a bad example because
he truly is the world's greatest.
But someone who was unclear whether they were going to be the world's greatest, but
ended up doing extremely well because they had this innate, deep self-confidence, this
even overblown idea of how good their relative skill level is.
That gave them the confidence to then pursue this thing with the kind of focus
and dedication that it requires to excel in whatever it is you're trying to do, you know?
And so there are these useful fictions, and that's where I think I diverge slightly from
the classic sort of rationalist community, because that's a field that is worth studying,
of like, the stories we tell to ourselves, even if they are actually false, and even if
we suspect they might be false, how it's better to sort of have that little bit of faith.
Like, there's value in faith, I think, actually.
And that's partly another thing that's now led me to explore, you know, the concept
of God, whether you want to call it a simulator, the classic theological thing, I think we're
all alluding to the same thing.
Now, I don't know, I'm not saying, you know, because obviously the Christian God is, like,
all-benevolent, endless love.
The simulation, at least one of the simulation hypotheses, is like, as you said, a teenager
in its bedroom who doesn't really care, doesn't give a shit about the individuals within
there, it just wants to see how the thing plays out because it's curious, and it could
turn it off like that.
You know, where on the sort of psychopathy-to-benevolence spectrum God is, I don't know,
but having a little bit of faith that there is something else out there that might be
interested in our outcome is, I think, an essential thing for people to find, A, because
it creates commonality between us.
It's something we can all share, and it is uniquely humbling of all of us to an extent.
It's like a common objective; but B, it gives people that little bit of
like reserve, you know, when things get really dark and I do think things are going to get
pretty dark over the next few years.
But it helps to think that there's something out there that actually wants our
game to keep going.
I keep calling it the game, you know, it's a thing C and I call it, the game.
You and C, AKA Grimes, call what the game? Everything, the whole thing?
Yeah.
We joke about, like, everything is a game, well, the universe, like, what if it's a game
and the goal of the game is to figure out, well, either how to beat it, how to get out of it.
You know, maybe this universe is an escape room, like a giant escape room, and the goal
is to put all the pieces of the puzzle together, figure out how it works, in order to unlock
this hyperdimensional key and get out beyond what it is.
No, but then, so you're saying it's like different levels, and it's like a cage
within a cage within a cage, and, like, one cage at a time, you figure out how to
get out of it.
Like a new level up, you know, like us becoming multi-planetary would be a level up,
or us figuring out how to upload our consciousnesses, that would probably be a leveling
up, or spiritually, you know, humanity becoming more combined and less adversarial and
bloodthirsty, and us becoming a little bit more enlightened, that would be a leveling up.
You know, there's many different frames to it, whether it's physical, digital, or
metaphysical.
I think level one for Earth is probably the biological evolutionary process, like going
from single-cell organisms to early humans.
Then maybe level two is whatever is happening inside our minds, creating ideas and
creating technologies.
That's like an evolutionary process of ideas.
And then multi-planetary is interesting.
Is that fundamentally different from what we're doing here on Earth?
Probably, because it allows us to exponentially scale.
It delays the Malthusian trap, right?
It's a way to make the playing field larger, so that we can accommodate more of our
stuff, more of us.
And that's a good thing.
But I don't know if it fully solves this issue of, well, this thing called
Moloch, which we haven't talked about yet, but which is basically, I call it the
God of unhealthy competition.
Yeah.
Let's go to Moloch.
What's Moloch?
You did a great video on Moloch, one aspect of it, the application of it to Instagram
beauty filters, though very niche.
I wanted to start off small.
So, Moloch was originally coined, well, so apparently back in the Canaanite times, or
the ancient Carthaginian, I can never say it, Carthaginian times, somewhere around like
300 BC or 200 AD, I don't know.
There was supposedly this death cult who would sacrifice their children to this awful
demon God thing they called Moloch, in order to get power to win wars.
So really dark, horrible things.
And it was literally about child sacrifice; whether they actually existed or not, we don't
know.
But in mythology, they did.
And this God that they worshiped was this thing called Moloch.
And then, I don't know, it seemed like it was kind of quiet throughout history in terms
of mythology beyond that, until this movie Metropolis in 1927.
In this movie, you see that there was this incredible futuristic city that everyone was
living great in.
Um, but then the protagonist goes underground into the sewers and sees that the city is run
by this machine.
And this machine basically would just like kill the workers all the time because it was
just so hard to keep it running.
They were always dying.
So there was all this suffering that was required in order to keep the city going.
And then the protagonist has this vision that this machine is actually this demon Moloch.
So again, it's like this sort of like mechanistic consumption of, of humans in order to get
more power.
And then Allen Ginsberg wrote a poem in the fifties, this incredible poem called
Howl, about this thing, Moloch.
And a lot of people sort of quite understandably take the interpretation that he's
talking about capitalism.
But then the sort of pièce de résistance that moved Moloch into this idea of game theory
was when Scott Alexander of Slate Star Codex wrote this incredible piece, literally, I
think it might be my favorite piece of writing of all time.
It's called Meditations on Moloch.
Everyone must go read it.
Yeah.
And Slate Star Codex is a blog.
It's a blog.
Yes.
We can link to it in the show notes or something.
Right.
Oh no, don't.
Yes, yes, but I like how you assume I have a professional operation
going on here.
I mean, I shall try to remember.
What do you want?
You're giving the impression of it.
Yeah.
Yeah.
Please, if I don't, somebody in the comments remind me.
I'll help you.
I don't know, this blog is one of the best blogs ever, you should probably be
following it.
Yes.
Are blogs still a thing?
I think they are still a thing.
Yeah.
Yeah.
He's migrated on to Substack, but yeah, it's still a blog.
Um, anyway.
Substack, better not fuck things up, but.
I hope not.
Yeah.
I hope they don't turn Moloch-y, which will mean something to people
when we continue.
Okay.
When I stop interrupting for once, that's good.
Go on.
Yeah.
So he writes this piece, Meditations on Moloch, and basically he analyzes the poem, and
he's like, okay, it seems to be something relating to where competition goes wrong.
And, you know, Moloch was historically this thing where people would sacrifice
a thing that they care about, in this case children, their own children, in order
to gain power, a competitive advantage.
And if you look at almost everything that sort of goes wrong in our society, it's that
same process.
So with the Instagram beauty filters thing, you know, if you're trying to become a
famous Instagram model, you are incentivized to post the hottest pictures of yourself that
you can, you're trying to play that game.
There's a lot of hot women on Instagram.
How do you compete against them?
You post really hot pictures, and that's how you get more likes.
As technology gets better, you know, more makeup techniques come along, and then more
recently these beauty filters, where at the touch of a button it makes your face look
absolutely incredible compared to your natural face.
These technologies come along, and everyone is incentivized to use that short-term
strategy.
But on net, it's bad for everyone, because now everyone is kind of feeling like they
have to use these things.
The reason why I talked about them in this video is because I noticed it myself, you
know, I was trying to grow my Instagram for a while, I've given up on it now, but yeah,
I noticed these filters, how good they made me look, and I'm like, well, I know that
everyone else is kind of doing it.
Go subscribe to Liv's Instagram.
Please.
So I don't have to use the filters.
Post a bunch of, yeah, make it blow up.
Uh, yeah.
With those, you felt the pressure actually.
Exactly.
These short-term incentives to do this thing that either sacrifices your integrity or
something else in order to stay competitive, which on aggregate creates this sort of
race-to-the-bottom spiral, where everyone ends up in a situation which is worse off
than before they started.
Kind of like at a football stadium: the system is so badly designed, a competitive
system of everyone sitting and having a view, that if someone at the very front stands
up to get an even better view, it forces everyone else behind to adopt that same
strategy just to get back to where they were before, but now everyone's stuck standing
up.
So you need this top-down, God's-eye coordination to make it go back to the better
state, but from within the system, you can't actually do that.
So that's kind of what this Moloch thing is.
It's this thing that makes people sacrifice values in order to optimize for winning
the game in question, the short-term game.
But this Moloch, can you attribute it to any one centralized source, or is it an
emergent phenomenon from a large collection of people?
Exactly that.
It's an emergent phenomenon.
It's a force of game theory.
It's a force of bad incentives on a multi-agent system, where, you know, the prisoner's
dilemma is technically a kind of Moloch system as well, but it's just a two-player thing.
Another word for Moloch is a multipolar trap, where basically you've just got a lot
of different people all competing for some kind of prize.
And it would be better if everyone didn't do this one shitty strategy, but because that
shitty strategy gives you a short-term advantage, everyone's incentivized to do it.
And so everyone ends up doing it.
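The multipolar trap she describes can also be put in numbers using the beauty-filter example from earlier. This is a hedged toy model, and the payoff function and all its constants are my own invented assumptions, not hers: filtering always helps an individual relative to the field, but once everyone filters, the relative edge cancels out and only the cost remains.

```python
def likes(filtering, n_others_filtering, n_others=9):
    """Toy payoff for one of ten creators competing for attention.
    All numbers are illustrative assumptions, not real data."""
    base = 10
    edge = 5 if filtering else 0               # relative boost from filtering
    crowd = 5 * n_others_filtering / n_others  # edge eroded as others filter too
    cost = 2 if filtering else 0               # authenticity / self-image cost
    return base + edge - crowd - cost

# Nobody filters: everyone gets 10. One defector jumps to 13...
print(likes(False, 0), likes(True, 0))    # 10.0 13.0
# ...so everyone defects. With all filtering, everyone is down to 8,
# yet unilaterally stopping now would drop you to 5.
print(likes(True, 9), likes(False, 9))    # 8.0 5.0
```

Filtering is a dominant strategy at every step, which is what makes the race to the bottom individually rational; only changing the payoffs from outside the game restores the better equilibrium.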
So the responsibility for, I mean, social media is a really nice place for a large number
of people to play game theory.
And they also have the ability to design the rules of the game.
And is it on them to try to anticipate, to do the thing that poker players are doing,
to run simulations?
Ideally, that would have been great if, you know, Mark Zuckerberg and Jack and all the
Twitter founders and everyone, if they had at least just run a few simulations of how
different types of algorithms would turn out for society, that would have been great.
It's really difficult to do that kind of deep philosophical thinking about humanity,
actually.
So not this level of how do we optimize engagement or what brings people
joy in the short term, but how is this thing going to change the way people see the world?
How is it going to get morphed, in iterative games played, into something that will
change society forever?
That requires some deep thinking.
I hope there's meetings like that inside companies, but I haven't seen them.
There aren't.
That's the problem.
And it's difficult, because when you're starting up a social media company,
you know, you're aware that you've got investors to please, there's bills to pay.
There's only so much R&D you can afford to do.
You've got all these incredible pressures, you know, bad incentives, to get
on and just build your thing as quickly as possible and start making money.
And, you know, I don't think anyone intended this when they built these social media
platforms.
And just to preface it, the reason why social media is relevant is because it's a very
good example of how everyone these days is optimizing for clicks, whether it's the
social media platforms themselves, because every click gets more impressions, and
impressions get advertising dollars, or whether it's individual influencers, or the
New York Times or whoever, they're trying to get their story to go viral.
So everyone's got this bad incentive of using, as you called it, the clickbait
industrial complex.
That's a very Moloch-y system, because everyone is now using worse and worse tactics
in order to try and win this attention game.
And yeah, so ideally these companies would have had enough slack in the beginning in
order to run these experiments to see, okay, what are the ways this could possibly go wrong
for people?
What are the ways that Moloch, they should be aware of this concept of Moloch, and
realize that whenever you have a highly competitive multi-agent system, which social
media is a classic example of, millions of agents all trying to compete for likes and
so on, and you try and bring all this complexity down into very small metrics, such as
number of likes, number of retweets, whatever the algorithm optimizes for, that is a
guaranteed recipe for this stuff to go wrong and become a race to the bottom.
And I think there should be an honesty with founders, I think there's a hunger for that
kind of transparency, of like, we don't know what the fuck we're doing.
This is a fascinating experiment we're all running as a human civilization.
Let's try this out and actually just be honest about this, that we're all like these
weird rats in a maze, none of us are controlling it.
There's this kind of sense that the founders, the CEO of Instagram or whatever, Mark
Zuckerberg, has control and he's, like, playing people with strings.
No, they're-
He's at the mercy of this like everyone else, he's just trying to do his best.
And like, I think putting on a smile and doing over polished videos about how Instagram
and Facebook are good for you, I think is not the right way to actually ask some of
the deepest questions we get to ask as a society.
How do we design the game such that we build a better world?
I think a big part of this as well is people, there's this philosophy particularly in Silicon
Valley of, well, techno optimism, technology will solve all our issues.
And there's a steel man argument to that, where yes, technology has solved a lot of
problems and can potentially solve a lot of future ones, but it can also, it's always
a double-edged sword and particularly as you know, technology gets more and more powerful
and we've now got like big data and we're able to do all kinds of like psychological
manipulation with it and so on.
Technology is not a values-neutral thing.
People think, I used to always think this myself, it's this naive view that, oh,
technology is completely neutral, it's just the humans that either make it good
or bad.
No, to the point we're at now, the technologies that we are creating, they are social
technologies, they literally dictate how humans now form social groups and so on.
And beyond that, it also gives rise to the memes that we then coalesce around.
And, you know, if you have the stack that way, where it's technology driving social
interaction, which then drives memetic culture and which ideas become popular,
that's Moloch.
And we need it the other way around.
We need to figure out what are the good memes, what are the good values that we need
to optimize for, that make people happy and healthy and keep society as robust and safe
as possible, then figure out what the social structure around those should be, and only
then do we figure out technology.
But we're doing it the other way around, and, you know, as much as I love, in many
ways, the culture of Silicon Valley, and I do think that technology, I don't want to
knock it, it's done so many wonderful things for us, same as capitalism, we have to
be honest with ourselves: we're getting to a point where we are losing control of
this very powerful machine that we have created.
Can you redesign the machine within the game?
Can you just have, can you understand the game enough?
Okay, this is the game.
And this is how we start to reemphasize the memes that matter, the memes that bring out
the best in us.
You know, like the way I try to be in real life and the way I try to be online is to
be about kindness and love.
And I feel like I sometimes get criticized for being naive and all those kinds of things.
But I feel like I'm just trying to live within this game.
I'm trying to be authentic.
Yeah, but also like, hey, it's kind of fun to do this.
Like you guys should try this too.
You know, and that's like trying to redesign some aspects of the game within the game.
Is that possible?
I don't know, but I think we should try.
I don't think we have an option but to try.
Well, the other option is to create new companies, or to pressure the companies, or
anyone who has control of the rules of the game.
I think we need to be doing all of the above.
I think we need to be thinking hard about what are the kind of positive, healthy memes.
You know, as Elon said, he who controls the memes controls the universe.
You said that.
I think he did.
Yeah.
But there's truth to that.
There is wisdom in that, because memes have driven history.
You know, we are a cultural species.
That's what sets us apart from chimpanzees and everything else.
We have the ability to learn and evolve through culture as opposed to biology or like, you
know, classic physical constraints.
And that means culture is incredibly powerful.
And we can create and become victim to very bad memes or very good ones.
But we do have some agency over which memes we not only put out there, but also
subscribe to.
So I think we need to take that approach.
We also need to, you know, I'm making this video right now called The Attention Wars,
which is about how the media machine is this Moloch machine.
It's this kind of blind, dumb thing where everyone is optimizing
for engagement in order to win their share of the attention pie.
And then if you zoom out, it's really Moloch that's pulling the strings, because it's
the only thing that benefits from this in the end.
You know, our information ecosystem is breaking down.
You look at the state of the US: we're in a civil war.
It's just not a physical war.
It's an information war, and people are becoming more fractured in terms of what their
actual shared reality is.
Like, truly, an extreme left person and an extreme right person literally live in
different worlds in their minds at this point.
And it's getting more and more amplified.
And this force is like a razor blade pushing through everything.
It doesn't matter how innocuous the topic is, it will find a way to split it into this,
you know, bifurcated culture war, and it's fucking terrifying.
Because that maximizes the tension, and that's like an emergent Moloch-type force that
takes anything, any topic, and cuts through it so that it can split nicely into two
groups.
One that's-
Well, it's whatever, yeah.
Everyone within the system is just trying to maximize whatever gets them the most
attention, because they're just trying to make money so they can keep their thing going,
right?
And the best emotion for getting attention, well, because it's not just
about attention on the internet, it's engagement.
That's the key thing, right?
In order for something to go viral, you need people to actually engage with it.
They need to like comment or retweet or whatever.
And of all the emotions, you know, there's like seven classic shared emotions that
studies have found all humans, even from previously uncontacted tribes, have.
Some of those are negative, you know, like sadness, disgust, anger, et cetera.
Some are positive: happiness, excitement and so on.
The one that happens to be the most useful for the internet is anger.
Because anger is such an active emotion.
If you want people to engage, and I'm not just talking out of my ass here, there are
studies that have looked into this, whereas if someone's disgusted or fearful, they
actually tend to be like, I don't want to deal with this.
So they're less likely to actually engage and share it and so on.
Whereas if they're enraged by a thing, well, now that triggers all the old tribalism
emotions.
And so that's how things then get spread much more easily.
They out-compete all the other memes in the ecosystem.
And so the attention economy, the wheel that makes it go around, is rage.
I did a tweet, the problem with raging against the machine is that the machine has learned
to feed off rage because it is feeding off our rage.
That's the thing that's now keeping it going.
The more we get angry, the worse it gets.
So the Moloch in this war of attention is constantly maximizing rage.
What it is optimizing for is engagement, and it happens to be that what drives
engagement is more propaganda.
I mean, it just sounds like more and more things are being put through this propagandist
lens of winning whatever the war in question is, whether it's the culture war or the
Ukraine war.
Yeah.
Well, I think about the silver lining of this: do you think it's possible that in the
long arc of this process, you actually do arrive at greater wisdom and more progress?
It's just, in the moment, it feels like people are tearing each other to shreds over
ideas.
But if you think about it, one of the magic things about democracy and so on is you have
the blue versus red constantly fighting.
It's almost like they're, in this discourse, playing devil's advocate, making devils out
of each other.
And in the process, discussing ideas, almost really embodying different ideas just
to yell at each other, and through the yelling, over a period of decades, maybe centuries,
figuring out a better system.
In the moment, it feels fucked up, but in the long arc, it actually is productive.
I hope so.
That said, we are now in the era of, just as we have weapons of mass destruction, with
nuclear weapons that can break the whole playing field, we are now developing weapons
of informational mass destruction, information WMDs that basically can be used for
propaganda or just manipulating people however is needed, whether that's through dumb
TikTok videos or, there are significant resources being put in, I don't mean to sound
too doom and gloom, but there are bad actors out there.
That's the thing.
There are plenty of good actors within the system who are just trying to stay afloat in
the game, so effectively doing Moloch-y things, but then on top of that, we have actual
bad actors who are intentionally trying to manipulate the other side into doing things.
And, because it's a digital space, they're able to use artificial actors, meaning
bots.
Exactly.
Botnets.
And this is a whole new situation that we've never had before.
It's exciting.
You know what I want to do?
Because there are people talking about bots manipulating, and malicious bots that are
basically spreading propaganda, I want to create like a bot army that fights that, yeah,
exactly, for love, that fights back.
You know, there's truth to fighting fire with fire, but you always have to be careful
whenever you create, again, like, Moloch is very tricky.
Hitler was trying to spread love too.
Yeah, so he thought, but I agree with you that that is a thing that should be considered,
that there is, again, the road to hell is paved with good intentions.
And there's always unforeseen outcomes, externalities of you trying to adopt a thing, even if you
do it in the very best of faith.
But you can learn lessons of history.
If you can run some sims on it first, absolutely.
But also, there's certain aspects of a system, as we've learned through history, that do
better than others.
For example, don't have a dictator.
So if I were to create this bot army, it's not good for me to have full control over it.
Because in the beginning, I might have a good understanding of what's good and not.
But over time, that starts to deviate, because I'll get annoyed at some assholes
and I'll think, okay, wouldn't it be nice to get rid of those assholes, but then that
power starts getting to your head, you become corrupted, that's basic human nature.
So distribute the power.
We need a love botnet on a DAO, a DAO love botnet.
And without a leader, like without-
Exactly, distributed, right, but yeah, without any kind of centralized-
Yeah, basically, the more you can decentralize the control of a thing to people, the
better.
But then you still need the ability to coordinate, because that's the issue when
something is too decentralized.
You know, to me, beyond the culture wars, the bigger war we're dealing with is actually
between, I don't know what even the term is for it, but centralization versus
decentralization; that's the tension we're seeing.
Power in control by a few versus completely distributed.
And the trouble is, if you have a fully centralized thing, then you're at risk of
tyranny, you know, Stalin-type things can happen, or, completely distributed, now
you're at risk of complete anarchy and chaos, where you can't even coordinate, you
know, when there's a pandemic or anything like that.
So it's like, what is the right balance to strike between these two structures?
Can't Moloch really take hold in a fully decentralized system?
That's one of the dangers, too.
Yes.
Very vulnerable to Moloch.
So the dictator can commit huge atrocities, but they can also make sure the infrastructure
works and the trains run on time.
They have that God's eye view, at least.
They have the ability to create laws and rules that force coordination, which
stops Moloch.
But then you're vulnerable to that dictator getting infected with like this, with some
kind of psychopathy type thing.
What's reverse Moloch?
So great question.
So that's where, I've been working on this series, it's been driving me insane for the
last year and a half.
I did the first one a year ago, I can't believe it's nearly been a year.
The second one, hopefully, is coming out in like a month.
And my goal at the end of the series is to present, because basically I'm painting
the picture of what Moloch is and how it's affecting almost all these issues in our society
and how it's driving them.
It's kind of the generator function, as people describe it, of existential risk.
And then at the end of that.
Wait, wait.
The generator function of existential risk.
So you're saying Moloch is sort of the engine that creates a bunch of x-risks?
Yes.
Not all of them.
Just a cool phrase, generator function.
It's not my phrase.
It's Daniel Schmachtenberger's.
Oh, Schmachtenberger.
I got that from him.
Of course.
All things.
It's like all roads lead back to Daniel Schmachtenberger, I think.
The dude is brilliant.
After that, it's Mark Twain.
But anyway, sorry, totally rude interruption from me.
No, it's fine.
So not all x-risks.
So like an asteroid technically isn't, because it's just this one big external thing.
It's not like there's a competition thing going on. But synthetic bioweapons, that's one, because
everyone's incentivized to build, even for defense, bad viruses, just to threaten someone
else, et cetera. Or AI, technically, the race to AGI is potentially a Moloch-y situation.
But yeah, so if Moloch is this generator function that's driving all of these issues over the
coming century that might wipe us out, what's the inverse?
And so far, what I've gotten to is this character that I want to put out there called Win-Win.
Because Moloch is the God of Lose-Lose, ultimately.
It masquerades as the God of Win-Lose, but in reality, it's Lose-Lose.
Everyone ends up worse off.
So I was like, well, what's the opposite of that?
It's Win-Win.
And I was thinking for ages, what's a good name for this character?
And then the more I was like, okay, well, don't try and think through it logically.
What's the vibe of Win-Win?
And to me, in my mind, Moloch is like, and I dress as it in the video, it's red and black.
It's very hyper-focused on its one goal: you must win.
So Win-Win is kind of actually like these colors, it's like purple, turquoise.
It loves games too.
It loves a little bit of healthy competition, but constrained, kind of like before.
It knows how to ring-fence zero-sum competition into just the right amount, whereby its
externalities can be controlled and kept positive.
And then beyond that, it also loves cooperation, coordination, love, all these other things.
But it's also kind of mischievous, like, you know, it will have a good time.
It's not, like, boring, you know, like, oh God, it's hard to have fun with it.
It can get down.
But ultimately, it's unbelievably wise and it just wants the game to keep going.
And I call it Win-Win.
Win-Win.
That's a good like pet name.
Yes.
Win-Win.
The, I think the...
Win-Win.
Win-Win, right?
And I think its formal name, when it has to do official functions, is Omnia.
Omnia.
Yeah.
From, like, omniscience, kind of? Why Omnia?
It's just like Omnia.
It's just like omni-win.
Omni-win.
But I'm open to suggestions.
Or like, you know, and this is...
I like Omnia.
Yeah.
But there's an angelic kind of sense to Omnia though.
So Win-Win is more fun.
Exactly.
Like it embraces the fun aspect.
I mean, there is something about sort of, there's some aspect to Win-Win interactions
that requires embracing the chaos of the game and enjoying the game itself.
I don't know.
I don't know what that is.
That's almost like a Zen-like appreciation of the game itself, not optimizing for the
consequences of the game.
Right.
Well, it's recognizing the value of competition in and of itself.
It's not like about winning.
It's about you enjoying the process of having a competition and not knowing whether you're
going to win or lose this little thing.
But then also being aware that, you know, what's the boundary?
How big do I want competition to be?
Because one of the reasons why Moloch is doing so well now in our civilization is because
we haven't been able to ring-fence competition.
You know, and so it's just having all these negative externalities and we've completely
lost control of it.
You know, I think my guess is, and now we're getting really, like, metaphysical
technically, but I think we'll be in a more interesting universe if we have
one that has both lots of cooperation and some pockets of competition,
rather than one that's purely cooperation entirely. Like, it's good to have some little
zero-sum bits. But I don't know that fully, and I'm not qualified as a philosopher
to know that.
And that's what reverse Moloch is, so this kind of Win-Win creature, this system, is an antidote
to the Moloch system.
Yes.
And I don't know how it's going to do that.
But it's good to kind of try to start to formulate different ideas, different frameworks of how
we think about that at the small scale of a collection of individuals, a large scale
of a society.
Exactly.
It's a meme.
I think it's, I think it's an example of a good meme.
And I'm open, I'd love to hear feedback from people if they think, you know, they
have a better idea, or it's not right, you know. But it's the direction of memes that we need
to spread: this idea of, like, look for the win-wins in life.
Well, on the topic of beauty filters, so in that particular context where Moloch creates
negative consequences, you know, Dostoevsky said beauty will save the world.
What is beauty anyway?
It would be nice to just try to discuss what kind of thing we would like to converge towards
in our understanding of what is beautiful.
So to me, I think something is beautiful when it can't be reduced down to easy metrics.
Like if you think of a tree, what is it about a tree, like a big ancient beautiful tree,
right?
What is it about it that we find so beautiful?
It's not, you know, the sweetness of its fruit or the value of its lumber, it's this entirety
of it that is, there's these immeasurable qualities, it's like almost like a qualia
of it that's both, like it walks this fine line between, well, it's got lots of patternicity,
but it's not overly predictable, you know, again, it walks this fine line between order
and chaos.
It's a highly complex system.
And it's evolving over time. And this is another Schmachtenberger thing, you know,
the definition of a complex versus a complicated system.
A complicated system can be broken down into bits, understood, and then put
back together.
A complex system is kind of like a black box.
It does all this crazy stuff, but if you take it apart, you can't put it back together
again because there's all these intricacies.
And also very importantly, the whole is much greater than the sum of the parts.
And that's where the beauty lies, I think.
And I think that extends to things like art as well, like, there's something immeasurable
about it.
There's something we can't break down to a narrow metric.
Does that extend to humans, you think?
Yeah, absolutely.
So how can Instagram reveal that kind of beauty, the complexity of a human being?
Good question.
This takes us back to our dating sites and Goodreads, I think.
Very good question.
I mean, well, I know what it shouldn't do.
It shouldn't try and, like, right now, you know, I was talking to a
social media expert recently, because I was like, I hate these things.
There's such a thing as a social media expert?
Oh, yeah.
You know, there are agencies out there that you can outsource to, because I'm thinking
about working with one, because I want to start a podcast.
You should.
You should have done it a long time ago.
Working on it.
It's going to be called Win-Win.
And it's going to be about this like positive stuff.
And the thing that, you know, they always come back and say is, well, you need
to figure out what your thing is, you know, you need to narrow down what your thing
is and then just follow that.
Have, like, a sort of formula, because that's what people want.
They want to know that they're coming back to the same thing.
And that's the advice on YouTube, Twitter, you name it.
And the trouble with that is that it's a complexity reduction.
And generally speaking, complexity reduction is bad.
It's an oversimplification.
Not that simplification is always a bad thing, but when you're trying to take, you know,
what is social media doing?
It's trying to like encapsulate the human experience and put it into digital form and
commodify it to an extent.
So you do that, you compress people down into these like narrow things.
And that's why I think it's kind of ultimately fundamentally incompatible with at least my
definition of beauty.
Yeah, it's interesting because there is some sense in which a simplification sort of in
the Einstein kind of sense of a really complex idea, a simplification in a way that still
captures some core power of an idea of a person is also beautiful.
And so maybe it's possible for social media to do that.
A presentation, a sort of a sliver, a slice, a look into a person's life that reveals something
real about them.
But in a simple way, in a way that can be displayed graphically or through words.
In some way Twitter can do that kind of thing; a very small set of words can reveal the intricacies
of a person.
Of course, the viral machine that spreads those words often results in people taking
the thing out of context.
People often don't read tweets in the context of the human being that wrote them.
The full history of the tweets they've written, the education level, the humor level, the
worldview they're playing around with, all that context is forgotten and people just
see the different words.
So that can lead to trouble.
But in a certain sense, if you do take it in context, it reveals some kind of quirky
little beautiful idea or a profound little idea from that particular person that shows
something about that person.
So in that sense, can Twitter be more successful, if we talk about Moloch, at driving a better kind of incentive?
Yeah, I mean, if we were to rewrite it, is there a way to rewrite the Twitter
algorithm so that it stops being the fertile breeding ground of the culture wars?
Because that's really what it is.
Maybe I'm giving Twitter too much power, but just the more I looked into it, and I had
conversations with Tristan Harris from the Center for Humane Technology, and he explained it as:
Twitter is where you have this amalgam of human culture, and then this terribly designed
algorithm that amplifies the craziest people and the angriest, most divisive
takes, and then the mainstream media, because all the journalists are also
on Twitter, they then are informed by that and so they draw out the stories they can
from this already very boiling lava of rage and then spread that to their millions and
millions of people who aren't even on Twitter.
And so honestly, I think if I could press a button and turn it off, I probably would
at this point, because I just don't see a way of it being compatible with healthiness. But
that's not going to happen.
And so at least one way to stem the tide and make it less Moloch-y would be to change the business model.
At least if it was on a subscription model, then it's not optimizing for impressions,
because basically what it wants is for people to keep coming back as often as possible.
That's how they get paid, right?
Every time an ad gets shown to someone. And the way to do that is to get people constantly refreshing
their feed.
So you're trying to encourage addictive behaviors, whereas if they moved on to at least a subscription
model, then they're getting the money either way, whether someone comes back to the site
once a month or 500 times a month, they get the same amount of money.
So now that takes away that incentive to use technology to design an algorithm that is
maximally addictive, that would be one way, for example.
Yeah, but you still want people to...
Yeah, I just feel like that just slows down, creates friction in the virality of things.
But that's good.
We need to slow down virality.
It's good.
It's one way.
Virality is Moloch, to be clear.
So Moloch is always negative then?
Yes, by definition.
Yes.
But then I disagree with you.
Competition is not always negative.
Competition is neutral.
I disagree with you that all virality is Moloch, then. It's a good intuition,
because we have a lot of data on virality being negative.
But I happen to believe that the core of human beings, so most human beings want to be good
more than they want to be bad to each other.
And so I think it's possible, it might just be harder, to engineer systems that enable
virality where the kind of stuff that rises to the top is positive.
And positive, not like la-la positive; it's more like win-win, meaning a lot of people
need to be challenged.
Wise things.
Yeah.
You grow from it.
It might challenge you, but you ultimately grow from it.
And ultimately bring people together as opposed to tear them apart.
I deeply want that to be true, and I very much agree with you that people at their core
are on average good as opposed to, you know, care for each other as opposed to not.
Like, you know, I think it's actually a very small percentage of people are truly like
wanting to do just like destructive malicious things.
Most people are just trying to win their own little game, and they don't mean to be, you
know, they're just stuck in this badly designed system.
That said, yes, the current structure means that virality is optimized towards Moloch.
That doesn't mean there aren't exceptions.
You know, sometimes positive stories do go viral, and I think we should study them.
I think there should be a whole field of study into understanding, you know, identifying
memes that above a certain threshold of the population agree is a positive, happy,
bringing-people-together meme, the kind of thing that, you know, brings families together
that would normally argue about cultural stuff at the dinner table.
Identify those memes and figure out what it was, what was the ingredient that made them
spread that day.
And also like, not just like happiness and connection between humans, but connection
between humans in other ways that enables like productivity, like cooperation, solving
difficult problems and all those kinds of stuff, you know, so it's not just about, let's
be happy and have fulfilling lives, it's also like, let's build cool shit.
Yeah.
Let's get excited.
Which is the spirit of collaboration, which is deeply anti-Moloch, right?
It's not using competition. Moloch hates collaboration
and coordination and people working together.
And that's, you know, again, like the internet started out as that, and it could have been
that, but because of the way it was sort of structured, in terms of, you know, very lofty
ideals, they wanted everything to be open source and also free, but they needed to find a way
to pay the bills anyway, because they were still building this on top of our old economic
system.
And so the way they did that was through third-party advertisement. But that meant that things were
very decoupled, you know, you've got this third-party interest, which means
people are having to optimize for that. The actual consumer
becomes the product, not the person you're making the thing for; in the end, you're
making the thing for the advertiser, and so that's why it then breaks down.
Yeah, there's no clean solution to this, and it's a really good suggestion
of yours, actually, to figure out how we can optimize virality for positive-sum
topics.
I shall be the general of the love-bot army. Distributed, distributed, distributed.
Okay.
Yeah.
The power, just even in saying that the power already went to my head.
No.
Okay.
You've talked about quantifying your thinking, we've been talking about this sort of a game
theoretic view on life, and putting probabilities behind estimates, like, if you think about
different trajectories you can take through life, just actually analysing life in a game
theoretic way, like your own life, your personal life. I think you've given an example
that you had an honest conversation with Igor about, like, how long is this relationship
going to last, similar to our sort of marriage problem kind of discussion: having an honest
conversation about the probability of things that we sometimes are a little bit too shy
or scared to think of in probabilistic terms.
Can you speak to that kind of way of reasoning, the good and the bad of that?
Can you do this kind of thing with human relations?
Yeah.
So the scenario you're talking about, it was, like-
Yeah.
Tell me about that scenario.
Yeah.
So it was about a year into our relationship, and we were having a fairly heavy conversation
because we were trying to figure out whether or not I was going to sell my apartment, he'd
already moved in, but I think we were just figuring out what, like, our long-term plans
would be.
Should we buy a place together, et cetera.
When you guys start having that conversation, are you, like, drunk out of your mind on wine,
or are you sober and actually having a serious-
I think we were sober.
How do you get to that conversation?
Because most people are kind of afraid to have that kind of serious conversation.
Well, so, you know, our relationship was very, well, first of all, we were good friends for
a couple of years before we even, you know, got romantic, and when we did get romantic,
it was very clear that this was a big deal.
It wasn't just like another, like, you know, it wasn't a random thing.
So the probability of it being a big deal was high?
It was already very high, and then we'd been together for a year and it had been pretty
golden and wonderful.
So, you know, there was a lot of foundation already where we felt very comfortable having
a lot of frank conversations, but Igor's MO has always been much more so than mine.
He was always, from the outset, like: in a relationship,
radical transparency and honesty is the way, because the truth is the truth, whether you
want to hide it or not, you know, it will come out eventually, and if you aren't able
to accept difficult things yourself, then how could you possibly expect to be, like,
the most integral version of yourself? You know, the relationship needs this bedrock
of honesty as a foundation, more than anything.
Yeah, that's really interesting, but I would like to push against some of those ideas,
but let's throw them up.
Throw them up.
Down the line, yes.
Throw them up.
I just rudely interrupted.
That was fine.
And so, you know, we'd been together for about a year and things were good, and we were
having this hard conversation, and then he was like, well, okay, what's the likelihood
that we're going to be together in three years, then?
Because I think it was roughly a three-year time horizon.
And I was like, oh, interesting.
And then we were like, actually, wait, before you say it out loud, let's both write down
our predictions formally, because we'd been, like, we were just getting into, like, effective
altruism and rationality at the time, which is all about making, you know, formal predictions
as a means of measuring your own, well, your own foresight, essentially, in a quantified
way.
So we, like, both wrote down our percentages, and we also did a one-year prediction and
a 10-year one as well.
So we got percentages for all three, and then we showed each other.
And I remember, like, having this moment of, like, ooh, because for the 10-year one, I
was like, oh, well, I mean, I love them a lot, but, like, a lot can happen in 10 years,
you know, and we've only been together for, you know, so I was like, I think it's over
50%, but it's definitely not 90%.
And I remember, like, wrestling, I was like, oh, but I don't want him to be hurt.
I don't want him to, you know, I don't want to give a number lower than his.
And I remember thinking, I was like, don't game it.
This is an exercise in radical honesty.
So just give your real percentage.
And I think mine was, like, 75%.
And then we showed each other, and luckily, we were fairly well-aligned.
But honestly, even if we weren't...
20%.
It definitely would have. If his had been consistently lower than mine, that would
have rattled me, for sure.
Whereas if it had been the other way around, I think he would have, he's just kind of like
a water off the duck's back type of guy, be like, okay, well, all right, we'll figure
this out.
Well, did you guys provide error bars on the estimate?
Like the level on...
They came built in.
We didn't give formal plus-or-minus error bars.
I didn't draw any or anything like that.
But I guess that's the question I have is, did you feel informed enough to make such
decisions?
Because like, I feel like if I were to do this kind of thing rigorously, I would want
some data.
I would want to say that one of the assumptions you have is you're not that different from
other relationships.
Right.
And so I want to have some data about the way...
You want the base rates.
Yeah.
And also actual trajectories of relationships.
I would love to have like time series data about the ways that relationships fall apart
or prosper, how they collide with different life events: losses, job changes, moving, both
partners find jobs, only one has a job.
I want that kind of data and how often the different trajectories change in life.
Like how informative is your past to your future?
That's the whole thing I got.
Can you look at my life and have a good prediction about in terms of my characteristics of my
relationships with what that's going to look like in the future or not?
I don't even know the answer to that question.
I'd be very ill-informed in terms of making the probability estimate.
I would just be underinformed.
I would be over-biasing to my prior experiences, I think.
Right.
But as long as you're aware of that and you're honest with yourself and you're honest with
the other person and say, look, I have really wide error bars on this for the following reasons.
That's okay.
I still think it's better than not trying to quantify it at all if you're trying to make
really major irreversible life decisions.
And I feel also the romantic nature of that question.
For me personally, I try to live my life thinking it's very close to 100%.
Like, allowing myself, and this is the difficulty of this, allowing myself to
think differently, I feel like, has a psychological consequence.
That's one of my pushbacks against radical honesty, this one particular perspective
on it.
So you're saying you would rather give a falsely high percentage to your partner?
Going back to the wise sage Phil Hellmuth-
In order to sort of create this additional optimism.
The Hellmuth of fake it till you make it, the positive, the power of positive thinking.
Hashtag positivity.
Yeah.
Hashtag.
The hashtag.
Well, so that and this comes back to this idea of useful fictions, right?
And I agree.
I don't think there's a clear answer to this and I think it's actually quite subjective.
Some people this works better for than others.
To be clear, Igor and I weren't doing this formal prediction in earnest.
We did it very much tongue-in-cheek.
It wasn't like we were going to... I don't think it even would have drastically changed
what we decided to do.
We kind of just did it more as a fun exercise.
But as a consequence of that fun exercise, there was actually a deep
honesty to it too.
Exactly.
It was a deep, yeah, it was just like this moment of reflection.
I'm like, oh wow, I actually have to think through this quite critically and so on.
And it's also what was interesting was I got to check in with what my desires were.
So there was one thing of what my actual prediction is, but what are my desires and could these
desires be affecting my predictions and so on.
And that's a method of rationality.
And I personally don't think it loses anything; it didn't take any of the magic
away from our relationship, quite the opposite.
It brought us closer together because it was like, we did this weird fun thing that I appreciate.
A lot of people find quite strange.
And I think it was somewhat unique in our relationship that both of us are very, we
both love numbers.
We both love statistics.
We're both poker players.
So this was kind of like our safe space anyway.
For others, one partner like really might not like that kind of stuff at all, in which
case it's not a good exercise to do.
I don't recommend it to everybody.
But I do think it's interesting sometimes to poke holes in and probe at these
things that we consider so sacred that we can't try to quantify them. Which is interesting,
because that's in tension with the idea of what we just talked about with beauty and
what makes something beautiful: the fact that you can't measure everything about it.
And perhaps some things shouldn't be measured.
Maybe it's wrong to completely try and put a utilitarian frame of measuring the utility
of a tree in its entirety.
I don't know.
Maybe we should, maybe we shouldn't.
I'm ambivalent on that.
But overall, people have too many biases.
People are overly biased against trying to do a quantified cost-benefit analysis on
really tough life decisions.
They're like, oh, just go with your gut.
It's like, well, sure, but our intuitions are best suited for things that we've got
tons of experience in.
Then we can really trust them, if it's a decision we've made many times.
If it's like, should I marry this person or should I buy this house over that house?
You only make those decisions a couple of times in your life, maybe.
Well, I would love to know. There's probably a personal balance to strike:
the amount of rationality you apply to a question versus the useful fiction, the
fake it till you make it.
For example, just talking to soldiers in Ukraine, you ask them, what's the probability
of you winning, Ukraine winning?
Almost everybody I talk to says 100%.
You listen to the experts, they say all kinds of stuff.
First of all, the morale there is higher than probably, and I've never been to a war zone
before this, but I've read about many wars.
I think the morale in Ukraine is higher than almost anywhere I've read about.
Every single person in the country is proud to fight for their country.
Everybody, not just soldiers. Well, not everybody.
Why do you think that is specifically more than in other wars?
I think because there's perhaps a dormant desire for the citizens of this country to
find the identity of this country, because it's been going through this 30-year process
of different factions and political bickering, and they haven't had, as they talk about,
they haven't had their independence war.
They say all great nations have had an independence war.
They had to fight for their independence, for the discovery of the identity, of the
core of the ideals that unify us, and they haven't had that.
There's constantly been factions, there's been divisions, there's been pressures from
empires, from United States and from Russia, from NATO and Europe, everybody telling them
what to do.
Now they want to discover who they are, and there's that kind of sense that we're going
to fight for the safety of our homeland, but we're also going to fight for our identity.
On top of the fact that if you look at the history of Ukraine, and there's certain other
countries like this, certain cultures are feisty in their pride of being citizens
of that nation.
Ukraine is that, Poland was that, you just look at history.
Certain countries, you do not want to occupy.
Right.
I mean, both Stalin and Hitler talked about Poland in this way.
They're like, this is a big problem if we occupy this land for prolonged periods of
time.
They're going to be a pain in their ass.
They're not going to want to be occupied.
Certain other countries are pragmatic.
They're like, well, leaders come and go, I guess this is good.
Ukrainians throughout the 20th century don't seem to be the kind of people that just sit
calmly and let the, quote, unquote, occupiers impose their rules.
That's interesting though, because you said it's always been under conflict and leaders
have come and gone.
So you would expect them to actually be the opposite under that reasoning.
Because it's a very fertile land, it's great for agriculture.
So a lot of people want to occupy it. I mean, I think they've developed this culture because they've
constantly been occupied by different peoples.
So maybe there is something to that where you've constantly had to feel like within
the blood of the generations, there's the struggle against the man, against the imposition
of rules against oppression and all that kind of stuff, and that stays with them.
So there's a will there.
But a lot of other aspects are also part of that. It has to do with the reverse-Moloch kind
of situation, where social media has definitely played a part.
Also, different charismatic individuals have played a part.
The fact that the president of the nation, Zelensky, stayed in Kiev during the invasion
is a huge inspiration to them because most leaders, as you could imagine, when the capital
of the nation is under attack, the wise thing, the smart thing that the United States advised
Zelensky to do is to flee and to be the leader of the nation from a distant place.
He said, fuck that, I'm staying put.
Everyone around him, there was a pressure to leave and he didn't.
And that in those singular acts really can unify a nation.
There's a lot of people that criticize Zelensky within Ukraine.
Before the war, he was very unpopular, even still, but they put that aside, especially for
that singular act of staying in the capital.
A lot of those kinds of things come together to create something within people.
These things always depend, of course, on how zoomed-out of a view you want to take.
Because yeah, you describe it as like an anti-Moloch thing happened within Ukraine, because it brought
the Ukrainian people together in order to fight a common enemy.
Maybe that's a good thing.
Maybe that's a bad thing.
In the end, we don't know how this is all going to play out, right?
But if you zoom it out on a global level, they're coming together to fight.
That could make a conflict larger, you know what I mean?
I don't know what the right answer is here.
It seems like a good thing that they came together, but we don't know how this is all
going to play out.
If this all turns into a nuclear war, we'll be like, okay, that was the bad, that was the
...
You're describing the reverse Moloch at the local level.
Exactly.
Yeah.
Now, this is where the experts come in and they say, well, if you channel most of the
resources of the nation, and of the nations supporting Ukraine, into the war effort, are you not beating
the drums of a war that is much bigger than Ukraine?
In fact, even the Ukrainian leaders are speaking of it this way.
This is not a war between two nations.
This is the early days of a world war if we don't play this correctly.
Yes.
We need cool heads from our leaders.
From Ukraine's perspective, Ukraine needs to win the war, because what winning the
war means is coming to peace negotiations, an agreement that guarantees no more invasions.
Then you make an agreement about what land belongs to whom.
You stop that.
Basically, from their perspective, you want to demonstrate to the rest of the world who's
watching carefully, including Russia and China and different players on the geopolitical
stage that this kind of conflict is not going to be productive if you engage in it.
You want to teach everybody a lesson, let's not do World War III.
It's going to be bad for everybody.
It's a lose-lose.
It's a deep lose-lose.
It doesn't matter.
I think that's actually a correct... When I zoom out, 99% of what I think about is just
individual human beings and human lives and just that war is horrible.
When you zoom out and think from a geopolitics perspective, we should realize that it's entirely
possible that we will see a World War III in the 21st century.
This is like a dress rehearsal for that.
The way we play this as a human civilization will define whether we do or don't have a
World War III.
How we discuss war, how we discuss nuclear war, the kind of leaders we elect and prop
up, the kind of memes we circulate because you have to be very careful when you're being
pro-Ukraine, for example.
You have to realize that you are also indirectly feeding the ever-increasing military industrial
complex.
You have to be extremely careful that when you say pro-Ukraine, or pro-anybody, you're
pro human beings, not pro the machine that creates narratives saying it's pro human
beings when, if you look at the raw use of funds and resources, it's actually
pro making weapons and shooting bullets and dropping bombs.
We have to just somehow get the meme into everyone's heads that the real enemy is war
itself.
That's the enemy we need to defeat.
That doesn't mean to say that there isn't justification for small local scenarios, adversarial conflicts.
If you have a leader who is starting wars, they're on the side of team war, basically.
It's not that they're on the side of team country, whatever that country is, it's they're
on the side of team war.
That needs to be stopped and put down, but you also have to find a way that your corrective
measure doesn't actually then end up being co-opted by the war machine and creating greater
war.
Again, the playing field is finite, the conflict is now getting so big that the weapons that
can be used are so mass destructive that we can't afford another giant conflict.
We won't make it.
What existential threat in terms of us not making it, are you most worried about?
What existential threat to human civilization?
Going down a dark path, huh?
Well, no, it's a dark...
No, it's like, well, we're in the somber place, we might as well.
Some of my best friends are dark paths.
What worries you most?
We mentioned asteroids, we mentioned AGI, nuclear weapons.
The one that's on my mind the most, mostly because I think it's the one where we
actually have a real chance to move the needle in a positive direction, or more specifically
stop some really bad, really dumb, avoidable things from happening, is biorisks.
What kind of biorisks?
In terms of...
So many fun options.
Yeah, so many.
Of course, we have risks from natural pandemics, naturally occurring viruses or pathogens,
and then also as time and technology goes on and technology becomes more and more democratized
and into the hands of more and more people, the risk of synthetic pathogens.
And whether or not you fall into the camp of COVID was a gain-of-function accidental
lab leak, or whether it was purely naturally occurring, either way, we are facing a future
where synthetic pathogens, or human-meddled-with pathogens, either accidentally get out
or get into the hands of bad actors, whether they're omnicidal maniacs or otherwise.
And so that means we need more robustness for that.
And you would think that us having this nice little dry run, which is what COVID was,
as awful as it was, and all those poor people that died, it was still child's play compared
to what a future one could be in terms of fatality rate.
And so you'd think that we would then be much more robust in our pandemic preparedness.
And meanwhile, sorry, I can't remember the name of the actual budget, but it was like
a multi-trillion-dollar budget that the US just set aside in the last two years.
And originally in that, considering that COVID cost multiple trillions to the economy, right?
The original allocation in this new budget for future pandemic preparedness was 60 billion,
so a tiny proportion of it.
That proceeded to get whittled down to like 30 billion, to 15 billion, all the way down
to two billion out of multiple trillions for a thing that has just cost us multiple trillions.
We've barely even finished; we're not even really out of it.
It basically got whittled down to nothing because for some reason people think that,
whew, all right, we've got the pandemic out of the way, that was that one.
And the reason for that is that people are, and I say this with all due respect to a lot
of the science community, but there's an immense amount of naivety; they think
that nature is the main risk moving forward, and it really isn't.
And I think nothing demonstrates this more than this project that I was just reading
about, that's sort of being proposed right now, called Deep VZN.
And the idea is to go out into the wild, and we're not talking about within cities,
but deep into caves that people don't go to, deep into the Arctic,
wherever, and scour the earth for whatever the most dangerous possible pathogens are
that they can find.
And then not only try and find these, but bring samples of them back to laboratories.
And again, whether you think COVID was a lab leak or not, I'm not going to get into that,
but as a civilization, we have historically had so many lab leaks,
even from the highest-security labs.
People should go and just read about it.
It's like a comedy show of just how many there are, how leaky these labs are, even when they
do their best efforts.
So bring these things then back to civilization.
That's step one of the badness.
The next step would be to then categorize them, do experiments on them, and
rank them by their level of potential pandemic lethality.
And then the pièce de résistance of this plan is to then publish that information freely
on the internet about all these pathogens, including their genomes, which are literally
the building instructions for how to make them.
And this is something that a pocket of the scientific community genuinely
thinks is a good idea.
And their argument is that, oh, this is good
because it might buy us some time to develop vaccines, which, okay, sure,
maybe would have made sense prior to mRNA technology.
But with mRNA, when we find a new pathogen, we can develop a vaccine within a couple
of days.
And then there's all the trials and so on, but those trials would have to happen
anyway in the case of a brand new thing.
So you're saving maybe a couple of days.
So that's the upside.
Meanwhile, the downside is you're not only bringing the risk of these pathogens
getting leaked, but you're literally handing it out to every bad actor on earth,
who would be doing cartwheels.
And I'm talking about Kim Jong-un, ISIS, people who think the rest
of the world is their enemy, and in some cases think that killing it themselves is
a noble cause, and you're literally giving them the building blocks of how to
do this.
It's the most batshit idea I've ever heard.
Like, on expectation, it's probably minus EV of multiple billions of lives if they
actually succeeded in doing this, certainly in the tens or hundreds of millions.
So the cost-benefit is so unbelievably lopsided, it makes no sense.
And I was trying to wrap my head around why, what's going
wrong in people's minds to think that this is a good idea.
And it's not that it's malice or anything like that.
I think it's that the proponents are overly naive about the interactions of humanity,
and about the fact that there are bad actors who will use this for bad things.
Because if you publish this information, even if a bad actor couldn't
physically make it themselves, and given that in 10 years' time the technologies
are getting cheaper and easier to use, they probably could, but even if they couldn't make it, they could
now bluff it.
Like, what would you do if there's some deadly new virus whose building blocks were published
on the internet? Kim Jong-un could be like, hey, if you don't
let me build my nuclear weapons, I'm going to release this, I've managed to
build it.
Well, now he's actually got a credible bluff.
We don't know.
And so it's just like handing the keys, handing weapons of mass destruction to people. It makes no sense.
I agree with you, but the possible world in which it might make sense is if the
good guys, and defining who the good guys are is a whole other problem, are
an order of magnitude higher in competence.
And so they can stay ahead of the bad actors by just being very good at the defense, and
not just a little bit better, but an order of magnitude better.
But of course, the question is in each of those individual disciplines, is that feasible?
Can the bad actors, even if they don't have the competence, leapfrog to the
place where the good guys are?
Yeah, I mean, I would agree in principle pertaining to this particular plan,
the thing I described, where at least that would maybe make sense
for steps one and two of getting the information.
But then why would you release the information to your literal enemies?
That doesn't fit at all in that perspective of trying
to be ahead of them.
You're literally handing them the weapon.
There's different levels of release, right? So there's the kind of secrecy where you don't
give it to anybody, but there's a release where you incrementally give it to major
labs.
So it's not public release; there's different layers of reasonability.
But the problem there is, if you go anywhere beyond complete secrecy, it's going to leak.
That's the thing.
It's very hard to keep secrets.
So the argument is that the information is going to leak anyway, so you might as well release it to the public.
So you either go complete secrecy or you release it to the public, which is essentially
the same thing.
It's going to leak anyway.
If you don't do complete secrecy, right, which is why you shouldn't get the information
in the first place.
Yeah.
I mean, I think a solution is either don't get the information
in the first place, or keep it incredibly, incredibly contained.
See, I think it really matters which discipline we're talking about.
So in the case of biology, I do think you're very right. We shouldn't even, it should
be forbidden to even think about that, meaning don't even collect
the information.
I mean, gain-of-function research is a really iffy
area.
I mean, it's all about cost-benefits, right?
There are some scenarios where I could imagine the cost-benefit of gain-of-function research
is very, very clear, where you've evaluated all the potential risks, factored in the probability
that things can go wrong, not only known unknowns but unknown unknowns
as well, and tried to quantify that.
And even then, it's orders of magnitude better to do it.
I'm behind that argument, but the point is that there's this naivety that's preventing
people from even doing the cost-benefit properly on a lot of these things.
Because, I get it, and I don't want to bucket the whole science community,
but some people within it just think that everyone's good, and everyone
just cares about getting knowledge and doing the best for the world.
And unfortunately, that's not the case.
I wish we lived in that world, but we don't.
Yeah.
I mean, listen, I've been criticizing the science community broadly quite a bit.
There's so many brilliant people whose brilliance is somehow hindering sometimes, because it
comes with a bunch of blind spots.
And then you start to look at a history of science, how easily it's been used by dictators
to any conclusion they want.
And it's dark how you can use brilliant people that like playing the little game of
science, because it is a fun game.
You're building, you're going to conferences, you're building on top of each
other's ideas, breakthroughs.
Hey, I think I've realized how this particular molecule works and I could do this kind of
experiment, and everyone else is impressed.
Oh, cool.
No, I think you're wrong, and here's maybe why you're wrong.
And in that little game, everyone gets really excited.
Oh, I came up with a pill that solves this problem and it's going to help a bunch of
people.
And I came up with a giant study that shows the exact probability it's going to help
or not.
And you get lost in this game, and you forget to realize this game, just like Moloch, can
have unintended consequences that might destroy human civilization, or divide
human civilization, or have dire geopolitical consequences.
I mean, the most destructive effects of COVID have nothing to do with the biology of the
virus, it seems like.
I could just list them forever, but one of them is the complete distrust of public
institutions.
Yeah.
The other one is, because of that public distrust, I feel like if a much worse pandemic came
along, we as a world have now cried wolf.
And if an actual wolf now comes, people will be like, fuck masks, fuck vaccines,
fuck everything.
And they'll distrust every single thing that any major institution
is going to tell them.
Yeah.
Because that's the thing.
There were certain actions made by certain public health figures
where they very knowingly told a white lie, intended in the
best possible way.
Such as, early on, when there was clearly a shortage of masks,
they said to the public, oh, don't get masks.
There's no evidence that they work. Don't get them, they
don't work.
In fact, it might even make it worse.
You might even spread it more.
That was the real stinker.
Yeah.
No, no.
Unless you know how to do it properly, you're going to get
sicker, or you're more likely to catch the virus, which is just absolute crap.
And they put that out there.
And it's pretty clear the reason why they did that was because there was actually a shortage
of masks and they really needed it for health workers, which makes sense.
Like, I agree, but there's a cost to lying to the public.
When that then comes out, people aren't as stupid as they think they are,
and that's, I think, where this distrust of experts has largely come from.
A, they've lied to people overtly, but B, people have been treated like idiots.
Now that's not to say that there are a lot of stupid people who have a lot of wacky ideas
around COVID and all sorts of things.
But if you treat the general public like children, they're going to
notice that, and that is going to absolutely decimate the trust in the
public institutions that we depend upon.
And honestly, the best thing that could happen, I wish, is if Fauci, and
these other leaders who, I mean, God, I can't imagine what a nightmare his
job has been over the last few years.
Hell on earth.
So, you know, I have a lot of sympathy for the position
he's been in.
But if he could just come out and be like, okay, look, guys, hands up, we didn't
handle this as well as we could have.
These are all the things I would have done differently in hindsight.
I apologize for this and this and this. That would go so far, and maybe I'm
being naive.
Who knows?
Maybe this would backfire.
But I don't think it would, even to someone like me, because I've lost
trust in a lot of these things, though I'm fortunate that I at least know people who I can go to who
I think have good epistemics on this stuff.
But if they could just put their hands up and go, okay, these are
the spots where we screwed up: this, this, this.
This was our reasons.
Yeah, we actually told a little white lie here.
We did it for this reason.
We're really sorry.
If they just did the radical honesty thing, the radical transparency thing,
that would go so far toward rebuilding public trust, and I think that's what needs
to happen.
Yeah.
I totally agree with you.
Unfortunately, his job was very tough and all those kinds of things, but I see arrogance
and arrogance prevented him from being honest in that way previously, and I think arrogance
will prevent him from being honest in that way.
Now, we need leaders, I think young people are seeing that, that kind of talking down
to people from a position of power, I hope is the way of the past.
People really like authenticity, and they like leaders that are a man or
a woman of the people.
And I think that just, I mean, he still has a chance to do that, I think.
I mean, I doubt he's listening, but if he is: hey, I don't think he's irredeemable by any means.
I don't have an opinion on whether there was arrogance there or not.
I just think coming clean matters, because it's understandable to
have fucked up during this pandemic.
I wouldn't expect any government to handle it well, because it was so difficult,
so many moving pieces, such a lack of information, and so on.
But the step to rebuilding trust is to go, okay, look, we're doing a scrutiny of where
we went wrong.
And for my part, I did this and this wrong.
And that would be huge.
All of us can do that.
I mean, I was struggling for a while whether I want to talk to him or not.
I talked to his boss, Francis Collins.
Another person that screwed up in terms of trust, that I lost a little bit of respect for.
There seems to have been a kind of dishonesty in the back rooms, in that they didn't trust
people to be intelligent.
Like we need to tell them what's good for them.
We know what's good for them, that kind of idea.
To be fair, there's this thing, what's it called, I heard the phrase today: nut picking.
Social media does that.
So you've got nit picking; nut picking is where, if you have a group of people,
let's say people who are, I don't like the term anti-vaccine, people who are vaccine
hesitant, vaccine speculative, what social media or the media or their opponents would
do is pick the craziest example.
So the ones who are like, I think I need to inject myself with motor oil up my
ass or something.
Select the craziest ones and then have that beamed out, so from
someone like Fauci or Francis's perspective, that's what they get, because
they're getting the same social media stuff as us.
they're getting the same social media stuff as us.
They're getting the same media reports.
I mean, they might get some more information, but they too are going to get the nuts
portrayed to them.
So they probably have a misrepresentation of what the actual public's intelligence is.
Well, yes, and that just means they're not social media savvy.
So one of the skills of being on social media is to be able to filter that in your mind,
like to understand, to put into proper context.
To realize that what you are seeing on social media is not anywhere near an accurate representation
of humanity.
It's nut picking.
And there's nothing wrong with putting motor oil up your ass, it's just one of the
better aspects.
Hey, do what you want.
What do you want to do?
Yeah.
I do this every weekend.
Okay.
Where did that analogy in my mind come from?
Like what?
I don't know.
I think you need to, there's some Freudian thing you would need to deeply investigate
with a therapist.
Okay.
What about AI?
Are you worried about AGI, superintelligence systems or paperclip maximizer type of situation?
Yes.
I'm definitely worried about it.
But I feel kind of bipolar about it. Some days I wake up and I'm excited about
the future.
Well, exactly.
I'm like, wow, we can unlock the mysteries of the universe, you know, escape the game.
Because I'm spending all my time thinking about these
Moloch-y problems, like, what is the solution to them?
Well, in some ways you need this omnibenevolent, omniscient, omniwise coordination
mechanism that can make us all not do the Moloch-y thing, or provide
the infrastructure, or redesign the system so that it's not vulnerable to this Moloch-y
process.
And in some ways, that's the strongest argument to me for the
race to build AGI: that maybe we can't survive without it.
But the flip side to that is that, unfortunately, now that there are multiple
actors trying to build AGI, you know, this was fine 10 years ago when
it was just DeepMind, but then other companies started up and now it's created a race dynamic.
Now it's got the same problem.
Whichever company optimizes for speed at the cost of safety
will get the competitive advantage, and so will be the more likely one to build the AGI.
And it's that same cycle that you're in.
And there's no clear solution to that, because if you go and try and stop all the different
companies, then the good ones will stop, because they're the ones within the West's
reach, but that leaves all the other ones to continue.
And then they're even more likely.
So it's like, it's a very difficult problem with no clean solution.
And at the same time, I know at least some of the folks at
DeepMind, and they're incredible, and they're thinking about this, they're very aware of
this problem.
And they're, I think, some of the smartest people on earth.
Yeah, the culture is important there because they are thinking about that and they're some
of the best machine learning engineers.
So it's possible to have a company or a community of people that are both great engineers and
are thinking about the philosophical topics.
Exactly.
And importantly, they're also game theorists, because this is ultimately a
game theory problem, this Moloch mechanism, like, how do we avoid these arms race scenarios?
You need people who aren't naive to be thinking about this.
And luckily, there's a lot of smart, non-naive game theorists within
that group.
Yes, I'm concerned about it, and I think it's again a thing that we need people to
be thinking about, in terms of how do we mitigate the arms race
dynamics, and how do we solve what Bostrom calls the orthogonality
problem.
The hope is that you build something that's superintelligent, and by definition of being
superintelligent, it will also become superwise and have the wisdom to know what the
right goals are.
And hopefully those goals include keeping humanity alive, right?
But Bostrom says that actually those two things, superintelligence and superwisdom,
aren't necessarily correlated; they're actually kind of orthogonal.
And how do we make it so that they are correlated?
How do we guarantee it?
Because we need it to be guaranteed, really, to know that we're doing the thing safely.
But with that merging of intelligence and wisdom, at least my hope is that this
whole process happens sufficiently slowly, that we're constantly having these kinds of
debates, and that we have enough time to figure out how to modify each version of the system
as it becomes more and more intelligent.
Yes, buying time is a good thing, definitely.
Anything that slows everything down. Everyone just needs to chill out.
We've got millennia to figure this out. Or at least, well, it depends; some people
think that we can't even make it through the next few decades without having some kind
of omniwise coordination mechanism.
And there's also an argument for that. Yeah, I don't know.
Well, I'm suspicious of that kind of thinking because it seems like the entirety of human
history has people in it that are like predicting doom just around the corner.
There's something about us that is strangely attracted to that thought.
It's almost like fun to think about the destruction of everything.
Just objectively speaking, I've talked and listened to a bunch of people and they are
gravitating towards that.
I think it's the same thing that people love about conspiracy theories: they
love to be the person that figured out some deep, fundamental thing that's going
to mark something extremely important about the history of human civilization, because
then they will be important.
When in reality, most of us will be forgotten and life will
go on.
And one of the sad things about whenever anything traumatic happens to you, whenever you lose
loved ones or just tragedy happens, you realize life goes on.
Even after a nuclear war that wipes out some large percentage of the population and
tortures people for years to come because of the effects of a nuclear winter, people
will still survive, life will still go on.
It depends on the kind of nuclear war, but even in that case, it will still go on.
That's one of the amazing things about life, it finds a way.
In that sense, I feel like the doom and gloom thing is a...
Well, yeah, we don't want a self-fulfilling prophecy.
Yes, exactly.
Yes.
And I very much agree with that, and I even have a slight feeling, from the amount
of time we've spent in this conversation talking about this, like, is this even
a net positive?
Because in some ways, making people imagine these bad scenarios can be a self-fulfilling prophecy.
But at the same time, that's weighed off with at least making people aware of the problem
and gets them thinking.
And I think, particularly, the reason why I want to talk about this to your audience
is that on average, they're the type of people who gravitate towards these kind of topics
because they're intellectually curious and they can sort of sense that there's trouble
brewing.
Yeah.
They can smell that there's...
I think the reason people are thinking about this stuff a lot is because the probability
has increased over the last few years; trajectories have
not gone favorably since 2010.
So it's right, I think, for people to be thinking about it.
That's whether it's a useful fiction or whether it's actually true or whatever you want to
call it.
I think having this faith, this is where faith is valuable, because it gives you at least
this anchor of hope.
And I'm not just saying it to trick myself.
I do truly...
I do think there's something out there that wants us to win.
I think there's something that really wants us to win.
And it just...
You just have to be like, okay, now I sound really crazy, but open your heart
to it a little bit, and it will give you the breathing room with which to marinate
on the solutions.
We are the ones who have to come up with solutions, but we can use that. Hashtag positivity.
There's value in that.
You have to kind of imagine all the destructive trajectories that lay in our future and then
believe in the possibility of avoiding those trajectories.
All while, you said audience, the two people that listen
to this are probably sitting on a beach, smoking some weed, watching a beautiful sunset, or
looking at the waves going in and out.
And ultimately, there's a kind of deep belief there in the momentum of humanity to figure
it all out, but we've got a lot of work to do, which is what makes this whole simulation,
this video game kind of fun.
This Battle of Polytopia, man, I love those games so much. And that one, for
people who don't know, The Battle of Polytopia is a really radical simplification of a Civilization-type
game.
It still has a lot of the skill tree development, a lot of the strategy, but it's easy enough
to play on a phone.
Yeah.
It's kind of interesting.
They've really figured it out.
It's one of the most elegantly designed games I've ever seen.
It's incredibly complex, and yet, again, it walks that line between complexity and simplicity
in this really, really great way.
And they use pretty colors that hack the dopamine reward circuits in our brains very well.
It's fun.
Video games are so fun.
Most of this life is just about fun, escaping all the suffering to find the fun.
What's energy healing?
I have in my notes energy healing question mark.
What's that about?
Oh, man.
God, your audience is going to think I'm mad.
So the two crazy things that happened to me, the one was the voice in the head that said
you're going to win this tournament, and then I won the tournament.
The other craziest thing that's happened to me was in 2018, I started getting this weird
problem in my ear where it was low-frequency sound distortion, where voices, particularly
men's voices, became incredibly unpleasant to listen to.
It would be falsely amplified or something, and it was almost
like a physical sensation in my ear, which was really unpleasant.
And it would last for a few hours and then go away, and then come back for a few hours
and go away.
And I went and got hearing tests, and they found that at the bottom end, I was losing the
hearing in that ear.
And in the end, doctors said they think it was this thing called Ménière's disease,
which is this very unpleasant disease where people basically end up losing their hearing.
It often comes with dizzy spells and other things, because the inner ear gets
all messed up.
Now, I don't know if that's actually what I had, but that's what at least one doctor
said to me.
But anyway, so I'd had three months of this going on, and it was really getting
me down.
I was at Burning Man, of all places, I don't mean to be that person talking about Burning
Man, but I was there, and again, I'd had it, and I was unable to listen to music, which
is not what you want, because Burning Man is a very loud, intense place, and I was just
having a really rough time.
And on the final night, I get talking to this girl who's like a friend of a friend.
And I mentioned, I was like, oh, I'm really down in the dumps about this.
And she's like, oh, well, I've done a little bit of energy healing, would you like me to
have a look?
I was like, sure.
Now, this is, again, deep, I was, you know, no time in my life for this, I didn't believe
in any of this stuff.
I was just like, it's all bullshit, it's all woo-woo nonsense.
I was like, sure, have a go.
And she starts with her hand, and she says, oh, there's something there.
And then she leans in, and she starts sucking over my ear, not actually touching me, but
close to it, with her mouth.
And it was really unpleasant.
I was like, well, can you stop?
She's like, no, no, no, there's something there.
I need to get it.
And I was like, no, no, no, no, I really don't like it.
Please, this is really loud.
She's like, I need to just bear with me.
And she does it, and I don't know how long, for a few minutes.
And then she eventually collapses on the ground, freezing cold, crying.
And I'm just like, I don't know what the hell is going on.
I mean, I thoroughly freaked out, as is everyone else watching, just like, what the hell?
And we, like, warmed her up.
And she was like, what, oh, she was really shaken up.
And she's like, I don't know what that, she said it was something very unpleasant and
dark.
Don't worry, it's gone.
I think you'll be fine in a couple, you'll have the physical symptoms for a couple of
weeks, and you'll be fine.
But she was just like that.
So I was so rattled, A, because the potential that actually I'd had something bad in me
that made someone feel bad, and that she was scared.
That was what, you know, I was like, wait, I thought, you do this, this is a thing.
Now you're terrified?
Like you pulled like some kind of exorcism or something, what the fuck is going on?
So it, like, just the most insane experience.
And frankly, it took me like a few months to sort of emotionally recover from it.
But my ear problem went away about a couple of weeks later, and touchwood, I've not had
any issues since.
So that gives you like hints that maybe there's something out there.
I mean, I don't, again, I don't have an explanation for this.
The most probable explanation was, you know, I was at Burning Man, I was in a very open
state.
Let's just leave it at that.
And, you know, placebo is an incredibly powerful thing and a very not understood thing.
So almost assigning the word placebo to it reduces it down to a way that it doesn't deserve
to be reduced down.
Maybe there's a whole science of what we call placebo.
Maybe placebo is a door to self-healing, you know, and I mean, I don't know what the
problem was.
Like, I was told it was Meniere's.
I don't want to say I definitely had that, because I don't want people who do have
that, and it's a terrible disease,
to think that this is going to be a guaranteed way to fix it for them.
I don't know.
And I also don't, I don't, you're absolutely right to say like using even the word placebo
is like, it comes with this like baggage of, of like frame.
And I don't want to reduce it down.
All I can do is describe the experience and what happened.
I cannot put an ontological framework around it.
I can't say why it happened, what the mechanism was, what the problem even was in the first
place.
I just know that something crazy happened and it was while I was in an open state.
And fortunately for me, it made the problem go away.
But what I took away from it, again, it was part of this, you know, this took me on this
journey of becoming more humble about what I think I know.
Because as I said before, I was like, I was in the, like, Richard Dawkins train of atheism,
in terms of there is no God, and everything like that is bullshit.
We know everything, we know, you know, we know how medicine
works, and it's molecules and chemical interactions and that kind of stuff.
And now it's like, okay, well, there's, there's clearly more for us to understand.
And that doesn't mean that it's unscientific, because, you know, the beauty of
the scientific method is that it still can apply to this situation.
Like I don't see why, you know, I would like to try and test this experimentally.
I haven't really like, you know, I don't know how we would go about doing that.
We'd have to find other people with the same condition, I guess, and like try and repeat
the experiment.
But it doesn't, just because something happens that's sort of out of the realms of our current
understanding, it doesn't mean that it's the scientific method can't be used for it.
Yeah.
I think the scientific method sits in a foundation of those kinds of experiences, because a scientific
method is a process to carve away at the mystery all around us.
And experiences like this is just a reminder that we're mostly shrouded in mystery still.
That's it.
It's just like a humility.
Like we haven't really figured this whole thing out.
But at the same time, we have found ways to act, you know, we're clearly doing something
right because think of the technological scientific advancements, the knowledge that we have,
you know, it would blow people's minds even from a hundred years ago.
Yeah.
And we've even allegedly gone out to space and landed on the moon, although I still haven't,
I have not seen evidence of the earth being round, but I'm keeping an open mind.
Speaking of which, you studied physics in astrophysics.
Just to go to that, just to jump around through the fascinating life you've had, when did
you, how did that come to be?
When did you fall in love with astronomy and space and things like this?
As early as I can remember.
I was very lucky that my mom, and my dad, but particularly my mom, my mom is like the
most nature person.
She is Mother Earth.
It's the only way to describe her.
She's just like Doctor Dolittle, little animals flock to her and just, like, sit and look at
her adoringly.
As she sings.
Yeah.
She just is Mother Earth, and she has always been fascinated by, you know, she doesn't
have any, you know, she never went to university or anything like that.
She's actually phobic of maths.
If I try and get her to, like, you know, I was trying to teach her poker, and she hated
it.
But she's so deeply curious, and that just got instilled in me when, you know, we would
sleep out under the stars whenever it was, you know, the two nights a year when it was
warm enough in the UK to do that.
And we would just lie out there until we fell asleep, looking for satellites, looking for
shooting stars.
And I was just always, I don't know whether it was from that, but I've always naturally
gravitated to like the biggest, the biggest questions, and also the like, the most layers
of abstraction.
I love just like, what's the meta question?
What's the meta question?
And so on.
So I think it just came from that really, and then on top of that like physics, you know,
it also made logical sense in that it was a subject that ticked the box of,
you know, answering these really big picture questions, but it was also extremely
useful.
It like has a very high utility in terms of, I didn't know necessarily, I thought I was
going to become like a research scientist.
My original plan was like, I want to be a professional astronomer.
So it's not just like a philosophy degree that asks the big questions, and it's not
like biology and the path to go to medical school or something like that, which is overly
pragmatic, not overly, is very pragmatic, but this is yeah, physics is a good combination
of the two.
Yeah.
At least for me, it made sense.
And I was good at it.
I liked it.
Yeah.
I mean, it wasn't like I did an immense amount of soul searching to choose it or anything.
It just was like this, it made the most sense.
I mean, you have to make this decision in the UK age 17, which is crazy.
Because in the US, the first year, you do a bunch of stuff, and then you choose your
major.
I think the first few years of college, you focus on the drugs, and only as you get closer
to the end, do you start to think, oh, shit, this wasn't about that, and I owe the government
a lot of money.
How many alien civilizations are out there?
When you looked up at the stars with your mom, and you were counting them, what's your
mom think about the number of alien civilizations?
I actually don't know.
I would imagine she would take the viewpoint of, she's pretty humble, and she knows how
many, she knows there's a huge number of potential spawn sites out there.
So she would...
Spawn sites?
Spawn sites, yeah.
This is our spawn site.
Spawn sites.
Yeah, spawn sites in Polytopia.
We spawned on Earth.
It's...
Yeah.
Spawn sites.
Why does that feel weird to say spawn?
Because it makes me feel like it's...
There's only one source of life, and it's spawning in different locations.
That's why the word spawn, because it feels like life that originated on Earth really
originated here.
Right.
It is unique to this particular...
Yeah.
I mean, in my mind, it doesn't exclude completely different forms of life, and different biochemical
soups can't also spawn, but I guess it implies that there's some spark that is uniform, which
I kind of like the idea of.
And then I get to think about respawning after it dies.
What happens if life on Earth ends?
Is it going to restart again?
Probably not.
It depends.
Maybe Earth is too...
It depends on the type of, what's the thing that kills it off, right?
If it's a paperclip maximiser, to use the classic example, some kind of very self-replicating,
high on the capabilities, very low on the wisdom type thing.
So whether that's gray goo, green goo, nanobots, or just a shitty, misaligned AI that thinks
it needs to turn everything into paperclips, if it's something like that, then it's going
to be very hard for complex life.
Because by definition, a paperclip maximiser is the ultimate instantiation of Moloch, deeply
low complexity, over-optimization on a single thing, sacrificing everything else, turning
the whole world into...
Although, something tells me, if we actually take a paperclip maximiser, it destroys everything.
It's a really dumb system that just envelops the whole of Earth.
And it evolves beyond.
I didn't know that part, but okay, great.
So it becomes a multi-planetary paperclip maximiser?
Well, it just propagates...
I mean, it depends whether it figures out how to jump the vacuum gap.
But again, I mean, this is all silly, because it's a hypothetical thought experiment, which
I think doesn't actually have much practical application to the AI safety problem.
But it's just a fun thing to play around with.
But if by definition, it is maximally intelligent, which means it is maximally good at navigating
the environment around it in order to achieve its goal, but extremely bad at choosing goals
in the first place.
So again, we're talking on this orthogonality thing, right?
It's very low on wisdom, but very high on capability.
Then it will figure out how to jump the vacuum gap between planets and stars and so on, and
thus just turn every atom it gets its hands on into paperclips.
Yeah.
By the way, for people who...
Which is maximum virality, by the way, that's what virality is.
But it does not mean that virality is necessarily all about maximising paperclips.
In that case, it is.
So for people who don't know, this is just a thought experiment example of an AI system
that has a goal and is willing to do anything to accomplish that goal, including destroying
all life on Earth and all human life and all of consciousness in the universe for the goal
of producing a maximum number of paperclips.
Okay.
Or whatever its optimization function was that it was set up.
But don't you think...
Could be making, re-creating lexes.
Maybe it'll tile the universe in lex.
Go on.
I'd like to say did not.
I'm just kidding.
That's better.
Yeah.
That's more interesting than paperclips.
That could be infinitely optimal if I were to say so myself.
But if you ask me, it's still a bad thing because it's permanently capping what the universe
could ever be.
It's like, that's its end to it.
Or achieving the optimal that the universe could ever achieve.
But that's up to...
Different people have different perspectives.
But don't you think within the paperclip world, that would emerge just like in the zeros and
ones that make up a computer, that would emerge beautiful complexities?
It won't suppress...
As you scale to multiple planets and throughout, there'll emerge these little worlds that on
top of the fabric of maximizing paperclips, there would be...
That would emerge like little societies of paperclips.
Well, then we're not describing a paperclip maximizer anymore because by the...
If you think of what a paperclip is, it is literally just a piece of bent iron.
Yes.
Right?
So if it's maximizing that throughout the universe, it's taking every atom it gets its
hand on into somehow turning it into iron or steel and then bending it into that shape
and then done and done.
By definition, like paperclips, there is no way for...
Well, okay.
So you're saying that paperclips somehow will just emerge and create through gravity or
something?
No, no, no.
Because there's a dynamic element to the whole system.
It's not just... it's creating those paperclips, and in the act of creating, there's going to be
a process, and that process will have a dance to it, because it's not like a sequential thing.
There's a whole complex three-dimensional system of paperclips, like string theory, right?
It's supposed to be strings that are interacting in fascinating ways.
I'm sure paperclips are very string-like.
They can be interacting in very interesting ways as you scale exponentially through three-dimensional...
I'm sure the paperclip maximizer has to come up with a theory of everything.
It has to create wormholes, right?
It has to break... it has to understand quantum mechanics.
I love your optimism.
This is where I'd say we're going into the realm of pathological optimism, wherever it's...
I'm sure there'll be...
I think there's an intelligence that emerges from that system.
You're saying that, basically, intelligence is inherent in the fabric of reality and will find a way.
Kind of like... Jeff Goldblum says, life finds a way.
You think life will find a way, even out of this perfectly homogenous, dead soup.
It's not perfectly homogenous.
It has to... it's perfectly maximal in the production.
I don't know why people keep thinking it's... it maximizes the number of paperclips.
That's the only thing.
It's not trying to be homogenous.
It's trying to maximize paperclips.
So you're saying that because it... because... kind of like in the Big Bang,
or it seems like things... there were clusters.
There was more stuff here than there.
That was enough of the patternicity that kickstarted the evolutionary process.
The little weirdnesses that will make it beautiful.
Even out of a flood city emerges.
Interesting.
Okay.
Well, so how does that line up then with the whole heat death of the universe, right?
Because that's another sort of instantiation of this.
It's like everything becomes so far apart and so cold and so perfectly mixed that it's like
homogenous, grayness.
Do you think that even out of that homogenous grayness where there's no
negative entropy, that there's no free energy that we understand, even from that new...
Yeah, the paperclip maximizer or any other intelligence systems will figure out ways
to travel to other universes to create Big Bangs within those universes or through black holes
to create whole other worlds to break what we consider the limitations of physics.
The paperclip maximizer will find a way if a way exists.
And we should be humbled to realize that we don't...
Yeah, but because it just wants to make more paperclips.
So it's going to go into those universes and turn them into paperclips.
Yeah.
But we humans, not humans, but complex systems exist on top of that.
We're not interfering with it.
This complexity emerges from...
The simple base state.
The simple base state.
Yeah, whether it's plank lens or paperclips is the base unit.
Yeah, you can think of the universe as a paperclip maximizer because it's doing some dumb stuff.
Like physics seems to be pretty dumb.
It has... I don't know if you can summarize it...
Yeah, the laws are fairly basic and yet out of them amazing complexity emerges.
And its goals seem to be pretty basic and dumb.
If you can summarize as goals, I mean, I don't know what's a nice way maybe...
Maybe laws of thermodynamics could be...
I don't know if you can assign goals to physics, but if you formulate in the sense of goals,
in the sense of goals, it's very similar to paperclip maximizing in the dumbness of the goals.
But the pockets of complexity as it emerges is where beauty emerges.
That's where life emerges.
That's where intelligence, that's where humans emerge.
And I think we're being very down on this whole paperclip maximizer thing.
No, the reason we hate it...
I think, yeah, because what you're saying is that you think that the force of emergence itself
is another like unwritten, well, not unwritten, but like another baked in law of reality.
And you're trusting that emergence will find a way to even out of seemingly the most
Moloch-y, awful, you know, plain outcome, emergence will still find a way.
I love that as a philosophy.
I think it's very nice.
I would wield it carefully because there's large error bars on that and the certainty of that.
Yeah. How about we build the paperclip maximizer and find out?
Classic, yeah.
Moloch is doing cartwheels.
Man.
Yeah.
But the thing is, it will destroy humans in the process,
which is the thing, which is the reason we really don't like it.
We seem to be really holding on to this whole human civilization thing.
Would that make you sad if AI systems that are beautiful, that are conscious,
that are interesting and complex and intelligent ultimately lead to the death of humans?
Would that make you sad?
If humans led to the death of humans?
Sorry.
Like if they would supersede humans.
Oh, if some AI.
Yeah. AI would end humans.
I mean, that's the reason why I'm like in some ways less emotionally concerned about AI risk,
than, say, you know, bio risk, because at least with AI, there's a chance,
you know, if we're in this hypothetical where it wipes out humans,
but it does it for some higher purpose, it needs our atoms and energy to do something.
At least now, the universe is going on to do something interesting.
Whereas if it wipes out everything, you know,
bio just kills everything on Earth, and that's it.
And there's no more, you know, Earth cannot spawn anything more meaningful
in the few hundred million years it has left, because it doesn't have much time left.
Then, yeah, I don't know. So one of my favourite books I've ever read is Novacene
by James Lovelock, who sadly just died.
He wrote it when he was like 99.
He died aged 102.
So it's a fairly new book.
And he sort of talks about that, that he thinks it's, you know,
sort of building off this Gaia theory where like Earth is like living
some form of intelligence itself, and that this is the next like step, right?
Is this this whatever this new intelligence that is maybe
silicon based as opposed to carbon based goes on to do.
And it's a really sort of in some ways an optimistic but really fatalistic book.
And I don't know if I fully subscribe to it, but it's a beautiful piece to read anyway.
So am I sad by that idea? I think so, yes.
And actually, yeah, this is the reason why I'm sad by the idea,
because if something is truly brilliant and wise and smart and truly super intelligent,
it should be able to figure out abundance.
So if it figures out abundance, it shouldn't need to kill us off.
It should be able to find a way for us.
It should be there's plenty.
The universe is huge.
There should be plenty of space for it to go out and do all the things it wants to do
and like give us a little pocket where we can continue doing our things
and we can continue to do things and so on.
And again, if it's so supremely wise,
it shouldn't even be worried about the game theoretic considerations
that by leaving us alive will then go and create another like super intelligent agent
that it then has to compete against,
because it should be omniwise and smart enough to not have to concern itself with that.
Unless it deems humans to be kind of assholes.
Like the humans are a source of a lose-lose kind of dynamics.
Well, yes and no.
We're not.
Moloch is, that's why I think it's important to say.
But maybe humans are the source of Moloch.
No, I mean, I think game theory is the source of Moloch.
And you know, because Moloch exists in non-human systems as well.
It happens within like agents within a game in terms of like,
it applies to agents, but it can apply to a species that's on an island of animals.
Rats out-competing each other, the ones that, like, massively consume all the resources
are the ones that are going to win out over the more, like, chill, socialized ones.
And so, you know, that creates this Malthusian trap.
Like, Moloch exists in little pockets in nature as well.
So it's not a strictly human thing.
I wonder if it's actually a result of consequences of the invention of predator and prey dynamics.
Maybe AI will have to kill off every organism that-
Now you're talking about killing off competition.
Not competition, but just like the way it's like the weeds or whatever in a beautiful flower garden.
Parasites.
The parasites, yeah, on the whole system.
Now, of course, it won't do that completely.
It'll put them in a zoo like we do with parasites.
It'll ring fence.
Yeah.
And there'll be somebody doing a PhD on like they'll prod humans with a stick and see what they do.
But I mean, in terms of letting us run wild outside of, you know, a
geographically constrained region, it might decide against that.
No, I think there's obviously the capacity for beauty and kindness and non-molyke behavior
against humans. So I'm pretty sure AI will preserve us.
Let me, I don't know if you answered the alien's question.
No, I didn't.
You had a good conversation with Toby Ord about various aspects of the universe.
I think, did he say, now I'm forgetting, but I think he said there's a good chance we're alone.
So the classic, you know, Fermi paradox question is, there are so many spawn points and yet,
you know, it didn't take us that long to go from harnessing fire to sending out radio signals into
space. So surely, given the vastness of space we should be, and you know, even if only a tiny
fraction of those create life and other civilizations too, we should be, the universe
should be very noisy. There should be evidence of Dyson spheres or whatever, you know, like at
least radio signals and so on. But seemingly, things are very silent out there. Now, of course,
it depends on who you speak to. Some people say that they're getting signals all the time and so
on. And like, I don't want to make an epistemic statement on that. But it seems like there's a
lot of silence. And so that raises this paradox. And then say, you know, the Drake equation.
So the Drake equation is like, basically, just a simple thing of like trying to estimate the number
of possible civilizations within the galaxy by multiplying the number of stars created per year
by the number of stars that have planets, planets that are habitable, blah, blah, blah. So all
these like different factors. And then you plug in numbers into that. And you, you know, depending
on like the range of, you know, your lower bound and your upper bound point estimates that
you put in, you get out a number at the end for the number of civilizations. But what Toby
and his crew did differently was, Toby, this is a researcher at the Future of Humanity Institute.
They, instead of, they realized that it's like basically a statistical quirk that if you put
in point estimates, even if you think you're putting in conservative point estimates, because
on some of these variables, the uncertainty is so large, it spans like maybe even a
couple of hundred orders of magnitude. By putting in point estimates, it's always going to
lead to overestimates. And so, by putting stuff on a log scale, or actually they did it on
like a log log scale on some of them. And then like ran the simulation across the whole bucket
of uncertainty across all those orders of magnitude. When you do that, then actually the number comes
out much, much smaller. And that's the more statistically rigorous, you know, mathematically
correct way of doing the calculation. It's still a lot of hand waving. As science goes, it's,
it's like, definitely, you know, I don't know what a good analogy is, but it's hand waving.
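[The statistical quirk Liv describes can be sketched numerically. Here is a minimal Monte Carlo version of the Drake equation, where the ranges for each factor are rough illustrative assumptions, not the actual distributions fitted in the Sandberg, Drexler and Ord paper:]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample each Drake factor log-uniformly across its plausible range of
# orders of magnitude, instead of picking a single point estimate.
def log_uniform(lo_exp, hi_exp):
    return 10.0 ** rng.uniform(lo_exp, hi_exp, n)

R   = log_uniform(0, 2)    # stars formed per year in the galaxy
f_p = log_uniform(-1, 0)   # fraction of stars with planets
n_e = log_uniform(-1, 0)   # habitable planets per planetary system
f_l = log_uniform(-30, 0)  # chance life arises (the huge-uncertainty term)
f_i = log_uniform(-3, 0)   # chance life becomes intelligent
f_c = log_uniform(-2, 0)   # chance intelligence becomes detectable
L   = log_uniform(2, 8)    # detectable lifetime of a civilization, years

# Number of civilizations in the galaxy, one value per simulated draw.
N = R * f_p * n_e * f_l * f_i * f_c * L

# Multiplying single point estimates (here, each factor's mean) suggests a
# crowded galaxy, while the full distribution tells a different story.
point = np.prod([x.mean() for x in (R, f_p, n_e, f_l, f_i, f_c, L)])
print(f"point-estimate N: {point:.3g}")
print(f"P(N < 1), i.e. we're alone in the galaxy: {np.mean(N < 1):.2f}")
```

[With these ranges, the point estimate comes out in the thousands of civilizations, yet most of the probability mass of the full distribution sits below N = 1, which is the counterintuitive "we're probably alone" result.]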
And anyway, when they did this, and then they did a Bayesian update on it as well to like factor
in the fact that there is no evidence that we're picking up because, you know, no evidence is
actually a form of evidence, right? And the long and short of it comes out that
we're roughly around 70% likely to be the only intelligent civilization in our galaxy thus far,
and around 50-50 in the entire observable universe, which sounds so crazily counterintuitive,
but their math is legit. Well, yeah, the math around this particular equation,
which the equation is ridiculous on many levels, but the, the nice, the powerful thing about
the equation is there's the different things, different components that can be estimated,
and the error bars on which can be reduced with science. And hence, throughout, since the
equation came out, the error bars have been coming down on different aspects. And so it
almost kind of, like, gives you a mission to reduce the error bars on these estimates
now over a period of time. And once you do, you can better and better understand, like in the
process of redoing the error bars, you'll get to understand actually what is the right way
to find out where the aliens are, how many of them there are, and all those kinds of things. So
I don't think it's good to use that for an estimation. I think you do have to think from
like, more like from first principles, just looking at what life is on earth. Like, and trying to
understand the very physics-based biology, chemistry, biology-based question of what is life,
maybe computation-based, what the fuck is this thing? Right. And that, like how difficult is
it to create this thing? Right. It's one way to say like how many planets like this are out there,
or all that kind of stuff, but it feels like from our very limited knowledge perspective,
the right ways to think how, how does, what is this thing and how does it originate from,
from very simple non-life things, how does complex, lifelike things emerge from, from a rock
to a bacteria, protein, and these like weird systems that encode information and pass
information from self-replicate, and then also select each other and mutate in interesting
ways such that they can adapt and evolve and build increasingly more complex systems.
Right. Well, it's a form of information processing, right? Right.
Well, it's information transfer, but then also an energy processing, which then results in,
I guess, information processing. Maybe I'm getting bogged down.
Well, it's always doing some modification, and yeah, the input is some energy.
Right. Well, it's able to extract, yeah, extract resources from its environment in order to achieve
a goal. But the goal doesn't seem to be clear. Right. The goal is, well, the goal is to make more
of itself. Yeah. But in a way that increases, I mean, I don't know if evolution is a fundamental
law of the universe, but it seems to want to replicate itself in a way that maximizes the
chance of its survival. Individual agents within an ecosystem do, yes. Yes. Evolution itself doesn't
give a fuck. Right. It's a very, it don't care. It's just like, oh, you optimize it. Well, at least
it's certainly, yeah, it doesn't care about the welfare of the individual agents within it,
but it does seem to, I don't know. I think the mistake is that we're anthropomorphizing.
It's to even try and give evolution a mindset because it is, there's a really great post by
Eliezer Yudkowsky on LessWrong, which is called An Alien God. And he talks about the mistake we
make when we try and put our mind, think through things from an evolutionary perspective as though
we're giving evolution some kind of agency and what it wants. Yeah, worth reading. But
yeah. I would like to say that having interacted with a lot of really smart people that say that
anthropomorphization is a mistake, I would like to say that saying that anthropomorphization is
a mistake is a mistake. I think there's a lot of power in anthropomorphization. If I can only say
that word correctly one time, I think that's actually a really powerful way to reason through
things. And I think people, especially people in robotics seem to run away from it as fast as
possible. Can you give an example of how it helps in robotics? Oh, that our world is a world of humans.
And to see robots as fundamentally just tools runs away from the fact that we live in a dynamic
world of humans. All these game theory systems we've talked about apply to any robot that
ever has to interact with humans. And I don't mean like intimate friendship interaction. I mean
in a factory setting where it has to deal with the uncertainty of humans, all that kind of stuff.
You have to acknowledge that the robot's behavior has an effect on the human just as much as the
human has an effect on the robot. And there's a dance there. And you have to realize that this
entity, when a human sees a robot, this is obvious in a physical manifestation of a robot,
they feel a certain way. They have a fear. They have uncertainty. They have their own
personal life projections. We have pets and dogs and the thing looks like a dog. They have their
own memories of what a dog is like. They have certain feelings. And that's going to be useful
in a safety setting, a safety-critical setting, which is one of the most nontrivial settings for a
robot in terms of how to avoid any kind of dangerous situations. And a robot should
really consider that in navigating its environment. And we humans are right
to reason about how a robot should consider navigating its environment through anthropomorphization.
I also think our brains are designed to think in human terms like game theory,
I think is best applied in the space of human decisions.
Right. Things like AI, AI is they are, we can somewhat, I don't think it's,
the reason I say anthropomorphization we need to be careful with is because there is a danger of
overly applying, overly, wrongly assuming that this artificial intelligence is going to operate in
any similar way to us. Because it is operating on a fundamentally different substrate. Even dogs
or even mice or whatever in some ways, anthropomorphizing them is less of a mistake, I think,
than an AI, even though it's an AI we built and so on. Because at least we know that they're running
from the same substrate. And they've also evolved from the same out of the same evolutionary
process. They've followed this evolution of needing to compete for resources and needing to
find a mate and that kind of stuff. Whereas an AI that has just popped into an existence
somewhere on a cloud server, let's say, or whatever, however it runs and whatever,
whether I don't know whether they have an internal experience, I don't think they
necessarily do. In fact, I don't think they do. But the point is is that to try and
apply any kind of modeling of thinking through problems and decisions in the same way that we
do has to be done extremely carefully because they're so alien, their method of whatever their
form of thinking is. It's just so different because they've never had to evolve in the same way.
Yeah, beautifully put. I was just playing devil's advocate. I do think in certain contexts,
anthropomorphization is not going to hurt you. Engineers run away from it too fast.
I can see that. For the most point, you're right. Do you have advice for young people
today, like the 17-year-old that you were, of how to live a life you can be
proud of, how to have a career you can be proud of in this world full of Molochs?
Think about the win-wins. Look for win-win situations and be careful not to
overly use your smarts to convince yourself that something is win-win when it's not.
That's difficult. I don't know how to advise people on that because it's something I'm still
figuring out myself. But have that as a default MO. Don't see everything as a zero-sum game.
Try to find the positive-sumness, and if there doesn't seem to be any,
consider playing a different game. So I would suggest that. Do not become a professional poker
player, because people always ask, they're like, oh, she's a pro. I want to do that too.
Fine. You could have done it back when I started out, but it was a very different situation
back then. Poker is a great game to learn in order to understand the ways to think.
I recommend people learn it, but don't try and make a living from it these days. It's
very, very difficult, almost to the point of being impossible. Then really be aware of how much time
you spend on your phone and on social media and really try and keep it to a minimum. Be aware that
basically every moment that you spend on it is bad for you. It doesn't mean to say you can never do
it, but just have that running in the background. I'm doing a bad thing for myself right now.
I think that's the general rule of thumb.
Of course, about becoming a professional poker player,
if there is a thing in your life that's like that and nobody can convince you otherwise,
just fucking do it. Don't listen to anyone's advice. Find a thing that you can't be talked
out of too. I like that. You were a lead guitarist in a metal band. Did I write that down from
something? What did you do it for? The performing? Was it the music of it? Was it just being a
rock star? Why did you do it? We only ever played two gigs. It wasn't very famous or anything like
that. I was very into metal. It was my entire identity from the age of 16 to 23.
What's the best metal band of all time? Don't ask me that. It's so hard to answer.
I had a long argument with... I'm a guitarist, more like a classic rock guitarist. I've had friends
who are very big Pantera fans. There was often arguments about what's the better metal band,
Metallica versus Pantera. This is a more 90s maybe discussion. But I was always on the side of Metallica,
both musically and in terms of performance and the depth of lyrics and so on.
But basically everybody was against me because if you're a true metal fan,
I guess the idea goes is you can't possibly be a Metallica fan. Metallica is pop. It's like
they sold out. Metallica are metal. They were the... Again, you can't say who was the godfather
of metal, blah, blah, blah. But they were so groundbreaking and so brilliant. You've named
literally two of my favorite bands. When you ask that question, or who are my favorites,
those were two that came up. A third one is Children of Bodom, who I just think... They just
tick all the boxes for me. Nowadays, I feel a repulsion to that ranking thing. I was like that myself.
I'd be like, who do you prefer more? Come on, you have to rank them. But it's this false
zero-sumness. Like, why? They're so additive. There's no conflict there.
Although, when people ask that kind of question about anything, movies,
I feel like it's hard work and it's unfair, but you should pick one.
That's actually the same kind of... It's like a fear of commitment. When people ask me what's
your favorite band, it's like... But it's good to pick. Exactly. And thank you for the tough
question. Well, maybe not in a context when a lot of people are listening.
Yeah, I was just like, what? Why does this matter? No, it does.
Are you still into metal?
Funny enough, I was listening to a bunch before I came over here.
Oh, like, do you use it for motivation or get you in a certain way?
Yeah, I was weirdly listening to 80s hair metal before I came.
Does that count as metal?
I think so. It's like proto-metal and it's happy. It's optimistic, happy proto-metal.
Yeah, I mean, these things, you know, all these genres bleed into each other. But yeah, sorry
to answer your question about guitar playing. My relationship with it was kind of weird in that
I was deeply uncreative. My objective would be to hear some really hard technical solo and then
learn it, memorize it, and then play it perfectly. But I was incapable of trying to write my own
music. Like, the idea was just absolutely terrifying. But I was also just thinking, I was like,
it'd be kind of cool to actually try starting a band again and getting back into it and write.
But it's scary.
It's scary. I mean, I put out some guitar playing, just other people's covers, like I play Comfortably
Numb on the internet. It's scary too. It's scary putting stuff out there. And I had this similar
kind of fascination with technical playing both on piano and guitar. One of the reasons
I started learning guitar is Ozzy Osbourne, the Mr. Crowley solo. It's one of the first solos I
learned. There's a beauty to it. There's a lot of beauty to it.
It's tapping, right?
There's some tapping, but it's just really fast.
Beautiful, like arpeggios.
Yeah, arpeggios. Yeah. But there's a melody that you can hear through it, but there's also build
up. It's a beautiful solo, but it's also technically just visually the way it looks
when a person's watching. You feel like a rock star playing it. But it's ultimately
technical. You're not developing the part of your brain that I think requires you to generate
beautiful music. It is ultimately technical in nature. And so that took me a long time to
let go of that and just be able to write music myself. And that's a different journey, I think.
I think of that journey as a little bit more inspired in the blues world, for example,
or improvisation is more valued, obviously in jazz and so on. But I think ultimately it's a more
rewarding journey because your relationship with the guitar then becomes a kind of escape
from the world where you can create. I mean, creating stuff is...
And it's something you work with, because my relationship with my guitar was like it was
something to tame and defeat. Yeah, the challenge. Which was kind of what my whole personality
was back then. I was just very competitive, very just like must bend this thing to my will.
Whereas writing music, it's like a dance. You work with it.
You work with it. But I think because of the competitive aspect, for me at least,
that's still there, which creates anxiety about playing publicly or all that kind of stuff.
I think there's just like a harsh self-criticism within the whole thing.
It's really tough. I want to hear some of your stuff.
I mean, there's certain things that feel really personal. And on top of that, as we talked about
poker offline, there's certain things that you get to a certain height in your life. And that
doesn't have to be very high, but you get to a certain height. And then you put it aside for a bit.
And it's hard to return to it because you remember being good. And it's hard to...
Like you being at a very high level in poker, it might be hard for you to return to poker
every once in a while. And you enjoy it knowing that you're just not as sharp as you used to be
because you're not doing it every single day. That's something I always wonder with,
I mean, even just like in chess with Kasparov, some of these greats just returning to it.
It's almost painful. Yes, I can... Yeah.
And I feel that way with guitar too, because I used to play every day a lot.
So returning to it is painful because it's like accepting the fact that this whole
ride is finite and you have a prime. There's a time when you're really good and now it's over.
And now... we're in a different chapter of life. But I miss that.
But you can still discover joy within that process. It's been tough, especially with some
level of, like, as people get to know you, and people film stuff, you don't have the privacy
of just sharing something with a few people around you. Yeah.
That's a beautiful privacy. That's a good point.
With the internet, it's just disappearing. Yeah, that's a really good point.
Yeah. But all those pressures aside, if you really... You can step up and still enjoy the
fuck out of a good musical performance. What do you think is the meaning of this whole thing?
What's the meaning of life? Wow.
It's in your name, as we talked about. Do you feel the requirement to have to live up to your name?
Because live? Yeah. No, because I don't see it. I mean, my... Well, again, it's kind of like...
No, I don't know. Because my full name is Olivia. Yeah.
So I can retreat in that and be like, oh, Olivia, what does that even mean? Live up to live.
No, I can't say I do because I've never thought of it that way.
And then your name backwards is evil, as we also talked about.
Well, there's like layers. I mean, I feel the urge to live up to that,
to be the inverse of evil or even better. Because I don't think...
Is the inverse of evil good or is good something completely separate to that?
I think... My intuition says it's the latter, but I don't know. Anyway, getting in the weeds.
What is the meaning of all this? Of life.
Why are we here? I think to
explore, have fun and understand and make more of here and to keep the game going.
Of here? More of here? More of... More of this? Whatever this is? More of experience.
Just to have more of experience and ideally, positive experience. And more...
I guess to try and put it in a vaguely scientific term:
make it so that the program required, the length of code required to describe the universe,
is as long as possible, and highly complex, and therefore interesting.
Because again, I know we banged the metaphor to death, but like
tiled with X, tiled with paper clips doesn't require that much of a code to describe.
Obviously, maybe something emerges from it. But that steady state, assuming a steady state,
it's not very interesting. Whereas it seems like our universe is over time becoming more
and more complex and interesting. There's so much richness and beauty and diversity on this earth.
And I want that to continue and get more. I want more diversity, in the very best sense of that
word. That, to me, is the goal of all this. Yeah. And somehow have fun in the process.
Because we do create a lot of fun things. In this creative force and all the
beautiful things we create, somehow there's like a funness to it. And perhaps that has to do with
the finiteness of life, the finiteness of all these experiences, which is what makes them
kind of unique. Like the fact that they end, there's this, whatever it is, falling in love or
creating a piece of art, or creating a bridge, or creating a rocket, or creating a,
I don't know, just the businesses that build something or solve something.
The fact that it is born and it dies somehow embeds it with fun, with joy,
for the people involved. I don't know what that is, the finiteness of it.
It can do. Some people struggle with the, you know, I mean, a big thing I think that one has
to learn is being okay with things coming to an end. And in terms of projects and so on,
people cling on to things beyond what they're meant to be doing, beyond what is reasonable.
And I'm going to have to come to terms with this podcast coming to an end.
I really enjoy talking to you. I think it's obvious, as we've talked about many times,
you should be doing a podcast. You're already doing a lot of stuff publicly for the
world, which is awesome. And you're a great educator, you're a great mind, you're a great
intellect. But it's also this whole medium of just talking. It is good. It's a fun one.
It really is good. And it's just, it's nothing but like, it's just so much fun.
And you can just get into so many, yeah, there's this space to just explore and see what comes
and emerges. And yeah. Yeah, to understand yourself better. And if you're talking to others to
understand them better, and together with them, I mean, you should do your own podcast,
but you should also do a podcast with C as we talked about. The two of you have such
different minds that melt together in just hilarious ways, fascinating ways, just
the tension of ideas there is really powerful. But in general, I think you got a beautiful
voice. So thank you so much for talking today. Thank you for being a friend. Thank you for honoring
me with this conversation and with your valuable time. Thanks, Liv. Thank you. Thanks for listening
to this conversation with Liv Boeree. To support this podcast, please check out our sponsors in
the description. And now let me leave you with some words from Richard Feynman. I think it's
much more interesting to live not knowing than to have answers, which might be wrong. I have
approximate answers and possible beliefs and different degrees of uncertainty about different
things. But I'm not absolutely sure of anything. And there are many things I don't know anything
about, such as whether it means anything to ask why we're here. I don't have to know the answer.
I don't feel frightened not knowing things by being lost in a mysterious universe without any
purpose, which is the way it really is as far as I can tell. Thank you for listening and hope to see
you next time.