
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.



we can actually figure out where are the aliens out there in spacetime by being clever about the
few things we can see, one of which is our current date. And so now that you have this living cosmology,
we can tell the story that the universe starts out empty. And then at some point,
things like us appear very primitive. And then some of those stop being quiet and expand.
And then for a few billion years, they expand, and then they meet each other. And then for the
next 100 billion years, they commune with each other. That is, the usual models of cosmology say
that in roughly 150 billion years, the expansion of the universe will happen so much that all
you'll have left is some galaxy clusters that are sort of disconnected from each other.
But before then, they will interact. There will be this community of all the grabby alien
civilizations. And each one of them will hear about and even meet thousands of others.
And we might hope to join them someday and become part of that community.
The following is a conversation with Robin Hanson, an economist at George Mason University,
and one of the most fascinating, wild, fearless, and fun minds I've ever gotten a chance to accompany
for a time in exploring questions of human nature, human civilization, and alien life out there in
our impossibly big universe. He is the co-author of a book titled The Elephant in the Brain:
Hidden Motives in Everyday Life, the author of The Age of Em: Work, Love, and Life When Robots Rule the Earth,
and a fascinating recent paper I recommend on, quote, grabby aliens, titled If Loud Aliens Explain
Human Earliness, Quiet Aliens Are Also Rare. This is the Lex Fridman podcast. To support it,
please check out our sponsors in the description. And now, dear friends, here's Robin Hanson.
You are working on a book about, quote, grabby aliens. This is a technical term, like the big
bang. So what are grabby aliens? Grabby aliens expand fast into the universe, and they change
stuff. That's the key concept. So if they are out there, we would notice. That's the key idea. So
the question is, where are the grabby aliens? So Fermi's question is, where are the aliens? And
we can vary that in two versions, right? Where are the quiet, hard to see aliens, and where are the
big loud grabby aliens? So it's actually hard to say where all the quiet ones are, right?
Right. There could be a lot of them out there, because they're not doing much. They're not
making a big difference in the world. But the grabby aliens, by definition, are the ones you
would see. We don't know exactly what they do where they've gone. But the idea is they're in
some sort of competitive world where each part of them is trying to grab more stuff and do something
with it. And, you know, almost surely whatever is the most competitive thing to do with all the stuff
they grab, isn't to leave it alone the way it started, right? So we humans, when we go around
the earth and use stuff, we change it. We turn a forest into a farmland, turn a harbor into a city.
So the idea is aliens would do something with it. And so we're not exactly sure what it would
look like, but it would look different. So somewhere in the sky, we would see big spheres
of different activity, where things had been changed because they had been there.
Expanding spheres. Right. So as you expand, you aggressively interact and change the environment.
So the word grabby versus loud, you're using them sometimes synonymously, sometimes not.
Grabby, to me, is a little bit more aggressive. What does it mean to be loud? What does it mean
to be grabby? What's the difference? And loud in what ways? A visual? Is it sound? Is it some other
physical phenomenon like gravitational waves? Are you using this kind of in a broad philosophical
sense or there's a specific thing that it means to be loud in this universe of ours?
My co-authors and I put together a paper with a particular mathematical model.
And so we use the term grabby aliens to describe that more particular model. And the idea is,
it's a more particular model of the general concept of loud. So loud would just be the
general idea that they would be really obvious. So grabby is the technical term. Is it in the
title of the paper? It's in the body. The title is actually about loud and quiet.
So the idea is there's, you want to distinguish your particular model of things from the general
category of things everybody else might talk about. So that's how we distinguish.
The paper title is If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare.
God, this is such a good abstract. If life on Earth had to achieve n hard steps to reach
humanity's level, then the chance of this event rose as time to the nth power. So we'll talk about power,
we'll talk about linear increase. So what is the technical definition of grabby?
How do you envision grabbiness? And why, in contrast, aren't humans grabby?
So where's that line? Is it well definable? What is grabby? What is non-grabby?
We have a mathematical model of the distribution of advanced civilizations, i.e. aliens in space and
time. That model has three parameters, and we can set each one of those parameters from data,
and therefore we claim this is actually what we know about where they are in space-time.
So the key idea is they appear at some point in space-time, and then after some short delay,
they start expanding, and they expand at some speed. And the speed is one of those parameters.
That's one of the three. And the other two parameters are about how they appear in time.
That is, they appear at random places, and they appear in time according to a power law,
and that power law has two parameters, and we can fit each of those parameters to data.
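The model described here (random origin locations, power-law origin times, expansion at a fixed speed) can be sketched as a toy Monte Carlo. This is an illustrative sketch under made-up units and parameter values, not the paper's calibrated model; the function name, unit box, and defaults are invented for the example:

```python
import random

def sample_origins(num_tries, n_power=6, box=1.0, v=0.5, seed=0):
    """Toy sketch of the grabby-aliens model (illustrative, not the
    paper's calibrated version). Candidate civilizations appear at random
    positions in a unit box, at times drawn so that P(origin by t) is
    proportional to t**n_power. A candidate is suppressed if an earlier
    civilization's sphere, expanding at speed v, has already reached it."""
    rng = random.Random(seed)
    survivors = []  # (t, x, y, z) for civilizations that actually form
    # Inverse-CDF sampling: u**(1/n) has CDF t**n on [0, 1].
    times = sorted(rng.random() ** (1.0 / n_power) for _ in range(num_tries))
    for t in times:
        x, y, z = (rng.random() * box for _ in range(3))
        blocked = False
        for (t0, x0, y0, z0) in survivors:
            dist = ((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2) ** 0.5
            if dist <= v * (t - t0):  # an earlier sphere already got here
                blocked = True
                break
        if not blocked:
            survivors.append((t, x, y, z))
    return survivors
```

With v = 0 every candidate survives; as v grows, later candidates increasingly land inside older spheres, which is the sense in which a filled-up universe leaves no room for latecomers.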
And so then we can say, now we know. We know the distribution of advanced civilizations in
space and time. So we are right now a new civilization, and we have not yet started to
expand. But plausibly, we would start to do that within, say, 10 million years of the
current moment. That's plenty of time. And 10 million years is a really short duration
in the history of the universe. So we are at the moment a sort of random sample of the kind of
times at which an advanced civilization might appear, because we may or may not become grabby,
but if we do, we'll do it soon. And so our current date is a sample, and that gives us
one of the other parameters. The second parameter is the constant in front of the power law,
and that's derived from our current date. So power law, what is the n in the power law?
That's the complicated thing to explain. Right.
Advanced life appeared by going through a sequence of hard steps. So starting with very
simple life, and here we are at the end of this process at pretty advanced life. And so we had
to go through some intermediate steps, such as sexual reproduction, photosynthesis,
multicellular animals. And the idea is that each of those steps was hard.
Evolution just took a long time searching in a big space of possibilities to find
each of those steps. And the challenge was to achieve all of those steps by a deadline of when
the planets would no longer host simple life. And so Earth has been really lucky compared to
all the other billions of planets out there, in that we managed to achieve all these steps
in the short time of the five billion years that Earth can support simple life.
So not all steps, but a lot of them, because we don't know how many steps there are before
you start the expansion. These are all the steps from the birth of life to the initiation of major
expansion. Right. So we're pretty sure that that would happen really soon, so it couldn't be
the same sort of hard step as the last ones in terms of taking a long time. So
when we look at the history of Earth, we look at the durations of the major things that have happened,
and that suggests that there's roughly, say, six hard steps that happened, say between three and 12,
and that we have just achieved the last one that would take a long time.
Which is?
Well, we don't know. But whatever it is, we've just achieved the last one.
Are we talking about humans or aliens here? So let's talk about some of these steps. So
Earth is really special in some way. We don't exactly know the level of specialness. We don't
really know which steps were the hardest or not, because we just have a sample of one.
But you're saying that there's three to 12 steps that we have to go through
to get to where we are that are hard steps, hard to find by something that took a long time
and is unlikely. There's a lot of ways to fail. There's a lot more ways to fail than to succeed.
The first step would be sort of the very simplest form of life of any sort.
And then we don't know whether that first sort is the first sort that we see in the
historical record or not. But then some other steps are, say, the development of photosynthesis,
the development of sexual reproduction. There's the development of eukaryotic cells,
which are a certain kind of complicated cell that seems to have only appeared once.
And then there's multicellularity that is multiple cells coming together to large organisms like us.
And in this statistical model of trying to fit all these steps into a finite window,
the model actually predicts that these steps could be of varying difficulties. That is,
they could each take different amounts of time on average. But if you're lucky enough that they
all appear in a very short time, then the durations between them will be roughly equal.
And the time remaining left over in the rest of the window will also be the same length.
So we at the moment have roughly a billion years left on Earth until complex life like us would no
longer be possible. Life appeared roughly 400 million years after the very first time life
was possible at the very beginning. So those two numbers right there give you the rough estimate
of six hard steps. Just to build up an intuition here. So we're trying to create a simple mathematical
model of how life emerges and expands in the universe. And there's a section in this paper,
How Many Hard Steps?, question mark. Right. The two most plausibly diagnostic Earth durations seem
to be the one remaining after now before Earth becomes uninhabitable for complex life. So you
estimate how long Earth lasts, how many hard steps there are, the windows for doing different hard
steps. And you can, sort of like queuing theory, mathematically estimate
the taking of the hard steps with a sort of coldly mathematical
look. If pre-expansionary life requires a number of steps, what is the probability of taking
those steps on an Earth that lasts a billion years or two billion years or five billion years
or 10 billion years? And you say solving for n using the observed durations of 1.1 and 0.4
then gives n values of 3.9 and 12.5 (range 5.7 to 26), suggesting a middle estimate of at least six.
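The quoted n values can be reproduced with a simple back-of-the-envelope sketch: if n hard steps all succeed within a habitable window W, the n+1 resulting intervals (the n steps plus the leftover at the end) have roughly equal expected lengths, so an observed interval of length d suggests n ≈ W/d − 1. The 5.4-billion-year window below is an assumption chosen for illustration that matches the quoted figures, not a number taken from the paper:

```python
def hard_steps_estimate(window_gyr, duration_gyr):
    """If n hard steps all succeed inside a window W, the n+1 resulting
    intervals (n steps plus the leftover at the end) have roughly equal
    expected lengths, so an observed interval d suggests n = W/d - 1."""
    return window_gyr / duration_gyr - 1

W = 5.4           # assumed habitable window for Earth, in Gyr (illustrative)
remaining = 1.1   # Gyr left before Earth becomes uninhabitable for complex life
first_life = 0.4  # Gyr from first habitability to first life

n_low = hard_steps_estimate(W, remaining)    # about 3.9
n_high = hard_steps_estimate(W, first_life)  # 12.5
```

The geometric middle of 3.9 and 12.5 is around seven, in the same ballpark as the "roughly six" figure used throughout the conversation.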
That's where you said six hard steps. Right. Just to get to where we are. Right. We started at the
bottom. Now we're here. And that took six steps on average. The key point is on average, these things
on any one random planet would take trillions of years, just a really long time. And so we're
really lucky that they all happened really fast in a short time before our window closed. And the
chance of that happening in that short window goes as that time period to the power of the
number of steps. And so that was where the power we talked about before it came from. And so that
means in the history of the universe, we should overall roughly expect advanced life to appear
as a power law in time. So that very early on, there was very little chance of anything appearing.
And then later on, as things appear, other things are appearing somewhat closer to them in time
because they're all going as this power law. What is the power law? For people who are not
sort of math inclined, can you describe what a power law is?
So say the function x is linear and x squared is quadratic. So it's the power of two. If we make
x to the three, that's cubic or the power of three. And so x to the sixth is the power of six.
And so we'd say life appears in the universe on a planet like Earth in that proportion to the time
that it's been ready for life to appear. And that over the universe, in general,
it'll appear at roughly a power law like that. What is the x? What is n? Is it the number of
hard steps? Yes, the number of hard steps. So that's the idea. So it's like, if you're gambling
and you're doubling up every time, this is the probability of you just keep winning.
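The doubling-up analogy is just the power-law scaling: the chance of finishing n hard steps by a deadline goes as the deadline to the nth power. A minimal sketch, using the sixth power and the thousandfold star-lifetime ratio that come up elsewhere in this conversation:

```python
def relative_chance(t_ratio, n_steps=6):
    """Chance of completing n hard steps by a deadline scales as t**n,
    so a window t_ratio times longer is t_ratio**n_steps times more
    likely to see all the steps completed."""
    return t_ratio ** n_steps

# A star lasting 5 trillion years vs. our Sun's ~5 billion: 1000x the window.
print(relative_chance(1000))  # 1000**6 = 10**18, a billion billion
```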
So it gets very unlikely very quickly. And so we're the result of this unlikely chain
of successes. It's actually a lot like cancer. So the dominant model of cancer in an organism
like each of us is that we have all these cells. And in order to become cancerous,
a single cell has to go through a number of mutations. And these are very unlikely mutations.
And so any one cell is very unlikely to have all these mutations happen by the time your
life spans over. But we have enough cells in our body that the chance of any one cell producing
cancer by the end of your life is actually pretty high, more like 40%. And so the chance of cancer
appearing in your lifetime also goes as a power law, this power of the number of mutations that's
required for any one cell in your body to become cancerous. The longer you live, the likely you
are to have cancer cells. And the power is also roughly six. That is, the chance of you getting
cancer is roughly the power of six of the time you've been since you were born. It is perhaps not
lost on people that you're comparing the power laws of the survival or the arrival of the human
species to cancerous cells. The same mathematical model. But of course, we might have a different
value assumption about the two outcomes. But of course, from the point of view of cancer,
it's more similar. From the point of view of cancer, it's a win-win. We'll both get to thrive,
I suppose. It is interesting to take the point of view of all kinds of life forms on earth,
of viruses, of bacteria. They have a very different view. It's like the Instagram channel,
Nature is Metal. The ethic under which nature operates doesn't often coincide, correlate with
human morals. It seems cold and machine-like in the selection process that it performs.
I am an analyst. I'm a scholar, an intellectual. And I feel I should carefully distinguish
predicting what's likely to happen and then evaluating or judging what I think would be
better to happen. And it's a little dangerous to mix those up too closely because then we can
have wishful thinking. And so I try typically to just analyze what seems likely to happen
regardless of whether I like it or that we do anything about it. And then once you see a rough
picture of what's likely to happen if we do nothing, then we can ask, well, what might we
prefer? And ask, where could the levers be to move it at least a little toward what we might
prefer? And that's useful. But often doing that just analysis of what's likely to happen if
we do nothing offends many people. They find that dehumanizing or cold or metal, as you say,
to just say, well, this is what's likely to happen. And it's not your favorite, sorry, but
maybe we can do something, but maybe we can't do that much.
This is very interesting that the cold analysis, whether it's geopolitics, whether it's medicine,
whether it's economics, sometimes misses some very specific aspect of
human condition. Like, for example, when you look at a doctor and the act of a doctor helping
a single patient, if you do the analysis of that doctor's time and cost of the medicine or the
surgery or the transportation of the patient, this is the Paul Farmer question. Is it worth
spending 10, 20, $30,000 on this one patient? When you look at all the people that are suffering
in the world, that money could be spent so much better. And yet, there's something about human
nature that wants to help the person in front of you. And that is actually the right thing to do,
despite the analysis. And sometimes when you do the analysis, there's something about the human
mind that allows you to not take that leap, that irrational leap to act in this way,
that the analysis explains it away. For example, the US government, the DOT, Department of
Transportation, puts a value of, I think, like $9 million on a human life. And the moment you
put that number on a human life, you can start thinking, well, okay, I can start making decisions
about this or that with a sort of cold economic perspective. And then you might lose, you might
deviate from a deeper truth of what it means to be human somehow. You have to dance, because then
if you put too much weight on the anecdotal evidence on these kinds of human emotions,
then you're going to lose, you could also probably more likely deviate from truth.
But there's something about that cold analysis. Like, I've been listening to a lot of people
coldly analyze wars, war in Yemen, war in Syria, Israel, Palestine, war in Ukraine.
And there's something lost when you do a cold analysis of why something happened.
When you talk about energy, talking about sort of conflict, competition over resources.
When you talk about geopolitics, sort of models of geopolitics and why a certain war happened,
you lose something about the suffering that happens. I don't know. It's an interesting
thing because you're both, you're exceptionally good at models in all domains, literally.
But also there's a humanity to you. So it's an interesting dance. I don't know if you can
comment on that dance. Sure. It's definitely true, as you say, that for many people, if you are accurate
in your judgment of, say, for a medical patient, what's the chance that this treatment might help?
And what's the cost? And compare those to each other. And you might say,
this looks like a lot of cost for a small medical gain. And at that point, knowing that fact that
might take the wind out of your sails, you might not be willing to do the thing that
maybe you feel is right anyway, which is still to pay for it. And then somebody knowing that might
want to keep that news from you, not tell you about the low chance of success or the high cost
in order to save you this tension, this awkward moment where you might fail to do what they and
you think is right. But I think the higher calling, the higher standard to hold you to,
which many people can be held to, is to say, I will look at things accurately, I will know the
truth, and then I will also do the right thing with it. I will be at peace with my judgment
about what the right thing is in terms of the truth. I don't need to be lied to in order to
figure out what the right thing to do is. And I think if you do think you need to be lied to in
order to figure out what the right thing to do is, you're at a great disadvantage because
then people will be lying to you, you will be lying to yourself and you won't be as
effective at achieving whatever good you are trying to achieve.
But getting the data, getting the facts is step one, not the final step. Absolutely.
So it's a, I would say having a good model, getting the good data is step one and it's a burden
because you can't just use that data to arrive at sort of the easy, convenient thing. You have
to really deeply think about what is the right thing. You can't use the, so the dark aspect of
data of models is you can use it to excuse away actions that aren't ethical. You can use data
to basically excuse away anything. But not looking at data lets you allow yourself to pretend and think
that you're doing good when you're not. Exactly. But it is a burden. It doesn't excuse you from
still being human and deeply thinking about what is right, that very kind of gray area,
that very subjective area. That's part of the human condition. But let us return for a time to
aliens. So you started to define sort of the model, the parameters of grabbiness,
as we approach grabbiness. So what happens? So again, there are three parameters:
there's the speed at which they expand, there's the rate at which they appear in time,
and that rate has a constant and a power. So we've talked about the history of life on Earth,
suggests that the power is around six, but maybe three to 12. We can say that constant comes from our
current date, sort of sets the overall rate. And the speed, which is the last parameter,
comes from the fact that when we look in the sky, we don't see them. So the model predicts very
strongly that if they were expanding slowly, say 1% of the speed of light, our sky would be full
of vast spheres that were full of activity. That is, at a random time when a civilization is first
appearing, if it looks out into its sky, it would see many other grabby alien civilizations in the
sky. And they would be much bigger than the full moon. They'd be huge spheres in the sky,
and they would be visibly different. We don't see them.
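The visibility argument can be made concrete with a small warning-time sketch: a civilization at distance D expanding toward us at speed v arrives after time D/v, while its light arrives after D/c, so we first see it D/v − D/c before it gets here. The 300-million-light-year distance below is a hypothetical figure for illustration:

```python
def warning_time_myr(distance_mly, v_frac_of_c):
    """Warning time between first sighting and arrival, with distance in
    millions of light-years and speed as a fraction of c. In these units
    light covers the distance in distance_mly million years (c = 1), and
    the expansion front covers it in distance_mly / v_frac_of_c."""
    return distance_mly / v_frac_of_c - distance_mly

print(warning_time_myr(300, 0.5))   # 300.0 Myr of warning at half lightspeed
print(warning_time_myr(300, 0.99))  # ~3 Myr: near lightspeed, almost no warning
```

Slow expanders would be visible for eons before arriving, which is why seeing nothing in the sky pushes the inferred expansion speed up toward a third of lightspeed or more.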
Can we pause for a second? Okay. There's a bunch of hard steps that Earth had to pass
to arrive at this place we are currently, which we're starting to launch rockets out into space.
We're kind of starting to expand a bit, very slowly. Okay. But this is like the birth. If
you look at the entirety of the history of Earth, we're now at this precipice of like expansion.
We could. We might not choose to, but if we do, we will do it in the next 10 million years.
10 million. Wow. Time flies when you're having fun.
I was taking more time on the cosmological scale. It might be only 1,000 years. But the
point is, even if it's up to 10 million, that hardly makes any difference to the model. So I
might as well give you 10 million. This makes me feel, I was so stressed about planning what
I'm going to do today. And now you've got plenty of time. Plenty of time. I just need to be
generating some offspring quickly here. Okay. So in this moment, this 10 million year gap
or window when we start expanding, and you're saying, okay, so this is an interesting moment
where there's a bunch of other alien civilizations that might, at some point in the
history of the universe, arrive at this moment we're at. They passed all the hard steps.
There's a model for how likely it is that that happens. And then they start expanding. And you
think of an expansion is almost like a sphere. Right. That's when you say speed, we're talking
about the speed of the radius growth. Exactly, the speed at which the surface expands.
Okay. And so you're saying that there is some speed for that expansion, average speed. And then
you can play with that parameter. And if that speed is super slow, then maybe that explains
why we haven't seen anything. If it's super fast, well, slow would create the puzzle:
slow predicts we would see them, but we don't see them. Okay. And so the way to explain that
is that they're fast. So the idea is, if they're moving really fast, then we don't see them till
they're almost here. Okay. This is counterintuitive. All right. Hold on a second. So I think this
works best when I say a bunch of dumb things. Okay. And then you elucidate the full complexity
and the beauty of the dumbness. Okay. So there's these spheres out there in the universe that
are made visible because they're sort of using a lot of energy. So they're generating a lot of
light stuff. They're changing things. They're changing things. And change would be visible
a long way off. They would take apart stars, rearrange them, restructure galaxies. They
would just be big, huge stuff. Okay. If they're expanding slowly, we would see a lot of them
because the universe is old. Is old enough to where we would see the... We're assuming we're
just typical, maybe at the 50th percentile of them. So like half of them have appeared so far,
the other half will still appear later. And the math of our best estimate is that
they appear roughly once per million galaxies. And we would meet them in roughly a billion years
if we expanded out to meet them. So we're looking at a grabby aliens model, 3D sim, right?
That's the actual name of the video. By the time we get to 13.8 billion years,
the fun begins. Okay. So this is... We're watching a three-dimensional sphere rotating. I presume
that's the universe. And then grabby aliens, they're expanding and filling that universe
with all kinds of fun. Pretty soon it's all full. It's full. So that's how the grabby aliens come
in contact, first of all with other aliens, and then with us humans. The following is a simulation
of the grabby aliens model of alien civilizations. Civilizations are born that expand outwards at
constant speed. A spherical region of space is shown. By the time we get to 13.8 billion years,
this sphere will be about 3,000 times as wide as the distance from the Milky Way to Andromeda.
Okay. This is fun. It's huge. Okay. It's huge. All right. So why don't we see...
We're one little tiny, tiny, tiny, tiny dot in that giant, giant sphere.
Right. Why don't we see any of the grabby aliens?
It depends on how fast they expand. So you could see that if they expanded at the speed of light,
you wouldn't see them until they were here. So like out there if somebody is destroying the
universe with a vacuum decay, there's this doomsday scenario where somebody somewhere could change the
vacuum of the universe and that would expand at the speed of light and basically destroy
everything it hit. But you'd never see that until it got here because it's expanding at the speed of
light. If you're expanding really slow, then you see it from a long way off. So the fact we don't
see anything in the sky tells us they're expanding fast, say over a third the speed of light. And
that's really, really fast. But that's what you have to believe if you look out and you don't see
anything. Now you might say, well, maybe I just don't want to believe this whole model. Why should
I believe this whole model at all? And our best evidence why you should believe this model is our
early date. We are right now at almost 14 billion years into the universe on a planet around a star
that's roughly five billion years old. But the average star out there will last roughly five
trillion years. That is a thousand times longer. And remember that power law, it says that the chance
of advanced life appearing on a planet goes as the sixth power of the time. So if a planet
lasts a thousand times longer, then the chance of it appearing on that planet, if everything
would stay empty, is at least a thousand to the sixth power, or 10 to the 18. So there's an enormous, overwhelming
chance that if the universe would just stay set and empty and waiting for advanced life to appear,
when it would appear would be way at the end of all these planet lifetimes. That is, on long-lived
planets near the end of their lifetimes, trillions of years into the future. But we're really early
compared to that. And our explanation is at the moment, as you saw in the video, the universe
is filling up in roughly a billion years, it'll all be full. And at that point, it's too late for
advanced life to show up. So you had to show up now before that deadline. Okay, can we break
that apart a little bit? Okay. Or linger on some of the things you said. So with the power law,
the things we've done on earth, the model you have says that it's very unlikely. Like we're lucky
SOBs. Is that mathematically correct to say? We're crazy early. That is, when early
means like in the history of the universe in the history. Okay, so given this model,
how do we make sense of that? If we're super, can we just be the lucky ones?
Well, 10 to the 18 lucky, you know, how lucky do you feel? So, you know,
that's pretty lucky, right? You know, 10 to the 18 is a billion billion. So then if you were just
being honest and humble, what does that mean? It means one of the assumptions
that calculated this crazy early must be wrong. Yeah, that's what it means. So the key assumption
we suggest is that the universe would stay empty. So most life would appear like 1,000 times
later than now, if everything would stay empty waiting for it to appear. So what is
not empty? So the grabby aliens are filling the universe right now, roughly at the moment they
filled half of the universe, and they've changed it. And when they fill everything, it's too late
for stuff like us to appear. But wait, hold on a second. Did anyone help us get lucky? If it's so
difficult, how did we get here? So it's like cancer, right? There's all these cells, each of which
randomly does or doesn't get cancer. And eventually some cell gets cancer. And, you know, we were one
of those. But hold on a second. Okay. But we got it early. Early compared to the prediction with
an assumption that's wrong. That's how we do a lot of, you know, theoretical analysis.
You have a model that makes a prediction that's wrong, then that helps you reject that model.
Okay, let's try to understand exactly where the wrong is. So the assumption is that the universe
is empty, stays empty, stays empty, and waits until this advanced life appears in trillions of years.
That is, if the universe would just stay empty, if there was just, you know, nobody else out there,
then when you should expect advanced life to appear, if you're the only one in the universe,
when should you expect to appear? You should expect to appear trillions of years in the future.
I see. Right. So this is a very sort of nuanced mathematical assumption. I don't think we can
intuit it cleanly with words. But if you assume that you're just waiting, the universe stays empty
and you're waiting for one life civilization to pop up, then it's going to, it should happen very
late, much later than now. And if you look at Earth, the way things happen on Earth, it happened
much, much, much, much, much earlier than it was supposed to according to this model, if you take
the initial assumption. Therefore, you can say, well, the initial assumption of the universe
staying empty is very unlikely. Right. Okay. And the other, the other alternative theory is the
universe is filling up and will fill up soon. And so we are typical for the origin data of things
that can't appear before the deadline. Before the, okay, it's filling up. So why don't we
see anything if it's filling up? Because they're expanding really fast. Close to the speed of
light? Exactly. So we will only see it when it's here. Almost here. Okay. What are the ways in
which we might see a quickly expanding sphere? This is both exciting and terrifying. It is terrifying.
It's like watching a truck, like driving at you at 100 miles an hour. And right. So we would see
spheres in the sky, at least one sphere in the sky growing very rapidly. And like very rapidly.
Right? Yes. Very rapidly. So there's, you know, a different scale here,
because we were just talking about 10 million years. You might see it 10 million
years in advance coming. I mean, you still might have a long warning. Again, the universe is
14 billion years old. The typical origin times of these things are spread over several billion
years. So the chance of one originating, you know, very close to you in time is very low.
So they still might take millions of years from the time you see it, from the time it gets here.
A lot of millions of years to be terrified of this sphere coming at you.
But, but, but coming at you very fast. So if they're traveling close to the speed of light.
But they're coming from a long way away. So remember, the rate at which they appear is one
per million galaxies. Right. So they're, they're roughly 100 galaxies away.
I see. So the delta between the speed of light and their actual travel speed is very important.
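That delta can be sketched with a quick back-of-the-envelope calculation. The one-per-million-galaxies rate implies the nearest grabby civilization is roughly the cube root of a million, about 100, galaxy-spacings away; the typical spacing of a few million light years and the example speeds are my own assumed numbers, not from the conversation.

```python
def warning_time_myr(distance_mly: float, speed_frac_c: float) -> float:
    """Millions of years between first seeing an expansion front
    distance_mly million light-years away and its arrival, if the
    front travels at speed_frac_c times the speed of light."""
    light_arrival = distance_mly               # light covers 1 Mly per Myr
    front_arrival = distance_mly / speed_frac_c
    return front_arrival - light_arrival

spacing_mly = 3.0                              # assumed typical galaxy spacing
n_spacings = round(1_000_000 ** (1 / 3))       # one per million galaxies -> ~100 away
distance = spacing_mly * n_spacings            # ~300 Mly to the nearest one

print(warning_time_myr(distance, 0.50))        # half lightspeed: ~300 Myr of warning
print(warning_time_myr(distance, 0.99))        # near lightspeed: ~3 Myr of warning
```

The closer the expansion speed is to the speed of light, the smaller the gap between seeing the sphere and being inside it, which is why the delta matters so much.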
Right. So even if they're going at, say, half the speed of light,
we'll have a long time then. Yeah. But what if they're traveling exactly at a speed of light?
Then we see them like, then we wouldn't have much warning, but that's less likely. Well,
we can't exclude it. And they could also be somehow traveling faster than the speed of light.
But I think we can exclude because if they could go faster than speed of light, then
they would just already be everywhere. So in a universe where you can travel
faster than the speed of light, you can go backwards in space time. So any time you appeared
anywhere in space time, you could just fill up everything. Yeah. And so anybody in the future,
whoever appeared, they would have been here by now. Can you exclude the possibility that
those kinds of aliens aren't already here? Well, you have to have a different discussion of that.
Right. So let's actually leave that, let's leave that discussion aside just to
linger and understand the grabby alien expansion, which is beautiful and fascinating. Okay.
So there's these giant expanding spheres of alien civilizations. Now,
when those spheres collide mathematically,
it's very likely that we're not the first collision of grabby alien civilizations,
I suppose is one way to say it. So there's like the first time the spheres touch each other,
they recognize each other, they meet. They recognize each other first before they meet.
They see each other coming. They see each other coming. And then so there's a bunch of them,
there's a combinatorial thing where they start seeing each other coming. And then there's a
third neighbor, it's like, what the hell? And then there's a fourth one. Okay. So what does that,
you think, look like? What lessons from human nature that's the only data we have?
Well, can you draw up? So the story of the history of the universe here is what I would
call a living cosmology. So what I'm excited about in part by this model is that it lets us
tell a story of cosmology where there are actors who have agendas. So most ancient peoples, they
had cosmologies, the stories they told about where the universe came from and where it's going and
what's happening out there. And their stories, they like to have agents and actors, gods or
something out there doing things. And lately, our favorite cosmology is dead, kind of boring.
We're the only activity we know about or see and everything else just looks dead and empty.
But this is now telling us, no, that's not quite right. At the moment, the universe is filling
up. And in a few billion years, it'll be all full. And from then on, the history of the universe
will be the universe full of aliens. Yeah. So that's a, it's a really good reminder,
a really good way to think about cosmologies. We're surrounded by a vast darkness. And we
don't know what's going on in that darkness until the light from whatever generates light
arrives here. So we kind of, yeah, we look up at the sky, okay, there are stars. Oh, they're pretty.
But you don't think about the giant expanding spheres of aliens. Right.
Can't see them. But now we're looking at the clock. If you're clever, the clock tells you. So I like
the analogy with the ancient Greeks. So you might think that an ancient Greek, you know,
staring at the universe couldn't possibly tell how far away the sun was or how far away the
moon is or how big the earth is, that all you can see is just big things in the sky you can't
tell. But they were clever enough actually to be able to figure out the size of the earth
and the distance to the moon and the sun and the size of the moon and sun. That is,
they could figure those things out actually by being clever enough. And so similarly,
we can actually figure out where are the aliens out there in space time by being clever about
the few things we can see, one of which is our current date. And so now that you have this living
cosmology, we can tell the story that the universe starts out empty. And then at some point, things
like us appear very primitive. And then some of those just stop being quiet and expand. And then
for a few billion years, they expand and then they meet each other. And then for the next 100
billion years, they commune with each other. That is, the usual models of cosmology say that in
roughly 150 billion years, the expansion of the universe will happen so much that all you'll have
left is some galaxy clusters that are sort of disconnected from each other. But before then,
for the next 100 billion years, excuse me, they will interact. There will be this community of all
the grabby alien civilizations. And each one of them will hear about and even meet thousands of
others. And we might hope to join them someday and become part of that community. That's an
interesting thing to aspire to. Yes. Interesting is an interesting word. Is the universe of alien
civilizations defined by war as much or more than war has defined human history?
I would say it's defined by competition. And then the question is how much competition implies war.
So up until recently, competition defined life on Earth. Competition between species and organisms
and among humans, competitions among individuals and communities. And that competition often took
the form of war in the last 10,000 years. Many people now are hoping or even expecting to sort
of suppress and end competition in human affairs. They regulate business competition. They prevent
military competition. And that's a future I think a lot of people will like to continue and
strengthen. People will like to have something close to world government or world governance or
at least a world community. And they will like to suppress war and many forms of business and
personal competition over the coming centuries. And they may like that so much that they prevent
interstellar colonization, which would become the end of that era. That is, interstellar colonization
would just return severe competition to human or our descendant affairs. And many civilizations may
prefer that and ours may prefer that. But if they choose to allow interstellar colonization,
they will have chosen to allow competition to return with great force. That is, there's really
not much of a way to centrally govern a rapidly expanding sphere of civilization. And so I think
that's one of the most solid things we can predict about grabby aliens is they have accepted
competition and they have internal competition. And therefore, they have the potential for competition
when they meet each other at the borders. But whether that's military competition is more of
an open question. So military meaning physically destructive, right. So there's a lot to say
there. So one idea that you kind of proposed is progress might be maximized through competition,
through some kind of healthy competition, some definition of healthy. So like,
constructive, not destructive competition. So, like, grabby alien civilizations would likely be
defined by competition, because they can expand faster, because competition
allows innovation and sort of the battle of ideas. The way I would take the logic is to say,
you know, competition just happens if you can't coordinate to stop it. And you probably can't
coordinate to stop it in an expanding interstellar way. So competition is a fundamental force
in the universe. It has been so far. And it would be within an expanding grabby alien civilization.
But we today have the chance, many people think and hope, of greatly controlling and limiting
competition within our civilization for a while. And that's an interesting choice
whether to allow competition to sort of regain its full force, or whether to suppress
and manage it. Well, one of the open questions that has been raised in the past less than 100 years
is whether our desire to lessen the destructive nature of competition or the destructive kind
of competition will be outpaced by the destructive power of our weapons. Sort of if nuclear weapons
and weapons of that kind become more destructive than our desire for peace, then all it takes is
one asshole at the party to ruin the party. It takes one asshole to make a delay, but not that
much of a delay on the cosmological scales we're talking about. So even a vast nuclear war,
if it happened here right now on Earth, it would not kill all humans. It certainly wouldn't kill
all life. And so human civilization would return within 100,000 years. So all the history of atrocities,
and if you look at the Black Plague, which is not a human-caused atrocity, or whatever.
There are a lot of military atrocities in history. Absolutely. In the 20th century.
Those are challenges to think about for human nature, but on the cosmic scale of time and space,
they do not stop the human spirit, essentially. Humanity goes on. Through all the atrocities,
it goes on. Most likely. So even a nuclear war isn't enough to destroy us or to stop our potential
from expanding. But we could institute a regime of global governance that limited competition,
including military and business competition of sorts, and that could prevent our expansion.
Of course, to play devil's advocate, global governance is centralized power,
power corrupts, and absolute power corrupts absolutely. One of the aspects of competition
that's been very productive is not letting any one person, any one country, any one center of power
become absolutely powerful. Because that's another lesson: there's something about
ego in the human mind that seems to be corrupted by power. So when you say global governance,
that terrifies me more than the possibility of war because it's...
I think people will be less terrified than you are right now. And let me try to
paint the picture from their point of view. This isn't my point of view, but I think it's going to
be a widely shared point of view. This is two devil's advocates arguing, two devils. So for the
last half century and into the continuing future, we actually have had a strong elite global community
that shares a lot of values and beliefs and has created a lot of convergence in global policy.
So if you look at electromagnetic spectrum or medical experiments or pandemic policy or
nuclear power energy or regulating airplanes or just in a wide range of area, in fact, the world
has very similar regulations and rules everywhere. And it's not a coincidence because they are part
of a world community where people get together at places like Davos, et cetera, where world elites
want to be respected by other world elites, and they have a convergence of opinion, and that
produces something like global governance, but without a global center. This is what human mobs
or communities have done for a long time. That is, humans can coordinate together on shared behavior
without a center by having gossip and reputation within a community of elites. And that is what
we have been doing and are likely to do a lot more of. So for example, one of the things that's
happening, say, with the war in Ukraine is that this world community of elites has decided that
they disapprove of the Russian invasion, and they are coordinating to pull resources together from
all around the world in order to oppose it. And they are proud of sharing that opinion,
and they feel that they are morally justified in their stance there. And that's
this kind of event that actually brings world elite communities together, where they come
together and they push a particular policy and position that they share and that they achieve
successes. And the same sort of passion animates global elites with respect to, say, global warming
or global poverty and other sorts of things. And they are, in fact, making progress on those
sorts of things through shared global community of elites. And in some sense, they are slowly
walking toward global governance, slowly strengthening various world institutions of
governance, but cautiously, carefully watching out for the possibility of a single power that might
corrupt it. I think a lot of people over the coming centuries will look at that history and like it.
It's an interesting thought. And thank you for playing that devil's advocate there.
But I think the elites too easily lose touch with morals and the best of human nature, and power
corrupts. Sure. But if their view is the one that determines what happens, their view may
end up there, even if you or I might criticize it from that point of view. So from a perspective
of minimizing human suffering, elites can use topics of the war in Ukraine and climate change
and all of those things to sell an idea to the world, with disregard to the amount of suffering
their actual actions cause. So, like, you can tell all kinds of narratives. That's the way propaganda
works. Right. Hitler really sold the idea that everything Germany was doing was justified: either
it's the victim defending itself against the cruelty of the world, or it's actually trying to bring
about a better world. So every power center thinks they're doing good. And so this is
the positive of competition, of having multiple power centers. This kind of
gathering of elites makes me very, very, very nervous. The dinners, the meetings in the closed
rooms. I don't know. But remember, we talked about separating our cold analysis of
what's likely or possible from what we prefer. And so this isn't exactly the time for
that. We might say, I would recommend we don't go this route of strong world governance.
And because I would say it'll preclude this possibility of becoming grabby aliens of filling
the next nearest million galaxies for the next billion years with vast amounts of activity
and interest and value of life out there. That's the thing we would lose by deciding that we
wouldn't expand, that we would stay here and keep our comfortable shared governance.
So wait, you think that global governance makes it more likely or less likely that we
expand out into the universe? So okay, this is the key. This is the key point. Right. Right.
So screw the elites. Wait, do we want to expand? So again, I want to separate my neutral analysis
from my evaluation and say, first of all, I have an analysis that tells us this is a key choice
that we will face and that it's key choice other aliens have faced out there. And it could be that
only one in 10 or one in 100 civilizations chooses to expand and the rest of them stay quiet. And
that's how it goes out there. And we face that choice too. And it'll happen sometime in the
next 10 million years, maybe the next thousand. But the key thing to notice from our point of view
is that even though you might like our global governance, you might like the fact that we've
come together, we no longer have massive wars and we no longer have destructive competition.
And that we could continue that, the cost of continuing that would be to prevent
interstellar colonization. That is, once you allow interstellar colonization, then you've lost control
of those colonies and whatever they change into, they could come back here and compete with you
back here as a result of having lost control. And I think if people value that global governance
and global community and regulation and all the things it can do enough, they would then
want to prevent interstellar colonization. I want to have a conversation with those people.
I believe that both for humanity, for the good of humanity, for what I believe is good in humanity
and for expansion, exploration, innovation, distributing the centers of power is very beneficial.
So this whole meeting of elites, and I've been very fortunate to meet quite a large number of
elites, they make me nervous, because it's easy to lose touch with reality. I'm nervous about that
in myself, to make sure that I never lose touch as I get sort of older, wiser, you know, how you
generally get, like, disrespectful of kids, kids these days. No, the kids are...
Okay, but I think you should hear a stronger case for their position. So I'm going to play
that for the elites. Yes, well, for the limiting of expansion and for the regulation of behavior.
Okay, can I linger on that? So you're saying those two are connected. So the human civilization
and alien civilizations come to a crossroads. They have to decide, do we want to expand or not?
And connected to that, do we want to give a lot of power to a central elite? Do we want to
distribute the power centers, which is naturally connected to the expansion? When you expand,
you distribute the power. If, say, over the next thousand years, we fill up the solar system,
right? We go out from Earth and we colonize Mars and we change a lot of things. Within a solar system,
still everything is within reach. That is, if there's a rebellious colony around Neptune,
you can throw rocks at it and smash it and then teach them discipline. Okay.
A central control over the solar system is feasible. But once you let it escape the
solar system, it's no longer feasible. But if you have a solar system that doesn't have a central
control, maybe broken into a thousand different political units in the solar system, then any
one part of that can allow interstellar colonization, and then it happens. That is, interstellar
colonization happens when only one party chooses to do it and is able to do it. And then it's
out there. So we can just say, in a world of competition, if interstellar colonization is
possible, it will happen and then competition will continue. And that will ensure the continuation
of competition into the indefinite future. And competition, we don't know, but competition
could take violent forms or productive forms. And the case I was going to make is that I think
one of the things that most scares people about competition is not just that it creates
holocausts and death on massive scales, is that it's likely to change who we are and what we value.
Yes. So this is the other thing with power. As we grow, as human civilization grows,
becomes multi-planetary, multi-solar-system potentially, how does that change us, do you think?
I think the more you think about it, the more you realize it can change us a lot.
So first of all, it's pretty dark, by the way. Well, it's just honest.
Right. Well, I was trying to agree there. I think the first thing you should say, if you look at
history, just human history over the last 10,000 years, if you really understood what people were
like a long time ago, you'd realize they were really quite different. Ancient cultures created
people who were really quite different. Most historical fiction lies to you about that.
It often offers you modern characters in an ancient world. But if you actually study history,
you will see just how different they were and how differently they thought. And they've changed
a lot many times, and they've changed a lot across time. So I think the most obvious prediction
about the future is even if you only have the mechanisms of change we've seen in the past,
you should still expect a lot of change in the future. But we have a lot bigger mechanisms
for change in the future than we had in the past. So I have this book called The Age of Em:
Work, Love, and Life when Robots Rule the Earth, and it's about what happens if brain emulations
become possible. So a brain emulation is where you take an actual human brain and you scan it in
fine spatial and chemical detail to create a computer simulation of that brain. And then
those computer simulations of brains are basically citizens in a new world. They work and they vote
and they fall in love and they get mad and they lie to each other. And this is a whole new world.
And my book is about analyzing how that world is different than our world, basically using
competition as my key lever of analysis. That is, if that world remains competitive, then I can
figure out how they change in that world, what they do differently than we do. And it's very
different. And it's different in ways that are shocking sometimes to many people and ways some
people don't like. I think it's an okay world, but I have to admit, it's quite different. And
that's just one technology. If we add dozens more technologies and changes into the future,
we should just expect it's possible to become very different than who we are. I mean,
in the space of all possible minds, our minds are a particular architecture, a particular structure,
a particular set of habits, and they are only one piece in a vast space of possibilities. The
space of possible minds is really huge. So yeah, let's linger on the space of possible minds for
a moment, just to sort of humble ourselves, how peculiar our peculiarities are. Like the fact that
we like a particular kind of sex, and the fact that we eat food through one hole,
and poop through another hole. And that seems to be a fundamental aspect of life. It's very
important to us. And that life is finite in a certain kind of way. We have a meat vehicle,
so death is very important to us. I wonder which aspects are fundamental or would be common
throughout human history, and also throughout history of life on earth, and throughout other
kinds of lives. Like what is really useful? You mentioned competition seems to be one
fundamental thing. I've tried to do analysis of where our distant descendants might go in terms
of what are robust features we could predict about our descendants. So again, I have this analysis
of sort of the next generation, so the next era after ours, that if you think of human history as
having three eras so far, there was the forager era, the farmer era, and the industry era,
then my attempt in The Age of Em is to analyze the next era after that. And it's very different. But
of course, there could be more and more eras after that. So analyzing a particular scenario
and thinking it through is one way to try to see how different the future could be. But that doesn't
give you some sort of sense of what's typical. But I have tried to analyze what's typical.
And so I have two predictions I think I can make pretty solidly. One thing is that we know at the
moment that humans discount the future rapidly. So we discount the future in terms of caring
about consequences, roughly a factor of two per generation. And there's a solid evolutionary
analysis why sexual creatures would do that. Because basically, your descendants only share
half of your genes, and your descendants are a generation away. So we care about our grandchildren
basically a factor of four less, because they're two generations later. So this actually
explains typical interest rates in the economy. That is, interest rates are greatly influenced
by our discount rates. And we basically discount the future by a factor of two per generation.
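The factor-of-two-per-generation claim can be turned into an annual rate with a small sketch; the 30-year generation time here is my own assumed number, not from the conversation.

```python
def annual_discount_rate(factor_per_gen: float = 2.0,
                         years_per_gen: float = 30.0) -> float:
    """Annual rate r such that (1 + r) ** years_per_gen == factor_per_gen."""
    return factor_per_gen ** (1.0 / years_per_gen) - 1.0

r = annual_discount_rate()
print(f"{r:.2%} per year")   # about 2.3% per year, in the ballpark of
                             # long-run real interest rates
```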
But that's a side effect of the way our preferences evolved as sexually selected
creatures, we should expect that in the longer run, creatures will evolve who don't discount the
future. They will care about the long run, and they will therefore not neglect it.
So for example, for things like global warming or things like that, at the moment, many
commenters are sad that basically ordinary people don't seem to care much, market prices
don't seem to care much, and ordinary people, it doesn't really impact them much because humans
don't care much about the long-term future. And futurists find it hard to motivate people and
to engage people about the long-term future because they just don't care that much. But that's a
side effect of this particular way that our preferences evolved about the future. And so
in the future, they will neglect the future less. And that's an interesting thing that we can predict
robustly. Eventually, maybe a few centuries, maybe longer, eventually our descendants will
care about the future. Can you speak to the intuition behind that? Is it useful to think
more about the future? Right. If evolution rewards creatures for having many descendants,
then if you have decisions that influence how many descendants you have,
then that would be good if you made those decisions. But in order to do that, you'll have
to care about them. You'll have to care about that future. So to push back, that's if you're
trying to maximize the number of descendants. But the nice thing about not caring too much about
the long-term future is you're more likely to take big risks or you're less risk averse.
And it's possible that both evolution and just life in the universe is rewarded,
rewards the risk takers. Well, we actually have analysis of the ideal risk preferences, too.
So there's literature on ideal preferences that evolutions should promote. And for example,
there's literature on competing investment funds and what the managers of those funds should care
about in terms of risk, various kinds of risks, and in terms of discounting. So managers of
investment funds should basically have logarithmic risk aversion in shared, correlated risk,
but be very risk-neutral with respect to uncorrelated risk. So that's a feature
that's predicted to happen about individual personal choices in biology and also for investment
funds. So that's also something we can say about the long run. What's correlated and uncorrelated
risk? If there's something that would affect all of your descendants, then if you take that risk,
you might have more descendants, but you might have zero. And that's just really bad
to have zero descendants. But an uncorrelated risk would be a risk that some of your descendants
would suffer, but others wouldn't. And then you have a portfolio of descendants. And so
that portfolio ensures you against problems with any one of them.
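The correlated/uncorrelated distinction can be illustrated with toy numbers of my own (not from the conversation): suppose a gamble doubles a descendant line with probability 0.6 and wipes it out otherwise. Taken as one shared, correlated bet, a single loss ends the whole lineage; spread as independent bets across a large portfolio of descendant lines, the favorable average multiplier of 0.6 × 2 = 1.2 per round dominates.

```python
def survival_prob_correlated(rounds: int, p: float = 0.6) -> float:
    """One shared coin flip per round: the lineage survives only if
    every round comes up a win."""
    return p ** rounds

def expected_growth_uncorrelated(rounds: int, p: float = 0.6) -> float:
    """Many independent flips average out, so the aggregate portfolio
    multiplies by roughly 2 * p each round."""
    return (2.0 * p) ** rounds

print(survival_prob_correlated(50))       # ~8e-12: extinction nearly certain
print(expected_growth_uncorrelated(50))   # ~9100x: reliable compound growth
```

This is the same logic that makes log-utility (Kelly-style) investors shun correlated risk: one bad shared draw sends the logarithm of wealth, or of descendant count, to minus infinity.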
I like the idea of portfolio of descendants. And we'll talk about portfolios with your idea of
the, you briefly mentioned, we'll return there with em, the Age of Em: Work, Love, and Life
when Robots Rule the Earth. Em, by the way, is emulated minds. So this is one of the-
Em is short for emulations. And it's kind of an idea of how we
might create artificial minds, artificial copies of minds, or human-like intelligences.
I have another dramatic prediction I can make about long-term processes.
Yes. Which is, at the moment, we reproduce as the result of a hodgepodge of preferences that
aren't very well integrated, but sort of in our ancestral environment, induce us to reproduce.
So we have preferences over being sleepy, and hungry, and thirsty, and wanting to have sex,
and wanting to be excited, excitement, et cetera. And so in our ancestral environment,
the packages of preferences that we evolved to have did induce us to have more descendants.
And that's why we're here. But those packages of preferences are not a robust way to promote
having more descendants. They were tied to our ancestral environment, which is no longer true.
So that's one of the reasons we are now having a big fertility decline, because in our current
environment, our ancestral preferences are not inducing us to have a lot of kids,
which is, from evolution's point of view, a big mistake. We can predict that in the longer run,
there will arise creatures who just abstractly know that what they want is more descendants.
That's a very robust way to have more descendants, is to have that as your direct preference.
First of all, you're thinking it's so clear. I love it. So mathematical, and thank you
for thinking so clear with me, and bearing with my interruptions, and going on the tangents when
we go there. So you're just clearly saying that successful long-term civilizations will prefer
to have descendants, more descendants. Not just prefer consciously and abstractly prefer,
that is, it won't be the indirect consequence of other preferences. It will just be the thing
they know they want. There'll be a president in the future that says, we must have more sex.
We must have more descendants and do whatever it takes to do that.
Whatever. We must go to the moon and do the other things. Not because they're easy,
but because they're hard, but instead of the moon, let's have lots of sex. Okay. But there's a lot
of ways to have descendants, right? Right. So that's the whole point. When the world gets
more complicated, and there are many possible strategies, it's having that as your abstract
preference that will force you to think through those possibilities and pick the one that's most
effective. So just to clarify, descendants doesn't necessarily mean the narrow definition of
descendants, meaning humans having sex and then having babies. Exactly. You can have artificial
intelligence systems in whom you instill some capability of cognition and perhaps even
consciousness. You can also create genetic and biological clones of yourself, or slightly modified
clones, thousands of them. Right. So all kinds of descendants. It could be descendants in the
space of ideas too, but somehow we no longer exist in this meat vehicle. It's now just like
whatever the definition of a life form is, you have descendants of those life forms.
Yes. And they will be thoughtful about that. They will have thought about what counts as a
descendant. And that'll be important to them to have the right concept. So the they there is very
interesting who they are. But the key thing is we're making predictions that I think are somewhat
robust about what our distant descendants will be like. Another thing I think you would automatically
accept is they will almost entirely be artificial. And I think that would be the obvious prediction
about any aliens we would meet. That is, they would long since have given up reproducing biologically.
Well, it's like organic or something. It might be squishy and made out of hydrocarbons,
but it would be artificial in the sense of made in factories with designs on CAD things, right?
Factories with scale economy. So the factories we have made on earth today
have much larger scale economies than the factories in our cells. So the factories in
our cells are, they're marvels, but they don't achieve very many scale economies. They're tiny
little factories. But they're all factories. Yes. Factories on top of factories. So everything,
the factories that are designed are different than sort of the factories that have evolved.
I think the nature of the word design is very interesting to uncover there. But let me,
in terms of aliens, let me go, let me analyze your Twitter like it's Shakespeare. Okay. There's a
tweet that says: define "hello" alien civilizations as ones that might, in the next million
years, identify humans as intelligent and civilized, travel to Earth, and say hello by making their
presence and advanced abilities known to us. The next 15 polls, this is a Twitter thread,
the next 15 polls ask about such hello aliens. And what these polls ask is your Twitter followers,
what they think those aliens would be like certain particular qualities. So
poll number one is what percent of hello aliens evolved from biological species with two main
genders? And, you know, the popular vote is above 80%. So most of them have two genders.
What do you think about that? I'll ask you about some of these, because it's so interesting,
it's such an interesting question. It is a fun set of questions. Yes, I like a fun set of questions.
So the genders as we look through evolutionary history, what's the usefulness of that as opposed
to having just one or like millions? So there's a question in evolution of life on earth. There are
very few species that have more than two genders. There are some, but they aren't very many.
But there's an enormous number of species that do have two genders, much more than one. And so
there's a literature on why did multiple genders evolve? And that's sort of what's the point of
having males and females versus hermaphrodites. So most plants are hermaphrodites. That is, they would
mate male female, but each plant can be either role. And then most animals have chosen to split
into males and females. And then they're differentiating the two genders. And, you know,
there's an interesting set of questions about why that happens. Because you can do selection.
You basically have one gender compete for the affection of the other. And there's sexual
partnership that creates the offspring. So there's sexual selection. It's nice to have, at a party, dance partners. And then each one gets to choose based on
certain characteristics. And that's an efficient mechanism for adapting to the environment,
being successfully adapted to the environment. It does look like there's an advantage.
If you have males, then the males can take higher variance. And so there can be stronger selection among the males in terms of weeding out genetic mutations, because the males have higher variance in their mating success. Sure. Okay. Question number two. What percent of hello
aliens evolved from land animals as opposed to plants or ocean slash air organisms? By the way,
I did recently see that only 10% of species on Earth are in the ocean. So there's a
lot more variety on land. There is. It's interesting. So why is that? I don't even, I can't even
intuit exactly why that would be. Maybe survival on land is harder. And so you get a lot. The story
that I understand is it's about small niches. So speciation can be promoted by having multiple different niches. So in the ocean, species are larger. That is, there are more creatures in
each species because the ocean environments don't vary as much. So if you're good in one place,
you're good in many other places. But on land, especially in rivers, rivers contain an enormous
percentage of the kinds of species on land because they vary so much from place to place.
And so a species can be good in one place and then other species can't really compete because
they came from a different place where things are different. So it's a remarkable fact actually
that speciation promotes evolution in the long run. That is, more evolution has happened on land
because there have been more species on land because each species has been smaller.
And that's actually a warning about something called rot that I've thought a lot about,
which is one of the problems with even a world government, which is large systems of software
today just consistently rot and decay with time and have to be replaced. And that plausibly also
is a problem for other large systems, including biological systems, legal systems, regulatory
systems. And it seems like large species actually don't evolve as effectively as small ones do.
And that's an important thing to notice. And that's actually different from ordinary
sort of evolution in economies on earth in the last few centuries, say. On earth,
the more technical evolution and economic growth happens in larger integrated cities and nations.
But in biology, it's the other way around. More evolution happened in the fragmented species.
Yeah. It's such a nuanced discussion because you can also push back in terms of nations and at least
companies. It's like large companies seem to evolve less effectively. They have more resources, but they don't even have better resilience. When you look at the scale of decades and centuries, it seems like a lot of large companies die.
But still large economies do better. Large cities grow better than small cities. Large
integrated economies like the United States or the European Union do better than small fragmented
ones. That's a very interesting long discussion. But so most of the people, and obviously votes on
Twitter represent the absolute objective truth of things. But an interesting question about oceans
is that, okay, remember I told you about how most planets would last for trillions of years
and life on them would appear later, right? So people have tried to explain why life appeared on earth by saying,
oh, all those planets are going to be unqualified for life because of various problems. That is,
they're around smaller stars which last longer and smaller stars have some things like more
solar flares, maybe more tidal locking. But almost all of these problems with longer lived
planets aren't problems for ocean worlds. And a large fraction of planets out there are ocean
worlds. So if life can appear on an ocean world, then that pretty much ensures that these planets
that last a very long time could have advanced life, because a huge fraction of planets are ocean worlds. So that's actually an open question. So when you say, sorry, when you say life appears, you're kind of saying life and intelligent life. So that's an open question: is it land? And I suppose the question behind the Twitter poll, which is, for a grabby alien civilization that comes to say hello, what's the chance that they first began their early steps, the difficult steps they took, on land? 80% of people on Twitter think it's very likely on land. What do you think?
I think people are discounting ocean worlds too much. That is, I think people tend to assume that
whatever we did must be the only way it's possible. And I think people aren't giving enough credit for
other possible paths. But dolphins, water world, by the way, people criticize that movie. I love
that movie. Kevin Costner can do me no wrong. Okay, next question. What percent of hello aliens
once had a nuclear war with greater than 10 nukes fired in anger? So not out of incompetence or as an accident, but intentional firing of nukes. And less than 20% was the most popular vote.
That just seems wrong to me. So like, I wonder what, so most people think once you get nukes,
we're not going to fire them. They believe in the power of game theory. I think they're assuming that
if you had a nuclear war, then that would just end civilization for good. I think that's the
thinking. That's the main thing. And I think that's just wrong. I think you could rise again
after a nuclear war. It might take 10,000 years or 100,000 years, but it could rise again.
So what do you think about mutually assured destruction as a force to prevent people from
firing nuclear weapons? That's a question that, to a terrifying degree, has been raised anew now with what's going on. Clearly it has had an effect. The question is just how strong an effect for how
long? Clearly we have not gone wild with nuclear war and clearly the devastation that you would get
if you initiated nuclear war is part of the reasons people have been reluctant to start a war. The
question is just how reliably will that ensure the absence of a war? Yeah, the night is still
young. Exactly. It's been 70 years or whatever it's been. But what do you think? Do you think
we'll see nuclear war in the century? I don't know in the century, but it's the
sort of thing that's likely to happen eventually. That's a very loose statement. Okay, I understand.
Now this is where I pull you out of your mathematical model and ask a human question.
What do you think on this particular question? I think we've been lucky that it hasn't happened
so far. But what is the nature of nuclear war? Let's think about this. There is
dictators. There's democracies.
Miscommunication. How do wars start? World War I, World War II. So the biggest datum here is that
we've had an enormous decline in major war over the last century. So that has to be taken into
account now. So the problem is war is a process that has a very long tail. That is, there are
rare, very large wars. So the average war is much worse than the median war because of this long
tail. And that makes it hard to identify trends over time. So the median war has clearly gone
way down in the last century, the median rate of war. But it could be that's because the tail has gotten thicker, and in fact the average war is just as bad. But most of the damage is going to be in the big wars. So that's the thing we're not so sure about. There's no strong data on wars which, because of the destructive nature of the weapons, kill hundreds of millions of people.
There's no data on this. But we can start intuiting. But we can see that the power law,
we can do a power law fit to the rate of wars, and it's a power law with a thick tail. So it's
one of those things that you should expect most of the damage to be in the few biggest ones. So
that's also true for pandemics and a few other things. For pandemics, most of the damages
in the few biggest ones. So the median pandemic so far is less than the average that you should
expect in the future. But that fitting of data is very questionable because
Yeah, everything you said is correct. The question is, what can we infer about the
future of civilization threatening pandemics or nuclear war from studying the history of the
20th century? So you can't just fit it to the data, the rate of wars and the destructive
nature. That's not how nuclear war will happen. Nuclear war happens with two assholes or idiots
that have access to a button. Small wars happen that way too. No, I understand that. But it's
very important. Small wars aside, it's very important to understand the dynamics, the human
dynamics and the geopolitics of the way nuclear war happens in order to predict how we can minimize
the chance of- But it is a common and useful intellectual strategy to take something that
could be really big but is often very small, and fit the distribution to the data on small things, of which you have a lot, and then ask: do I believe the big things are really that different? So sometimes it's reasonable to say, like with tornadoes or even pandemics or
something. The underlying process might not be that different from the big and small ones.
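Hanson's thick-tail point, that the average event is far worse than the median because a few huge events dominate, can be sketched with a tiny simulation. This is an illustrative sketch, not a fit to real war or pandemic data; the Pareto shape parameter here is assumed:

```python
import random

random.seed(0)

# Draw "event sizes" from a Pareto distribution with a thick tail.
# Shape alpha = 1.5 (assumed, purely illustrative) gives a finite mean
# but a very heavy tail, loosely mimicking war or pandemic damage.
alpha = 1.5
sizes = sorted(random.paretovariate(alpha) for _ in range(100_000))

median = sizes[len(sizes) // 2]
mean = sum(sizes) / len(sizes)

# Share of total "damage" contributed by the biggest 1% of events.
top_1_percent_share = sum(sizes[-1000:]) / sum(sizes)

print(f"median: {median:.2f}")   # near the theoretical 2**(1/alpha) ~ 1.59
print(f"mean:   {mean:.2f}")     # well above the median
print(f"top 1% share of damage: {top_1_percent_share:.0%}")
```

The mean sits well above the median, and a large share of the total is concentrated in the top 1% of draws, which is why a falling median war rate need not mean falling expected damage.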
It might not be. The fact that mutually assured destruction seems to work to some degree
shows you that to some degree it's different than the small wars. So it's a really important
question to understand is are humans capable, one human, like how many humans on earth? If I
give them a button now, say you pressing this button will kill everyone on earth. Everyone,
right? How many humans will press that button? I want to know those numbers like day to day,
minute to minute. How many people have that much irresponsibility, evil, incompetence,
ignorance, whatever word you want to assign. There's a lot of dynamics to the psychology
that leads you to press that button. But how many? My intuition is the number,
the more destructive the press of a button, the fewer humans you find and the number gets very
close to zero very quickly, especially among people who have access to such a button. But that's perhaps more a hope than a reality. Unfortunately, we don't have good data on this,
which is like how destructive are humans willing to be?
So I think for part of this, you just have to ask what time scales you're looking at,
right? So if you say, if you look at the history of war, we've had a lot of wars
pretty consistently over many centuries. So if you ask, will we have a nuclear war in the
next 50 years, I might say, well, probably not. If I say 500 or 5000 years, if the same sort of
risks are underlying and they just continue, then you have to add that up over time and think
the risk is getting a lot larger the longer a timescale we're looking at.
But okay, let's generalize nuclear war because what I was more referring to is something that
kills more than 20% of humans on earth and injures or makes the other 80% suffer horribly,
survive but suffer. That's what I was referring to. So when you look at 500 years from now,
there might not be nuclear war, there might be something else that kind of has that destructive effect. And I don't know, these feel like novel questions in the history of humanity. I just don't know. I think since nuclear weapons, there's been engineered pandemics, for example, robotics, nanobots. It just seems like a real new possibility that we have to contend with, and we don't have good models, from my perspective.
So if you look on say the last 1000 years or 10,000 years, we could say we've seen a certain
rate at which people are willing to inflict big destruction in terms of war.
Yes. Okay. And if you're willing to project that data forward, then I think like if you
want to ask over periods of thousands or tens of thousands of years, you would have a reasonable
data set. So the key question is what's changed lately? Yes. Okay. And so a big question of which
I've given a lot of thought to what are the major changes that seem to have happened in culture
and human attitudes over the last few centuries and what's our best explanation for those so
that we can project them forward into the future? And I have a story about that, which is the story
that we have been drifting back toward forager attitudes in the last few centuries as we get
rich. So the idea is we spent a million years being a forager and that was a very sort of
standard lifestyle that we know a lot about foragers sort of live in small bands. They
make decisions cooperatively, they share food, they don't have much property, etc.
And humans liked that. And then 10,000 years ago, farming became possible, but it was only
possible because we were plastic enough to really change our culture. Farming styles and
cultures are very different. They have slavery, they have war, they have property, they have
inequality, they have kings, they stay in one place instead of wandering, they don't have as
much diversity of experience or food, they have more disease. This farming life is just very
different. But humans were able to sort of introduce conformity and religion and all sorts of things
to become just a very different kind of creature as farmers. Farmers are just really different
than foragers in terms of their values and their lives. But the pressures that made foragers into
farmers were in part mediated by poverty. Farmers are poor. And if they deviated from the farming
norms that people around them supported, they were quite at risk of starving to death. And then in
the last few centuries, we've gotten rich. And as we've gotten rich, the social pressures that
turned foragers into farmers have become less persuasive to us. So for example, a farming
young woman who was told, if you have a child out of wedlock, you and your child may starve,
that was a credible threat. She would see actual examples around her to make that
a believable threat. Today, if you say to a young woman, you shouldn't have a child out of wedlock,
she will see other young women around her doing okay that way. We're all rich enough to be able
to afford that sort of a thing. And therefore, she's more inclined, often to go with her
inclinations or sort of more natural inclinations about such things rather than to be pressured to
follow the official farming norms that you shouldn't do that sort of thing. And all through our
lives, we have been drifting back toward forager attitudes because we've been getting rich.
And so aside from at work, which is an exception, but elsewhere, I think this explains trends toward
less slavery, more democracy, less religion, less fertility, more promiscuity, more travel,
more art, more leisure, fewer work hours. All these trends are basically explained by
becoming more forager-like. And much science fiction celebrates this. Star Trek or the Culture
novels, people like this image that we are moving toward this world where basically like foragers
were peaceful, we share, we make decisions collectively, we have a lot of free time,
we are into art. So forager is a word and it's a loaded word because it's connected to
the actual, what life was actually like at that time. As you mentioned, we sometimes don't do a
good job of telling accurately what life was like back then. But you're saying if it's not exactly
like foragers, it rhymes in some fundamental way. You also said peaceful. Is it obvious that a forager
with a nuclear weapon would be peaceful? I don't know if that's 100% obvious.
So again, we know a fair bit about what foragers' lives were like. The main sort of violence they
had would be sexual jealousy. They were relatively promiscuous and so there'd be a lot of jealousy.
But they did not have organized wars with each other. That is, they were at peace with their
neighboring forager bands. They didn't have property in land or even in people. They didn't
really have marriage. And so they were, in fact, peaceful.
When you think about large scale wars, they don't start large scale wars. They didn't have coordinated large scale wars in the way chimpanzees do. Chimpanzees do have wars between one tribe of chimpanzees and another, but human foragers do not. Farmers returned, of course, to the more chimpanzee-like style. Well, that's a hopeful message. If we could return real quick
to the Hello Aliens Twitter thread, one of them is really interesting about language. What percent
of Hello Aliens would be able to talk to us in our language? This is the question of communication.
It actually gets to the nature of language. It also gets to the nature of how advanced you
expect them to be. I think some people see that we have advanced over the last thousands of years
and we aren't reaching any sort of limit. And so they tend to assume it could go on forever.
And I actually tend to think that within, say, 10 million years, we will sort of
max out on technology. We will sort of learn everything that's feasible to know for the most
part. And then obstacles to understanding would more be about cultural differences,
like ways in which different places had just chosen to do things differently.
And so then the question is, is it even possible to communicate across some cultural
differences? And I might think, yeah, I could imagine some maybe advanced aliens which become
so weird or different from each other, they can't communicate with each other. But we're probably
pretty simple compared to them. So I would think, sure, if they wanted to, they could communicate
with us. So it's the simplicity of the recipient. Just to push back, let's explore the possibility where that's not the case. Can we communicate with ants? I find this idea that we're not very good at communicating in general. Oh, you're saying,
all right, I see. You're saying once you get orders of magnitude better at communicating.
Once they had maxed out on all communication technology in general, and they just understood
in general how to communicate with lots of things, and had done that for millions of years.
But you have to be able to, this is so interesting, as somebody who cares a lot about empathy and
imagining how other people feel, communication requires empathy, meaning you have to truly
understand how the other person, the other organism sees the world. It's like a four-dimensional
species talking to a two-dimensional species. It's not as trivial, to me at least, as it might
seem. So let me reverse my position a little because I'll say, well, the whole Hello Aliens
question really combines two different scenarios that we're slipping over. So one scenario would
be that the Hello Aliens would be like grabby aliens. They would be just fully advanced.
They would have been expanding for millions of years. They would have a very advanced civilization.
And then they would finally be arriving here after a billion years perhaps of expanding,
in which case they're going to be crazy advanced, at some maximal level. But
the Hello Aliens question is about aliens we might meet soon, which might be sort of UFO aliens. And
UFO aliens probably are not grabby aliens. How do you get here if you're not a grabby alien?
Well, they would have to be able to travel, but they would not be expansive.
So if it's a road trip, it doesn't count as grabby. So we're talking about expanding, about colonizing. The question is, if UFOs, some of them, are aliens, what kind of aliens
would they be? This is sort of the key question you have to ask in order to try to interpret
that scenario. The key fact we would know is that they are here right now, but the universe
around us is not full of an alien civilization. So that says right off the bat that they chose
not to allow massive expansion of a grabby civilization. Is it possible that they chose it,
but we just don't see them yet? These are the stragglers, the journeymen, the...
So the timing coincidence is it's almost surely if they are here now, they are much older than
us. They are many millions of years older than us. And so they could have filled the galaxy in that
last millions of years if they had wanted to. That is, they couldn't just be right at the edge. That's very unlikely. Most likely they would have been around waiting for us for a long time. They could have come here any time in the last millions of years, and they've just chosen, they've been waiting around for this, or they just chose to come recently. But the timing coincidence: it would be crazily unlikely that they just happened to be able to get here, say, in the last 100 years. They would no doubt have been able to get here far earlier than that.
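The timing-coincidence argument is just arithmetic. As a rough sketch (the 100-million-year window is an assumed illustrative figure, not a number from the conversation):

```python
# If an already-capable civilization's arrival time were uniformly random
# over a long window in which it could have reached Earth, the chance that
# its first arrival lands in just the last century is vanishingly small.
window_years = 100_000_000  # assumed span over which they could have come
recent_years = 100          # roughly the modern era of UFO reports

p_arrival_in_last_century = recent_years / window_years
print(p_arrival_in_last_century)  # 1e-06
```

So unless their presence is correlated with us, say, they were already here watching and chose this moment, the "they only just managed to get here" reading is heavily disfavored.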
Again, we don't know. So with this fringe stuff like UFO sightings on Earth, we don't know if this kind of increase in sightings has anything to do with actual visitation.
Right. I was just talking about the timing. They arose at some point in space time.
And it's very unlikely that that was just at a point that they could just barely get here recently.
Almost surely they could have been here much earlier. And throughout the stretch of several
billion years that Earth existed, they could have been here often. Exactly. So they could have therefore
filled the galaxy a long time ago if they had wanted to. Let's push back on that. The question
to me is, isn't it possible that the expansion of a civilization is much harder than the
travel? The sphere of the reachable is different than the sphere of the colonized.
So isn't it possible that the sphere of places where the stragglers go, the different people
that journey out, the explorers, is much, much larger and grows much faster than the civilization?
Right. So in which case they would visit us. There's a lot of visitors, the grad students
of the civilization. They're exploring, they're collecting the data, but we're not yet going
to see them. And by yet, I mean across millions of years.
The time delay between when the first thing might arrive and when colonists could arrive en masse
and do a mass amount of work is cosmologically short. In human history, of course, sure,
there might be a century between that, but a century is just a tiny amount of time
on the scales we're talking about. So in computer science, there's ant colony optimization. It's true for ants too. So when the first ant shows up, if there's anything of value, it's likely the other ants will follow quickly. Relatively short. It's also true that
traveling over very long distances, probably one of the main ways to make that feasible
is that you land somewhere, you colonize a bit, you create new resources that then allow you to go
farther. Many short hops as opposed to a giant long journey. Exactly. Those hops require that you
are able to start a colonization of sorts along those hops, right? You have to be able to stop
somewhere, make it into a way station such that you can then support your moving farther.
So, there's been a lot of UFO sightings. What do you think about those UFO sightings? And if any of them are of extraterrestrial origin, and we don't see giant civilizations out in the sky, how do you make sense of that?
I want to do some clearing of throats, which people like to do on this topic.
Right? They want to make sure you understand they're saying this and not that, right? So
I would say the analysis needs both a prior and a likelihood. So the prior is what are the
scenarios that are all plausible in terms of what we know about the universe? And then the likelihood
is the particular actual sightings, like how hard are those to explain through various means.
I will establish myself as somewhat of an expert on the prior. I would say my studies and the things
I've studied make me an expert and I should stand up and have an opinion on that and be able to
explain it. The likelihood, however, is not my area of expertise. That is, I'm not a pilot,
I don't do atmospheric studies. I haven't studied in detail the various kinds of atmospheric phenomena or whatever that might be used to explain the particular sightings. I can
just say from my amateur stance, the sightings look damn puzzling. They do not look easy to dismiss.
The attempts I've seen to easily dismiss them seem to me to fail. It seems like this is pretty puzzling, weird stuff that deserves an expert's attention in terms of assessing what the likelihood is. So the analogy I would make is a murder trial. On average, if we say what's the
chance any one person murdered another person, as a prior probability, maybe 1 in 1,000 people get
murdered, maybe each person has 1,000 people around them who could plausibly have done it,
so the prior probability of a murder is 1 in a million. But we allow murder trials because
often evidence is sufficient to overcome a 1 in a million prior, because the evidence is often
strong enough, right? My guess, rough guess for the UFOs as aliens scenario, at least some of them,
is that the prior is roughly 1 in 1,000, much higher than in the usual murder trial, plenty high enough that
strong physical evidence could put you over the top to think it's more likely than not. But I'm not
an expert on that physical evidence. I'm going to leave that part to someone else. I'm going to say the prior is pretty high. This isn't a crazy scenario. So then I can elaborate on where my
prior comes from. What scenario could make the most sense of this data? My scenario, to make sense of it,
has two main parts. First is panspermia siblings. So panspermia is the hypothesized process by
which life might have arrived on Earth from elsewhere. And a plausible time for that, I mean,
it would have to happen very early in Earth history because we see life early in Earth history. And a
plausible time could have been during the stellar nursery where the sun was born with many other
stars in the same close proximity with lots of rocks flying around, able to move things from
one place to another. A rock with life on it, from some planet with life, came into that stellar nursery. It plausibly could have seeded many planets in that stellar nursery all at the
same time. They're all born at the same time, in the same place, pretty close to each other.
There's lots of rocks flying around. So a panspermia scenario would then create siblings,
i.e., there would be, say, a few thousand other planets out there. So after the nursery forms,
it drifts, it separates. They drift apart. And so out there in the galaxy, there would now be a
bunch of other stars all formed at the same time, and we can actually spot them in terms of their
spectrum. And they would have then started on the same path of life as we did with that life being
seeded, but they would move at different rates. And most likely, most of them would never reach an
advanced level before the deadline. But maybe one other did, and maybe it did before us.
So if they did, they could know all of this and they could go searching for their siblings.
That is, they could look in the sky for the other stars with the spectrum that matches the spectrum
that came from this nursery. They could identify their sibling stars in the galaxy, the thousand
of them. And those would be of special interest to them because they would think, well, life might
be on those. And they could go looking for them. It's just such a brilliant mathematical, philosophical,
physical, biological idea of panspermia siblings, because we all kind of started a similar time
in this local pocket of the universe. And so that changes a lot of the math.
So that would create this correlation between when advanced life might appear. No longer just random independent places in spacetime, there'd be this cluster, perhaps.
And that allows interaction between elements of the cluster, yes.
Non-grabby alien civilizations, primitive alien civilizations like us, interacting with others, and they might be a little bit ahead. That's so fascinating.
Well, they would probably be a lot ahead. So the puzzle is, if they happen before us,
they probably happen hundreds of millions of years before us.
But less than a billion.
Less than a billion, but still plenty of time that they could have become grabby and filled
the galaxy and gone beyond. So the fact is they chose not to become grabby. That would have to
be the interpretation. If we have panspermia.
So plenty of time to become grabby, you said.
Yes, they should be calling it.
And they chose not to.
Are we sure about this?
Again, 100 million years is enough.
100 million. So I told you before that I said within 10 million years,
our descendants will become grabby or not.
And they'll have that choice.
Okay.
And so they clearly arose more than 10 million years earlier than us.
So they chose not to.
But still go on vacation, look around.
So just not grabby.
If they chose not to expand, that's going to have to be a rule they set to not allow any
part of themselves to do it. Like if they let any little ship fly away with the ability to
create a colony, the game's over. Then the universe becomes grabby from their origin
with this one colony. So in order to prevent their civilization being grabby, they have to
have a rule they enforce pretty strongly that no part of them can ever try to do that.
Through a global authoritarian regime, or through something that's internal to them, meaning it's part of the nature of life that it doesn't want to expand as it gets advanced.
A political officer in the brain or whatever.
Yes. There's something in human nature, or, like, alien nature, that prevents you from it.
That as you get more advanced, you become lazier and lazier in terms of exploration and expansion.
So I would say they would have to have enforced a rule against expanding.
And that rule would probably make them reluctant to let anyone travel very far. Any one vacation trip far away could risk an expansion from that vacation trip.
So they would probably have a pretty tight lid on just allowing any travel out from their origin
in order to enforce this rule. But then we also know, well, they would have chosen to come here.
So clearly, they made an exception from their general rule to say, okay, but an expedition
to Earth, that should be allowed. It could be intentional exception or
incompetent exception. But if incompetent, then they couldn't have maintained this policy of not allowing any expansion over 100 million years. So we have to see that they not just had a policy to try, they succeeded over 100 million years in preventing the expansion. That's substantial competence. Let me think about this. So you
don't think, in 100 million years, there could be a barrier, a technological barrier, to becoming expansionary?
Imagine the Europeans had tried to prevent anybody from leaving Europe to go to the New World.
And imagine what it would have taken to make that happen over 100 million years.
Yeah, it's impossible. They would have to have very strict, you know, guards at the border saying, no, you can't go.
But just to clarify, you're not suggesting that's actually possible.
I am suggesting it's possible. I don't know how. Maybe it's my silly human brain, maybe it's a brain that values freedom, but I don't know how you can keep,
no matter how much force, no matter how much censorship or control or so on. I just don't
know how you can keep people from exploring into the mysterious, into the unknown.
You're thinking of people, we're talking aliens. So remember, there's a vast space
of different possible social creatures they could have evolved from, different cultures
they could be in, different kinds of threats. I mean, there are many things that you talked
about that most of us would feel very reluctant to do. This isn't one of those.
Okay, so if the UFO sightings represent alien visitors, how the heck are they getting here
under the panspermia siblings scenario? So panspermia siblings is one part of the scenario:
that's where they came from. And from that, we can conclude they had this rule against expansion,
and they've successfully enforced that. That also creates a plausible agenda for why they would be
here, that is to enforce that rule on us. That is, if we go out and expanding, then we have defeated
the purpose of this rule they set up. Interesting. Right. So they would be here to convince us to
not expand. Convince in quotes. Right. Through various mechanisms. So obviously,
one thing we conclude is they didn't just destroy us. That would have been completely
possible. So the fact that they're here and we are not destroyed means that they chose not to
destroy us. They have some degree of empathy or whatever their morals are that would make them
reluctant to just destroy us. They would rather persuade us than destroy their brethren. And so
they may have been, there's a difference in arrival and observation. They may have been observing for
a very long time. Exactly. And they arrived to ensure, not just to try to ensure,
that we don't become grabby. Which is, because we can see that they did not expand, they must
have enforced a rule against that. And they are therefore here to enforce it on us. That's a plausible interpretation.
Why they would risk this expedition when they clearly don't risk very many expeditions over
this long period? To allow this one exception. Because otherwise, if they don't, we may become
grabby. And they could have just destroyed us, but they didn't. And they're closely monitoring
our technological advancement. Right. Like, nuclear weapons is one thing
they would watch, though it might have less to do with nuclear weapons and more with nuclear energy.
And maybe they're monitoring fusion closely. Like, how clever are these apes getting?
I mean, so no doubt they have a button that if we get too uppity or risky, they can push the
button and ensure that we don't expand. But they'd rather do it some other way. So now
that explains why they're here and why they aren't out there. But there's another thing
that we need to explain. There's another key data we need to explain about UFOs if we're
going to have a hypothesis that explains them. And this is something many people have noticed,
which is they had two extreme options they could have chosen and didn't choose.
They could have either just remained completely invisible, clearly an advanced civilization
could have been completely invisible. There's no reason they need to fly around and be noticed.
They could just be in orbit in dark satellites that are completely invisible to us watching
whatever they want to watch. That would be well within their abilities. That's one thing they
could have done. The other thing they could do is just show up and land on the White House lawn,
as they say, and shake hands and make themselves really obvious. They could have done either of
those and they didn't do either of those. That's the next thing you need to explain about UFOs
as aliens. Why would they take this intermediate approach of hanging out near the edge of visibility
with somewhat impressive mechanisms, but not walking up and introducing themselves,
nor just being completely invisible? Okay. A lot of questions there. So one,
do you think it's obvious where the White House is or the White House lawn?
Obviously, where there are concentrations of humans, you could go up and introduce yourself.
But is humans the most interesting thing about Earth? Are you sure about this?
If they're worried about an expansion, then they would be worried about a civilization that
could be capable of expansion. Obviously, humans are the civilization on Earth that's
by far the closest to being able to expand. I just don't know if aliens obviously see
humans, like the individual humans, the organic meat vehicles, as the center of
focus for observing life on a planet. They're supposed to be really smart and advanced.
Like this shouldn't be that hard for them. But I think we're actually the dumb ones,
because we think humans are the important things, but it could be our ideas. It could
be something about our technologies. But that's mediated with us. It's correlated with us.
No, we make it seem like it's mediated by us humans. But the focus for alien civilizations might be
the AI systems or the technologies themselves. That might be the organism.
The human is the food, the source, of the organism that's under observation, versus like...
So, if what they wanted to have close contact with was something that was close to humans,
then they would be contacting those. And we would just incidentally see it, but we would still see it.
But don't you think they... Isn't it possible, taking their perspective,
isn't it possible that they would want to interact with some fundamental aspect that
they're interested in without interfering with it? And that's actually a very... No matter
how advanced you are, it's very difficult to do.
But that's puzzling. So, I mean, the prototypical UFO observation is a shiny,
big object in the sky that has very rapid acceleration and no apparent surfaces for
using air to manipulate at speed. And the question is, why that? Again, if they just...
For example, if they just wanted to talk to our computer systems, they could move some sort of
like a little probe that connects to a wire and reads and sends bits there,
they don't need a shiny thing flying in the sky.
But I don't think they would be... They would be looking for the right way to communicate,
the right language to communicate. Everything you just said, looking at the computer systems,
I mean, that's not a trivial thing. Coming up with a signal that us humans would not freak out
too much about, but also understand, might not be that trivial.
Well, so not freaking out a part is another interesting constraint. So again, I said,
the two obvious strategies are just to remain completely invisible and watch,
which would be quite feasible or to just directly interact, that's come out and be really very
direct. I mean, there's big things that you can see around. There's big cities,
there's aircraft carriers, there's lots of... If you wanted to just find a big thing and come
right up to it and tap it on the shoulder or whatever, that would be quite feasible,
then they're not doing that. So my hypothesis is that one of the other questions there was,
do they have a status hierarchy? And I think most animals on earth, who are social animals,
have status hierarchy. And they would reasonably presume that we have a status hierarchy.
And... Take me to your leader.
Well, I would say their strategy is to be impressive and sort of get us to see them
at the top of our status hierarchy, just to... That's how, for example, we domesticate dogs,
right? We convince dogs we're the leader of their pack, right? And we domesticate many
animals that way; we just swap in at the top of their status hierarchy and we say,
we're your top status animal, so you should do what we say, you should follow our lead.
So the idea would be, they are going to get us to do what they want by being top status.
You know, all through history, kings and emperors, etc., have tried to impress their citizens and
other people by having the bigger palace, the bigger parade, the bigger crown and diamonds,
right? Whatever. Maybe building a bigger pyramid, etc. Just... It's a very well-established
trend to just be high status by being more impressive than the rest.
To push back, when there are several orders of magnitude of power differential,
asymmetry of power, I feel like that status hierarchy no longer applies. It's like mimetic
theory. It's like... Most emperors are several orders of magnitude more powerful than any
one member of their empire. Let's increase that by even more. So like, if I'm interacting with
ants, I no longer feel like I need to establish my power with ants. I actually want to
lessen... I want to lower myself to the ants. I want to become the lowest possible ant so that
they would welcome me. So I'm less concerned about them worshiping me. I'm more concerned about them
welcoming me.
Well, it is important that you be non-threatening and that you be local. So I think, for example,
if the aliens had done something really big in the sky, you know, a hundred light years away,
that would be there, not here. And that could seem threatening. So I think their strategy
to be the high status would have to be to be visible, but to be here and non-threatening.
I just don't know if it's obvious how to do that. Like, take your own perspective. You see a planet
with relatively intelligent, like complex structures being formed, like, yeah, life forms.
We could see this on Titan or something like that, the moon, you know, or Europa.
You start to see not just primitive bacterial life, but multicellular life. And it seems to
form some very complicated cellular colonies, structures that they're dynamic. There's a
lot of stuff going on. Some gigantic cellular automata type of construct. How do you make yourself
known to them in an impressive fashion without destroying it? Like, we know how to destroy
potentially.
Right. So if you go touch stuff, you're likely to hurt it, right?
There's a good risk of hurting something by getting too close and touching it and interacting,
right?
Yeah, like landing on a White House lawn.
Right. So the claim is that their current strategy of hanging out at the periphery of
our vision and just being very clearly physically impressive, with very clear impressive
abilities, is at least a plausible strategy they might use to impress us and convince us, sort of,
we're at the top of their status hierarchy. And I would say if they came closer, not only
would they risk hurting us in ways that they couldn't really understand, but more plausibly,
they would reveal things about themselves we would hate. So if you look at how we treat
other civilizations on earth and other people, we are generally interested in foreigners and
people from other lands. And we are generally interested in their varying customs,
et cetera, until we find out that they do something that violates our moral norms, and then we hate them.
And these are aliens for God's sakes, right? There's just going to be something about them
that we hate. They eat babies, who knows what it is, but something they don't think is offensive,
but that they think we might find offensive. And so they would be risking a lot by revealing
a lot about themselves. We would find something we hated.
Interesting. But do you resonate at all with mimetic theory where we only feel this way about
things that are very close to us? So aliens are sufficiently different to where we'll be,
like, fascinated or terrified, but not like...
Right, but if they want to be at the top of our status hierarchy to get us to follow them,
they can't be too distant. They have to be close enough that we would see them that way.
But pretend to be close enough, right, and not reveal much. That mystery, that old Clint Eastwood
cowboy thing: say less.
The point is, we're clever enough that we can figure out their agenda just from the
fact that they're here. If we see that they're here, we can figure out, oh, they want us not to expand.
And look, they are this huge power, and they're very impressive. And a lot of us don't want to
expand. So that could easily tip us over the edge toward we already wanted to not expand. We already
wanted to be able to regulate and have a central community. And here are these very advanced smart
aliens who have survived for 100 million years, and they're telling us not to expand either.
This is brilliant. I love this so much.
Returning to panspermia siblings, just to clarify one thing, in that
framework, who originated, who planted it? Would it be a grabby alien civilization that planted
the siblings? Or no?
The simple scenario is that life started on some other planet billions of years ago,
and it went through part of the stages of evolution to advance life, but not all the way
to advanced life. And then some rock hit it, a piece of it got onto the rock, and that rock
drifted for maybe a million years until it happened upon a stellar nursery where it then
seeded many stars.
And something about that life, without being super advanced, was nevertheless resilient
to the harsh conditions of space.
There are some graphs that I've been impressed by that show sort of the level of genetic
information in various kinds of life in the history of Earth. And basically, we are now
more complex than the earlier life, but the earlier life was still pretty damn complex.
And so if you actually project this log graph back in history, it looks like it was many
billions of years ago when you get down to zero. So plausibly, you could say there was just a lot
of evolution that had to happen before you got here, because the simplest life we've ever seen in the
history of life on Earth was still pretty damn complicated. And so that's always been this puzzle.
How could life get to this enormously complicated level in the short period it seems to at the
beginning of Earth history? So it's only 300 million years at most when it appeared,
and then it was really complicated at that point. So Panspermia allows you to explain
that complexity by saying, well, it's been another five billion years on another planet,
going through lots of earlier stages where it was working its way up to the level of complexity
you see at the beginning of Earth.
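As an aside, the backward extrapolation in that log graph can be sketched numerically. The data points below are rough values I'm assuming purely for illustration, in the spirit of Sharov-style genome-complexity plots, not figures from this conversation:

```python
# Sketch of a log-linear extrapolation of genome complexity back in time.
# The (age, log-complexity) points are assumed, illustrative values only.
# (age in billions of years before present, log10 functional genome size in base pairs)
points = [
    (3.5, 5.0),  # early prokaryotes, ~10^5 bp (assumed)
    (2.0, 6.5),  # early eukaryotes, ~10^6.5 bp (assumed)
    (0.5, 7.5),  # early animals, ~10^7.5 bp (assumed)
    (0.0, 8.5),  # mammals, ~10^8.5 bp (assumed)
]

# Ordinary least-squares fit of log10(complexity) = slope * t + intercept,
# where t is time in Gyr, negative in the past.
xs = [-age for age, _ in points]
ys = [c for _, c in points]
n = len(points)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Extrapolate back to log10(complexity) = 0, the implied "zero complexity" origin.
origin_time = -intercept / slope  # in Gyr, negative = past
print(f"slope = {slope:.2f} per Gyr, implied origin ~ {-origin_time:.1f} Gyr ago")
# With these assumed points: implied origin ~8.9 Gyr ago, well before Earth's ~4.5 Gyr age.
```

The punchline is exactly the puzzle described above: a straight line on the log plot reaches zero complexity billions of years before Earth existed, which panspermia resolves by letting those early stages play out on an older planet.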
Well, we'll try to talk about other ideas of the origin of life. But let me return to UFO
sightings. Are there other explanations possible outside of panspermia siblings
that can explain no grabby aliens in the sky and yet an alien arrival on Earth?
Well, the other categories of explanations that most people will use is, well, first of all,
just mistakes like you're confusing something ordinary for something mysterious, right?
Or some sort of secret organization like our government is secretly messing with us
and trying to do a false flag ops or whatever, right? They're trying to convince the Russians
or the Chinese that there might be aliens and scare them into not attacking or something,
right? Because in the history of World War II, say, the US government did all these big fake operations
where they were faking a lot of big things in order to mess with people. So that's a possibility.
The government's been lying and faking things and paying people to lie about what they saw,
etc. That's a plausible set of explanations for the range of sightings seen. And another
explanation people offer is some other hidden organization on Earth. There's some secret
organization somewhere that has much more advanced capabilities than anybody's given it credit for.
For some reason, it's been keeping itself secret. They all sound somewhat implausible, but again,
we're looking for maybe one in a thousand sort of priors. The question is, could they be in that
level of plausibility? Can we just linger on this? First of all, you've written, talked about,
thought about so many different topics. You're an incredible mind. And I just thank you for
sitting down today. I'm almost, like, at a loss of which place to explore. But let me, on this topic,
ask about conspiracy theories. You've written about institutions and authorities.
What, this is a bit of a therapy session, but what do we make of conspiracy theories?
The phrase itself is pushing you in a direction. So clearly, in history, we've had many large
coordinated keepings of secrets, say the Manhattan Project. And there was hundreds of thousands
of people working on that over many years, but they kept it a secret. Clearly, many large military
operations have kept things secrets over even decades with many thousands of people involved.
So clearly, it's possible to keep some things secret over time periods. But the more people
you involve and the more time you assume and the less centralized the organization or the
less discipline it has, the harder it gets to believe. But we're just trying to calibrate,
basically, in our minds, which kind of secrets can be kept by which groups over what time periods
for what purposes, right? But let me, I don't have enough data. So I'm somebody, I hang out with
people and I love people. I love all things, really. And I just, I think that most people,
even the assholes, have the capacity to be good and they're beautiful and I enjoy them.
So the kind of data my brain, whatever the chemistry of my brain is that sees the beautiful
things is maybe collecting a subset of data that doesn't allow me to intuit the competence that
humans are able to achieve in constructing conspiracies. So for example, one thing
that people often talk about is like intelligence agencies, this like broad thing they say,
the CIA, the FSB, British intelligence. I've been fortunate, or unfortunate, enough
to never have gotten a chance, that I know of, to talk to any member of those intelligence agencies,
nor like, take a peek behind the curtain or the first curtain, I don't know how many levels
of curtains there are. And so I can't intuit it. In my interactions with government, I was
funded by DoD and DARPA and I've interacted, been to the Pentagon. Like, with all due respect
to my friends, lovely friends in government and there are a lot of incredible people,
but there is a very giant bureaucracy that sometimes suffocates the ingenuity of the human
spirit is one way I can put it, meaning they are, I just, it's difficult for me to imagine
extreme competence at a scale of hundreds or thousands of human beings. Now, that's
just my very anecdotal read of the situation. And so I try to build up my intuition
about centralized systems of government, how much conspiracy is possible, how much the
intelligence agencies or some other source can generate sufficiently robust propaganda that
controls the populace. If you look at World War II, as you mentioned, there have been extremely
powerful propaganda machines on the side of Nazi Germany, on the side of the Soviet Union,
on the side of the United States and all these different mechanisms. Sometimes they control
the free press through social pressures. Sometimes they control the press through the threat of
violence as you do in authoritarian regimes. Sometimes it's like deliberately the dictator
like writing the news, the headlines and literally announcing it. And something about human psychology
forces you to embrace the narrative and believe the narrative and at scale that becomes reality
when the initial spark was just a propaganda thought in a single individual's mind. So I don't,
I can't necessarily intuit of what's possible, but I'm skeptical of the power of human institutions
to construct conspiracy theories that cause suffering at scale, especially in this modern age
when information is becoming more and more accessible by the populace. Anyway, that's,
I don't know if you can elucidate-
You said it causes suffering at scale, but of course, say during wartime, the people who were
managing the various conspiracies, like D-Day or the Manhattan Project, thought that their
conspiracy was avoiding harm rather than causing harm. So if you can get a lot of people to think
that supporting the conspiracy is helpful, then a lot more might do that. And there's just a lot
of things that people just don't want to see. So if you can make your conspiracy the sort of thing
that people wouldn't want to talk about anyway, even if they knew about it, you're, you know,
most of the way there. So I have learned, over the years, many things that most ordinary
people should be interested in, but somehow don't know, even though the data has been very widespread.
So, you know, I have this book, The Elephant in the Brain, and one of the chapters is there on
medicine. And basically, most people seem ignorant of the very basic fact that when we do randomized
trials where we give some people more medicine than others, the people who get more medicine are
not healthier. That is, overall, in general, you just induce somebody to get more medicine because
you give them more budget to buy medicine, say, not a specific medicine, just the whole
category. And you would think that would be something most people should know about medicine.
You might even think that would be a conspiracy theory to think that would be hidden. But in
fact, most people never learn that fact. So just to clarify, just a general high level statement,
the more medicine you take, the less healthy you are. Randomized experiments don't find that fact.
They do not find that more medicine makes you more healthy. There's just no connection.
Oh, in randomized experiments, there's no relationship between more medicine.
So it's not a negative relationship, but it's just no relationship.
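To make the kind of null result being described concrete, here is a toy simulation of a randomized medicine-budget experiment. It is entirely my own illustrative construction with fake data, not the actual studies referenced: if health is driven by factors unrelated to the randomly assigned treatment, the estimated treatment effect comes out near zero.

```python
import random
import statistics

random.seed(0)
n = 10_000

# Randomly assign half the subjects an extra medicine budget (the treatment).
treated = [random.random() < 0.5 for _ in range(n)]

# In this toy world, health is driven by unrelated noise (genes, luck),
# not by the medicine budget -- the "no connection" null described above.
health = [random.gauss(0.0, 1.0) for _ in range(n)]

mean_treated = statistics.mean(h for h, t in zip(health, treated) if t)
mean_control = statistics.mean(h for h, t in zip(health, treated) if not t)
effect = mean_treated - mean_control
print(f"estimated treatment effect: {effect:+.3f}")  # near zero
```

Randomization is what gives the comparison its force: with random assignment, the treated and control groups differ only by chance, so a near-zero difference means the treatment itself has no detectable effect.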
And so the conspiracy theories would say that the businesses that sell you medicine don't want
you to know that fact. And then you're saying that part of this is also that people
just don't want to know. They just don't want to know. And so they don't learn this. So I've lived
in the Washington area for several decades now reading the Washington Post regularly. Every
week there was a special section on health and medicine. This fact was never mentioned in that section
of the paper in all the 20 years I read it. So do you think there is some truth to this
caricatured blue pill, red pill, where most people don't want to know the truth?
There are many things about which people don't want to know certain kinds of truths.
That is, bad-looking truths, truths that are discouraging, truths that sort of take away
the justification for things they feel passionate about.
Do you think that's a bad aspect of human nature? That's something we should try to overcome?
Well, as we discussed, my first priority is to just tell people about it, to do the analysis,
give the cold facts of what's actually happening, and then to try to be careful about how we can
improve. So our book, The Elephant in the Brain, co-authored with Kevin Simler, is about
hidden motives in everyday life. And our first priority there is just to explain to you what
are the things that you are not looking at, that you are reluctant to look at. And many people
try to take that book as a self-help book where they're trying to improve themselves and make
sure they look at more things. And that often goes badly because it's harder to actually do that
than you think. And so we at least want you to know that this truth is available if you want
to learn about it. It's the Nietzsche, if you gaze long into the abyss, the abyss gazes into you.
Let's talk about this elephant in the brain. Amazing book. The elephant in the room is,
quote, an important issue that people are reluctant to acknowledge or address, a social taboo.
The elephant in the brain is an important but unacknowledged feature of how our mind works,
an introspective taboo. You describe selfishness and self-deception as some of the core
elephants, the elephant offspring, in the brain. Can you explain why these are the taboos in our brain that we
don't want to acknowledge to ourselves? Your conscious mind, the one that's
listening to me that I'm talking to at the moment, you like to think of yourself as the
president or king of your mind, ruling over all that you see, issuing commands that are immediately
obeyed. You are instead better understood as the press secretary of your brain. You don't make
decisions. You justify them to an audience. That's what your conscious mind is for.
You watch what you're doing and you try to come up with stories that explain what you're doing
so that you can avoid accusations of violating norms. Humans, compared to most other animals,
have norms and this allows us to manage larger groups with our morals and norms about what we
should or shouldn't be doing. This is so important to us that we needed to be constantly watching
what we were doing in order to make sure we had a good story to avoid norm violations. Many
norms are about motives. If I hit you on purpose, that's a big violation. If I hit you accidentally,
that's okay. I need to be able to explain why it was an accident and not on purpose.
Where does that need come from for your own self-preservation?
Right. Humans have norms and we have the norm that if we see anybody violating a norm,
we need to tell other people and then coordinate to just make them stop
and punish them for violating. Such punishments are strong enough and severe enough that we each
want to avoid being successfully accused of violating norms. For example, hitting someone
on purpose is a big, clear norm violation. If we do it consistently, we may be thrown out of the
group and that would mean we would die. We need to be able to convince people we are not going
around hitting people on purpose. If somebody happens to be at the other end of our fist and
their face connects, that was an accident and we need to be able to explain that.
Similarly, for many other norms humans have, we are serious about these norms and we don't want
people to violate them. If we find them violating, we're going to accuse them. Many norms have a
motive component and so we are trying to explain ourselves and make sure we have a good motive
story about everything we do, which is why we're constantly trying to explain what we're doing
and that's what your conscious mind is doing. It is trying to make sure you've got a good motive
story for everything you're doing and that's why you don't know why you really do things.
What you know is what the good story is about why you've been doing things.
That's the self-deception. You're saying that there's a machine, the actual dictator is selfish
and then you're just the press secretary who desperately doesn't want to get fired
and is justifying all of the decisions of the dictator and that's the self-deception.
Right. Now, most people actually are willing to believe that this is true in the abstract.
Our book has been classified as psychology and it was reviewed by psychologists and the basic
way that psychology referees and reviewers responded was to say this is well-known.
Most people accept that there's a fair bit of self-deception.
But they don't want to accept it about themselves directly.
Well, they don't want to accept it about the particular topics that we talk about.
So, people accept the idea in the abstract that they might be self-deceived or that they might
not be honest about various things, but that hasn't penetrated into the literatures where
people are explaining particular things like why we go to school, why we go to the doctor,
why we vote, etc. So, our book is mainly about 10 areas of life, explaining in
each area what our actual motives are. And people who study those things
have not admitted that hidden motives are explaining those particular areas.
So, they haven't taken the leap from theoretical psychology to actual public policy.
Exactly.
And economics and all that kind of stuff. Well, let me just linger on this
and bring up my old friends, Sigmund Freud and Carl Jung. So, how vast is this
landscape of the unconscious mind, the power and the scope of the dictator?
Is it only dark there? Is it some light? Is there some love?
The vast majority of what's happening in your head, you're unaware of.
So, in a literal sense, the unconscious, the aspects of your mind that you're not conscious of
is the overwhelming majority. But that's just true in a literal engineering sense.
Your mind is doing lots of low-level things and you just can't be consciously aware of all that
low-level stuff. But there's plenty of room there for lots of things you're not aware of.
But can we try to shine a light at the things we're unaware of, specifically,
now again, staying with the philosophical psychology side for a moment. Can you shine
the light on the Jungian shadow? What's going on there? What is this machine like?
What level of thoughts are happening there? Is it something that we can even interpret?
If we somehow could visualize it, is it something that's human interpretable?
Or is it just a chaos of monitoring different systems in the body, making sure you're happy,
making sure you're fed all those kind of basic forces that form abstractions on top of each other
and they're not introspective at all? We humans are social creatures. Plausibly,
being social is the main reason we have these unusually large brains. Therefore,
most of our brain is devoted to being social. And so, the things we are very obsessed with
and constantly paying attention to are, how do I look to others? What would others think of me
if they knew these various things they might learn about me? So, that's close to being fundamental
to what it means to be human, is caring what others think. Right. To be trying to present a
story that would be okay to others. We're constantly thinking, what do other
people think? So, let me ask you this question then about you, Robin Hanson, who in many places,
sometimes for fun, sometimes as a basic statement of principle, likes to disagree with what
the majority of people think. So, how are you deceiving yourself
in this task? Why is the dictator manipulating you inside your
head to be so critical? Like, there's norms. Why do you want to stand out in this way?
Why do you want to challenge the norms in this way? Almost by definition, I can't tell you what
I'm deceiving myself about. But the more practical strategy that's quite feasible is to ask about
what are typical things that most people deceive themselves about and then to own up to those
particular things. Sure. What's a good one? So, for example, I can very much acknowledge that
I would like to be well thought of, that I would be seeking attention and glory and praise
from my intellectual work and that that would be a major agenda driving my intellectual attempts.
So, if there were topics that other people would find less interesting, I might be less
interested in those for that reason. For example, I might want to find topics where
other people are interested, and I might want to go for the glory of finding a big
insight rather than a small one and maybe one that was especially surprising. That's also,
of course, consistent with some more ideal concept of what an intellectual should be.
But most intellectuals are relatively risk averse. They are in some local intellectual tradition,
and they are adding to that, and they are staying conforming to the usual assumptions and usual
accepted beliefs and practices of a particular area so that they can be accepted in that area
and treated as part of the community. But you might think for the purpose of the larger
intellectual project of understanding the world better, people should be less eager to just add
a little bit to some tradition, and they should be looking for what's neglected between the major
traditions and major questions. They should be looking for assumptions maybe we're making that
are wrong. They should be looking for things that are very surprising, things you would
have thought a priori unlikely, such that once you are convinced, you find them to be
very important and a big update. So, you could say that one motivation I might have is less
motivated to be sort of comfortably accepted into some particular intellectual community and more
willing to just go for these more fundamental long shots that should be very important if you
could find them. Which would, if you can find them, get you appreciation and respect
across a larger number of people, across the longer time span of history.
Right. So, maybe the small local community will say, you suck. You must conform,
but the larger community will see the brilliance of you breaking out of the cage of the small
conformity into a larger cage. There's always a bigger cage, and then you'll be remembered by more.
Yeah. Also, that explains your choice of colorful shirt that looks great against a black background,
so you definitely stand out. Now, of course, you could say, well, you could get all this
attention by making false claims of dramatic improvement, and then wouldn't that be much
easier than actually working through all the details? Why try to make true claims?
Let me ask the press secretary, why not? So, of course, you spoke several times about how much
you value truth and the pursuit of truth. That's a very nice narrative. Hitler and Stalin also
talked about the value of truth. Do you worry when you introspect, as broadly as all humans
might, that it becomes a drug? Being a martyr, being the person who points out that the emperor
wears no clothes, even when the emperor is obviously dressed, just to be the person who
points that out. Do you think about that?
So, I think the standards you hold yourself to are dependent on the audience you have in mind.
So, if you think of your audience as relatively easily fooled or relatively gullible, then you
won't bother to generate more complicated, deep arguments and structures and evidence
to persuade somebody who has higher standards because why bother? You can get away with something
much easier. And of course, if you are, say, a salesperson or you make money on sales, then
you don't need to convince the top few percent of the most sharp customers. You can just go for the
bottom 60% of the most gullible customers and make plenty of sales, right? So, I think
intellectuals vary. One of the main ways intellectuals vary is in who their
audience is, in their mind. Who are they trying to impress? Is it the people down the hall?
Is it the people who are reading their Twitter feed? Is it their parents? Is it their high
school teacher? Or is it Einstein and Freud and Socrates, right? So, I think those of us who are
especially arrogant, who especially think that we're a really big shot or have a chance at being a really
big shot, we're naturally going to pick the biggest-shot audience we can. We're going to
try to impress Socrates and Einstein. Is that why you're hanging out with Tyler Cowen a lot?
Sure. I mean, I try to convince him myself. Right. You know, and you might think, you know, from the
point of view of just making money or having sex or other sorts of things, this is misdirected energy,
right? Trying to impress the very highest quality minds. That's such a small sample and
they can't do that much for you anyway. Yeah. So, I might well have had more, you know,
ordinary success in life, be more popular, invited to more parties, make more money if I had targeted
a lower tier set of intellectuals with the standards they have. But for some reason,
I decided early on that Einstein was my audience or people like him. And I was going to impress them.
Yeah. I mean, you pick your set of motivations, you know, convincing, impressing Tyler Cowen is
not going to help you get laid. Trust me, I tried. All right. What are some notable
sort of effects of the elephant in the brain in everyday life? So, you mentioned
when we tried to apply that to economics, to public policy. So, when we think about medicine,
education, all those kinds of things. So, what are some things that... Well, the key thing is
medicine is much less useful health-wise than you think. So, you know, if you were focused on
your health, you would care a lot less about it. And if you were focused on other people's health,
you would also care a lot less about it. But if medicine is, as we suggest, more about showing
that you care and letting other people show that they care about you, then a lot of priority on
medicine can make sense. So, that was our very earliest discussion in the podcast. You were
talking about should you give people a lot of medicine when it's not very effective? And then
the answer then is, well, if that's the way that you show that you care about them and you really
want them to know you care, then maybe that's what you need to do if you can't find a cheaper,
more effective substitute. So, if we actually just pause on that for a little bit, how do we start
to untangle the full set of self-deception happening in the space of medicine?
So, we have a method that we use in our book that is what I recommend for people to use in all
these sorts of topics. The straightforward method is, first, don't look at yourself. Look at other
people. Look at broad patterns of behavior in other people. And then ask, what are the various
theories we could have to explain these patterns of behavior? And then just do the simple matching.
Which theory better matches the behavior they have? And the last step is to assume that's true
of you too. Don't assume you're an exception. If you happen to be an exception, that won't go so
well. But nevertheless, on average, you aren't very well positioned to judge if you're an exception.
So, look at what other people do. Explain what other people do and assume that's you too.
But also, in the case of medicine, there's several parties to consider. So, there's the individual
person that's receiving the medicine. There's the doctors that are prescribing the medicine.
And there's drug companies that are selling drugs. There are governments that have regulations,
they're lobbyists. So, you can build up a network of categories of humans in this.
And they each play their role. So, how do you sort of analyze the system at a
system scale versus at the individual scale? So, it turns out that in general, it's usually much
easier to explain producer behavior than consumer behavior. That is, the drug companies or the
doctors have relatively clear incentives to give the customers whatever they want.
And similarly, say governments in democratic countries have incentive to give the voters
what they want. So, that focuses your attention on the patient and the voter in this equation and
saying what do they want? They would be driving the rest of the system. Whatever they want,
the other parties are willing to give them in order to get paid. So, now we're looking for
puzzles in patient and voter behavior. What are they choosing and why do they choose that?
And how much exactly? And then potentially we can explain that, again returning to the producer,
by the producer being incentivized to manipulate the decision-making processes of the voter and
the consumer. Well, now in almost every industry, producers are in general happy to lie and exaggerate
in order to get more customers. This is true of auto repair as much as human body repair and medicine.
So, the differences between these industries can't be explained by the willingness of the
producers to give customers what they want or to do various things; we have to again
go to the customers. Why are customers treating body repair different than auto repair?
Yeah, and that potentially requires a lot of thinking, a lot of data collection and potentially
looking at historical data too because things don't just happen overnight. Over time there's
trends. In principle it does, but actually it's a lot easier than you might think. I think the
biggest limitation is just the willingness to consider alternative hypotheses. So, many of
the patterns that you need to rely on are actually pretty obvious, simple patterns. You just have
to notice them and ask yourself, how can I explain those? Often you don't need to look at the most
subtle, most difficult statistical evidence that might be out there. The simplest patterns are
often enough. All right. So, there's a fundamental statement about self-deception in the book.
There's the application of that, like we just did in medicine. Can you steelman the argument that
many of the foundational ideas in the book are wrong? Meaning there's two that you just made,
which is it can be a lot simpler than it looks. Can you steelman the case that it's case by case?
It's always super complicated. It's a complex system. It's very difficult to have a simple
model about. It's very difficult to introspect. And the other one is that the human brain isn't
just about self-deception. There's a lot of motivations at play. And we are able to really
introspect our own mind. And what's on the surface of the conscious mind is actually quite a good representation
of what's going on in the brain. And you're not deceiving yourself. You're able to actually
deeply think about where your mind stands and what you think about the world. And it's less
about impressing people and more about being a free thinking individual. So when a child tries
to explain why they don't have their homework assignment, they are sometimes inclined to say,
the dog ate my homework. They almost never say the dragon ate my homework. The reason is the
dragon is a completely implausible explanation. Almost always when we make excuses for things,
we choose things that are at least in some degree plausible. It could perhaps have happened.
That's an obstacle for any explanation of a hidden motive or a hidden feature of human behavior.
If people are pretending one thing while really doing another, they're usually going to pick
as a pretense something that's somewhat plausible. That's going to be an obstacle to
proving that hypothesis if you are focused on sort of the local data that a person would
typically have if they were challenged. So if you're just looking at one kid and his lack of
homework, maybe you can't tell whether his dog ate his homework or not. If you happen to know he
doesn't have a dog, you might have more confidence. You will need to have a wider range of evidence
than a typical person would when they're encountering that actual excuse in order to see
past the excuse. That will just be a general feature of it. So if I say there's
the usual story about why we go to the doctor, and then there's this other explanation, it'll
be true that you'll have to look at wider data in order to see that because people don't usually
offer excuses unless in the local context of their excuse, they can get away with it. That is,
it's hard to tell. So in the case of medicine, I have to point you to sort of larger sets of data.
But in many areas of academia, including health economics, the researchers there also
want to support the usual points of view. And so they will have selection effects in their
publications and their analysis whereby they, if they're getting a result too much contrary to
the usual point of view everybody wants to have, they will file-drawer that paper or redo the analysis
until they get an answer that's more to people's liking. So that means in the health economics
literature, there are plenty of people who will claim that in fact we have evidence that medicine
is effective. And when I respond, I will have to point you to our most reliable evidence
and ask you to consider the possibility that the literature is biased, in that when the
evidence isn't as reliable, when they have more degrees of freedom in order to get
the answer they want, they do tend to get the answer they want. But when we get to the kind
of evidence that's much harder to mess with, that's where we will see the truth be more revealed.
So with respect to medicine, we have millions of papers published in medicine over the years,
most of which give the impression that medicine is useful.
There's a small literature on randomized experiments of the aggregate effects of medicine
where there's maybe a half dozen or so papers where it would be the hardest to hide it because
it's such a straightforward experiment done in a straightforward way that it's hard to manipulate.
And that's where I will point you to to show you that there's relatively little correlation
between health and medicine. But even then, people could try to save the phenomenon and say,
well, it's not hidden motives, it's just ignorance. They could say, for example, medicine's complicated,
most people don't know the literature. Therefore, they can be excused for ignorance. They are just
ignorantly assuming that medicine is effective. It's not that they have some other motive that
they're trying to achieve. And then I will have to do a conspiracy theory analysis, asking,
well, how long has this misperception been going on? How consistently has it happened
around the world and across time? And I would have to say, look, if we're talking about, say,
a recent new product like Segway scooters or something, I could say not so many people have
seen them or used them. Maybe they could be confused about their value. If we're talking
about a product that's been around for thousands of years used in roughly the same way all across
the world and we see the same pattern over and over again, this sort of ignorance mistake just
doesn't work so well. It's also a question of how much of the self-deception is prevalent
versus foundational because there's a kind of implied thing where it's foundational to human
nature versus just a common pitfall. This is a question I have. So maybe human progress is made
by people who don't fall into the self-deception. It's a basic aspect of human nature, but then
you can escape it if you're motivated. The motivational hypotheses about the self-deceptions
are in terms of how it makes you look to the people around you. Again, the press secretary.
So the story would be most people want to look good to the people around them.
Therefore, most people present themselves in ways that help them look good to the people around them.
That's sufficient to say there would be a lot of it. It doesn't need to be 100%, right? There's
enough variety in people and in circumstances that sometimes taking a contrarian strategy can be in
the interest of some minority of the people. So I might, for example, say that that's a strategy
I've taken. I've decided that being contrarian on these things could be winning for me in that
there's room for a small number of people like me who have these sort of messages who can then get
more attention even if there's not room for most people to do that. And that can explain
sort of the variety. Similarly, you might say, look, just look at most ordinary things. Most
people would like to look good in the sense of physically. Just, you look good right now,
you're wearing a nice suit, you have a haircut, you shaved, right? I cut my own hair,
by the way. Okay, well then, all the more impressive. That's a counterargument for your
claim. So clearly, if you look at most people and their physical appearance, clearly most people
are trying to look somewhat nice, right? They shower, they shave, they comb their hair. But
we certainly see some people around who are not trying to look so nice, right? Is that a
big challenge to the hypothesis that people want to look nice? Not that much, right? We can see
in those particular people's context, more particular reasons why they've chosen to be
an exception to the more general rule. So the general rule does reveal something foundational
generally. Right. That's the way things work. Let me ask you, you wrote a blog post about the
accuracy of authorities since we were talking about this, especially in medicine. Just looking
around us, especially during this time of the pandemic, there's been a growing distrust of
authorities, of institutions, even an institution of science itself. What are the pros and cons of
authorities, would you say? So what's nice about authorities? What's nice about institutions?
And what are their pitfalls? One standard function of authority is as something you
can defer to respectfully without needing to seem too submissive or ignorant or gullible.
That is, when you're asking what should I act on or what belief should I act on,
you might be worried if I chose something too contrarian, too weird, too speculative,
that would make me look bad. So I would just choose something very conservative. So maybe an
authority lets you choose something a little less conservative because the authority is your
authorization. The authority will let you do it. And somebody says, why did you do that thing? And
they say, the authority authorized it. The authority tells me I should do this. Why aren't you doing
it? Right. So the authority is often pushing for the conservative option? Well, the authority can
do more. I mean, so for example, we just think about, I don't know, in a pandemic even, right?
You could just think, I'll just stay home and close all the doors. Or I'll just ignore it,
right? You could just think of some very simple strategy that might be defensible
if there were no authorities. But authorities might be able to
know more than that. They might be able to look at some evidence, draw a more context dependent
conclusion, declare it as the authority's opinion, and then other people might follow that. And that
could be better than doing nothing. So, you mentioned WHO, the world's most beloved organization.
So, you know, this is me speaking in general, WHO and CDC have been kind of, depending on degrees and
details, just not behaving as I would have imagined in the best possible evolution of
human civilization, authorities should act. They seem to have failed in some fundamental
way in terms of leadership in a difficult time for our society. Can you say what are the pros and
cons of this particular authority? So again, if there were no authorities whatsoever, no accepted
authorities, then people would sort of have to randomly pick different local authorities
who would conflict with each other, and then they'd be fighting each other about that, or just not believe
anybody and just do some initial default action that you would always do without responding to
context. So the potential gain of an authority is that they could know more than just basic ignorance.
And if people followed them, they could both be more informed than ignorance and all doing the same
thing. So they're each protected from being accused or complained about. That's the idea
of an authority. That would be the good. What's the con? Okay. What's the negative? How does that
go wrong? The con is that if you think of yourself as the authority and ask what's my best strategy
as an authority, it's unfortunately not to be maximally informative. So you might think the
ideal authority would not just tell you more than ignorance, it would tell you as much as possible.
Okay. It would give you as much detail as you could possibly listen to and manage to assimilate.
And it would update that as frequently as possible or as frequently as you were able to
listen and assimilate. And that would be the maximally informative authority. The problem is
there's a conflict between being an authority or being seen as an authority and being maximally
informative. That was the point of my blog post that you're pointing out to here. That is,
if you look at it from their point of view, they won't long remain the perceived authority if they
are too incautious about how they use that authority. And one of the ways to be incautious
would be to be too informative. Okay. That's still in the pro column for me, because you're talking
about tensions that are very data-driven and very honest. And I would hope that authorities
struggle with that, with how much information to provide to people to maximize outcomes.
Now, I'm generally somebody that believes more information is better because I trust in the
intelligence of people. But I'd like to mention a bigger con on authorities, which is the human
question. This comes back to global government and so on. It's that there are humans that sit in
chairs during meetings in those authorities, and they have different titles. Humans form
hierarchies. And sometimes those titles get to your head a little bit. And you start to want to
think, how do I preserve my control over this authority, as opposed to thinking through like,
what is the mission of the authority? What is the mission of WHO and other such organizations?
And how do I maximize the implementation of that mission? You start to think, well, I kind of
like sitting in this big chair at the head of the table. I'd like to sit there for another few years
or better yet, I want to be remembered as the person who in a time of crisis was at the head
of this authority and did a lot of good things. So you stop trying to do good, where good is defined
by the mission of the authority, and you start to try to carve a narrative, to manipulate the
narrative. First, in the meeting room, with everybody around you, just a small little story you tell
yourself, then the interns, the managers, throughout the whole hierarchy of the company. Okay, once
everybody in the company or in the organization believes this narrative, now you start to control
the release of information, not because you're trying to maximize outcomes, but because you're
trying to maximize the effectiveness of the narrative that you are truly a great representative of
this authority in human history. And I just feel like those human forces, whenever you have an
authority, it starts getting to people's heads. For me as a scientist, one of the most
disappointing things to see during the pandemic was colleagues of mine using their authority
to roll their eyes, to dismiss other human beings, just because they got a PhD, just because they're
an assistant, associate, or full professor, just because they are deputy head of X organization, NIH,
whatever the heck the organization is, just because they got an award of some kind. And
at a conference, they won a best paper award seven years ago, and then somebody shook their
hand and gave them a medal, maybe it was a president. And it's been 20, 30 years that people
have been patting them on the back saying how special they are, especially when they're controlling
money and getting sucked up to by other scientists who really want the money, in a
self-deception kind of way. They don't actually really care about your performance. And all of
that gets to your head. And no longer are you the authority that's trying to do good and lessen
the suffering in the world. You become an authority that just wants to maximally preserve
itself, sitting on a throne of power. So this is core to sort of what it is to be
an economist. I'm a professor of economics. There you go with the authority again. No. So it's...
Just joking. We often have a situation where we see a world of behavior
and then we see ways in which particular behaviors are not sort of maximally socially useful.
And we have a variety of reactions to that. So one kind of reaction is to sort of morally
blame each individual for not doing the maximally socially useful thing
under perhaps the idea that people could be identified and shamed for that and maybe induced
into doing the better thing if only enough people were calling them out on it. But another way to
think about it is to think that people sit in institutions with certain stable institutional
structures and that institutions create particular incentives for individuals and that individuals
are typically doing whatever is in their local interest in the context of that institution.
And then perhaps to blame individuals less for winning their local institutional game, and to
blame the world more for having the wrong institutions. So economists are often wondering what
are institutions we could have instead of the ones we have and which of them might promote
better behavior. And this is a common thing we do all across human behavior: to think of
what are the institutions we're in and what are the alternative variations we could imagine and
then to say which institutions would be most productive. I would agree with you that our
information institutions, that is, the institutions by which we collect information and aggregate it
and share it with people, are especially broken, in the sense of being far from the ideal of what would be
the most cost effective way to collect and share information. But then the challenge is to try
to produce better institutions. And as an academic I'm aware that academia is particularly broken
in the sense that we give people incentives to do research that's not very interesting or important
because basically they're rewarded for being impressive, and we actually care more about whether academics
are impressive than whether they're interesting or useful. And I'm happy to go into detail
with lots of different known institutions and their known institutional failings, ways in which
those institutions produce incentives that are mistaken. And that was the point of the post
we started with talking about the authorities. If I need to be seen as an authority, that's
at odds with my being informative, and I might choose to be the authority instead of being
informative because those are my institutional incentives. And if I may, I'd like to, given that
beautiful picture of incentives and individuals that you just painted,
let me just apologize for a couple of things. One, I often put too much blame on leaders of
institutions versus the incentives that govern those institutions. And as a result of that,
I've been, I believe, too critical of Anthony Fauci, too emotional about my criticism of
Anthony Fauci. And I'd like to apologize for that because I think there's a deep, there's deeper
truths to think about. There's deeper incentives to think about. That said, I do sort of, I'm a
romantic creature by nature. I romanticize Winston Churchill and I, when I think about Nazi Germany,
I think about Hitler more than I do about the individual people of Nazi Germany. You think
about leaders, you think about individuals, not necessarily the parameters, the incentives that
govern the system, because it's harder. It's harder to think deeply about the models
from which those individuals arise, but that's the right thing to do. But also, I don't want to
apologize for being emotional sometimes and being... I'm happy to blame the individual leaders in the
sense that I might say, well, you should be trying to reform these institutions, not just be there
to get promoted and look good at the top. Maybe I can blame you for your motives and your
priorities in there, but I can understand why the people at the top would be the people who are
selected for having the priority of primarily trying to get to the top. I get that.
Can I maybe ask you about universities in particular? Like science, they've received
an increase in distrust overall as an institution, which breaks my heart because I think science is
beautiful as a, not maybe not as an institution, but as one of the things, one of the journeys that
humans have taken on. The other one is university. I think university is actually a place, for me at
least, in the way I see it, is a place of freedom of exploring ideas, scientific ideas, engineering
ideas, more than a corporate, more than a company, more than a lot of domains in life.
It's not just in its ideal, but it's in its implementation, a place where you can be a kid
for your whole life and play with ideas. I think, with all the criticism that universities
currently receive, I don't think that criticism is representative of universities.
They focus on very anecdotal evidence of particular departments, particular people,
but I still feel like there's a lot of room for freedom of thought, at least at MIT, at least in
the fields I care about, in particular kinds of science, particular kinds of technical fields,
mathematics, computer science, physics, engineering, robotics, artificial intelligence.
This is a place where you get to be a kid, yet there is bureaucracy that's rising up.
There's more rules, there's more meetings, and there's more administration with
PowerPoint presentations, which, to me... you should be more of a renegade explorer of
ideas, and meetings destroy, they suffocate, that radical thought that happens when you're an
undergraduate student and you can do all kinds of wild things when you're a graduate student.
Anyway, all that to say, you've thought about this aspect too. Is there something
positive, insightful you could say about how we can make for better universities in the decades
to come, this particular institution? How can we improve them?
I hear that centuries ago, many scientists and intellectuals were aristocrats. They had time
and could, if they chose, be intellectuals. That's a feature of the combination that they
had some source of resources that allowed them leisure, and that the kind of competition they
faced among aristocrats allowed that sort of self-indulgence or self-pursuit, at least at
some point in their lives. The analogous observation is that university professors often have sort of
the freedom and space to do a wide range of things. I am certainly enjoying that as a tenured professor.
You're a really, sorry to interrupt, a really good representative of that. Just the exploration
you're doing, the depth of thought that most people are afraid to do, the kind of broad
thinking that you're doing, which is great. The fact that that can happen is a combination of
these two things analogously. One is that we have fierce competition to become a tenured
professor, but then once you become tenured, we give you the freedom to do what you like.
That's a happenstance. It didn't have to be that way and in many other walks of life, even though
people have a lot of resources, etc., they don't have that kind of freedom set up. I'm kind of
lucky that tenure exists and that I'm enjoying it. I can't be too enthusiastic about this unless
I can approve of the source of the resources that's paying for all this. For the aristocrat,
if you thought they stole it in war or something, you wouldn't be so pleased, whereas if you thought
they had earned it or their ancestors had earned this money that they were spending as an aristocrat,
then you could be more okay with that. For universities, I have to ask, where are the
main sources of resources that are going to the universities and are they getting their money's
worth or are they getting a good value for that payment? First of all, there are students.
The question is, are students getting good value for their education? Each person is getting value
in the sense that they are identified and shown to be a more capable person, which is then worth
more salary as an employee later, but there is a case for saying there's a big waste to the system
because we aren't actually changing the students or educating them. We're more sorting them
or labeling them. That's a very expensive process to produce that outcome and part of the expense
is the freedom I get from tenure. I feel like I can't be too proud of that because it's basically
a tax on all these young students to pay this enormous amount of money in order to be labeled
as better, whereas I feel like we should be able to find cheaper ways of doing that.
The other main customer is researcher patrons like the government or other foundations. Then
the question is, are they getting their money's worth out of the money they're paying for research to
happen? My analysis is they don't actually care about the research progress. They are mainly
buying an affiliation with credentialed impressiveness on the part of the researchers. They mainly pay
money to researchers who are impressive and have high impressive affiliations, and they don't really
much care what research project happens as a result. Is that cynical? There's a deep truth
to that cynical perspective. Is there a less cynical perspective that they do care
about the long-term investment into the progress of science and humanity?
They might personally care, but they're stuck in an equilibrium
wherein, basically, at most foundations, like governments or the Ford Foundation,
the individuals there are rated based on the prestige they bring to that organization.
Yeah. Even if they might personally want to produce more intellectual progress,
they are in a competitive game where they don't have tenure, and they need to produce this
prestige. Once they give grant money to prestigious people, that is the thing that shows that they
have achieved prestige for the organization, and that's what they need to do in order to retain
their position. You do hope that there's a correlation between prestige and actual competence?
Of course, there is a correlation. The question is just, could we do this better some other way?
Yes. I think it's almost, I think it's pretty clear we could. What is harder to do is move the
world to a new equilibrium where we do that instead. What are the components of the better
ways to do it? Is it money? The sources of money and how the money is allocated to give the individual
researchers freedom? Years ago, I started studying this topic exactly because this was my issue,
and this was many decades ago now, and I spent a long time, and my best guess still is prediction
markets, betting markets. If you as a research patron want to know the answer to a particular
question, like what's the mass of the electron neutrino, then what you can do is just subsidize
a betting market in that question, and that will induce more research into answering that question
because the people who then answer that question can then make money in that betting market with
the new information they gain. That's a robust way to induce more information on a topic. If you
want to induce an accomplishment, you can create prizes, and there's, of course, a long history
of prizes to induce accomplishments. We moved away from prizes, even though we once used them
far more often than we do today, and there's a history to that. For the customers who want to
be affiliated with impressive academics, which is what most of the customers want, students,
journalists, and patrons, I think there's a better way of doing that, which I just wrote
about in my second most recent blog post. Can you explain? Sure. What we do today is we take
sort of acceptance by other academics recently as our best indication of their deserved prestige.
That is, recent publications, recent job affiliation, institutional affiliations,
recent invitations to speak, recent grants. We are today taking other impressive academics
recent choices to affiliate with them as our best guesstimate of their prestige.
I would say we could do better by creating betting markets in what the distant future will judge to
have been their deserved prestige, looking back on them. I think most intellectuals, for example,
think that if we looked back two centuries, say, at intellectuals from two centuries ago,
and tried to look in detail at their research and how it influenced future research and which
path it was on, we could much more accurately judge their actual deserved prestige. That is,
who was actually on the right track, who actually helped, which will be different than what people
at the time judged using the immediate indications of the time at which position they had or which
publications they had or things like that. In this way, if you think from the perspective
of multiple centuries, you would higher prioritize true novelty, you would disregard the temporal
proximity, like how recent the thing is, and you would think like, what is the brave, the bold,
the big, a novel idea that this- And you would actually-
You would be able to rate that because you could see the path with which ideas took,
which things had dead ends, which led to what other followings. You could, looking back centuries
later, have a much better estimate of who actually had what long-term effects on intellectual
progress. My proposal is, we actually pay people in several centuries to do this historical analysis,
and we have betting, we have prediction markets today, where we buy and sell assets,
which will later pay off in terms of those final evaluations. Now, we'll be inducing people
today to make their best estimate of those things by actually looking at the details of people and
setting the prices accordingly. And so my proposal would be, we rate people today on those prices
today. So instead of looking at their list of publications or affiliations, you look at the
actual price of assets that represent people's best guess of what the future will say about them.
That's brilliant. So this concept of idea futures, can you elaborate what this would entail?
I've been elaborating two versions of it here. So one is, if there's a particular question,
say the mass of the electron neutrino, and what you as a patron want to do is get an answer to
that question, then what you would do is subsidize a betting market in that question under the
assumption that eventually we'll just know the answer and we can pay off the bets that way.
And that is a plausible assumption for many kinds of concrete intellectual questions like,
what's the mass of the electron neutrino? In this hypothetical world that you're constructing,
that may be a real world, do you mean literally financial? Yes, literal. Very literal. Very cash.
Very direct and literal. Yes. So the idea would be research labs would be for profit.
They would have as their expense, paying researchers to study things, and then their
profit would come from using the insights the researchers gain to trade in these financial
markets. Just like hedge funds today make money by paying researchers to study firms and then
making their profits by trading on that insight in the ordinary financial market.
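The subsidized betting market described here can be made concrete with the market scoring rule Hanson himself proposed for this purpose, the logarithmic market scoring rule (LMSR). The sketch below is illustrative, not a full exchange; the liquidity parameter `b` is an arbitrary choice, and it bounds the patron's total subsidy at `b * ln(n)` for `n` outcomes:

```python
import math

def lmsr_cost(q, b):
    # Market maker's cost function: C(q) = b * ln(sum_i exp(q_i / b)),
    # where q_i is the number of shares sold so far on outcome i.
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_price(q, b, i):
    # Instantaneous price (the market's current probability) of outcome i.
    total = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, b, i, shares):
    # What a trader pays to buy `shares` of outcome i: C(q') - C(q).
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

The patron's role is simply to fund the market maker's bounded worst-case loss; traders with better information profit by moving the prices toward the truth, which is exactly the incentive to do the research.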
And the market would, if it's efficient, would be able to become better and better predicting
the powerful ideas that the individual is able to generate.
The variance around the mass of the electron neutrino would decrease with time as we learn
the value of that parameter better, and of any other parameters that we want to estimate.
You don't think those markets would also respond to recency of prestige and all those kinds of
things? Well, they would respond, but the question is whether they respond incorrectly,
and if you think they're doing it incorrectly, you have a profit opportunity where you can go
fix it. So we'd be inviting everybody to ask whether they can find any biases or errors in
the current ways in which people are estimating these things from whatever clues they have.
Right. There's no big incentive for a correction mechanism in academia currently. It's the safe
choice to go with the prestige and there's no... Even if you privately think that the prestige
is overrated. Even if you think strongly that it's overrated. Still, you don't have an incentive
to defy that publicly. You're going to lose a lot unless you're a contrarian that writes brilliant
blogs and then you could talk about or have... Right. This was my initial concept
of having these betting markets on these key parameters. What I then realized over time was
that that's more what people pretend to care about. What they really mostly care about is just who's
how good. That's what most of the system is built on is trying to rate people and rank them. I
designed this other alternative based on historical evaluation centuries later,
just about who's how good because that's what I think most of the customers really care about.
Customers. I like the word customers here. Humans. Right. Well, every major area of life
which has specialists who get paid to do that thing must have some customers from elsewhere who
are paying for it. Well, who are the customers for the mass of the neutrino? Yes, I understand,
in a sense, people who are willing to pay for a thing. That's an important thing to understand
about anything: who are the customers, and what's the product? Like medicine, education, academia,
military, etc. That's part of the hidden motives analysis. Often people have a thing they say
about what the product is and who the customer is and maybe you need to dig a little deeper
to find out what's really going on. Or a lot deeper. You've written that you seek out quote
view quakes. You, as an intelligent black box word generating machine, are able to generate
a lot of sexy words. I like it. I love it. View quakes, which are insights which dramatically
changed my worldview, your worldview. You write, I loved science fiction as a child,
studied physics and artificial intelligence for a long time each, and now study economics and
political science, all fields full of such insights. Let me ask, what are some view quakes or a
beautiful surprising idea to you from each of those fields? Physics, AI, economics, political
science? I know it's a tough question. Something that springs to mind about physics, for example,
that just is beautiful. Right from the beginning, say, special relativity was a big surprise.
Most of us have a simple concept of time and it seems perfectly adequate for everything we've
ever seen. To have it explained to you that you need to have a mixture concept of time and space
where you put it into the space-time construct, how it looks different from different perspectives,
that was quite a shock. That was such a shock that it makes you think, what else do I know
that isn't the way it seems? Quantum mechanics is certainly another enormous shock
from that point of view. You have this idea that there's a space and then there's
particles at points and maybe fields in between. Quantum mechanics is just a whole different
representation. It looks nothing like what you would have thought as the basic representation
of the physical world. That was quite a surprise. What would you say is the catalyst for the
view quake in theoretical physics in the 20th century? Where does that come from? The interesting
thing about Einstein, it seems like a lot of that came from almost thought experiments. It almost
wasn't experimentally driven. Actually, I don't know the full story of quantum mechanics. How
much of it is experiment like where? If you look at the full trace of idea generation there,
of all the weird stuff that falls out of quantum mechanics, how much of that was the experimentalist?
How much was it the theoreticians? Usually, in theoretical physics, the theories lead the way.
So maybe can you elucidate? What is the catalyst for these?
The remarkable thing about physics and about many other areas of academic intellectual life is that
it just seems way over determined. That is, if it hadn't been for Einstein or if it hadn't been for
Heisenberg, certainly within a half a century, somebody else would have come up with essentially
the same things. Is that something you believe or is that something? Yes. So I think when you look
at just the history of physics and the history of other areas, some areas like that, there's just
this enormous convergence that the different kind of evidence that was being collected was
so redundant in the sense that so many different things revealed the same things that eventually
you just have to accept it because it just gets obvious. So if you look at the details,
of course, Einstein did it before somebody else. And it's well worth celebrating Einstein for that.
And we, by celebrating the particular people who did something first or came across something first,
we are encouraging all the rest to move a little faster, to try to push us all a little faster,
which is great. But I still think we would have gotten roughly to the same place within
a half century. So sometimes people are special because of how much longer it would have taken.
So some people say general relativity would have taken longer without Einstein than other things.
I mean, Heisenberg quantum mechanics, I mean, there were several different
formulations of quantum mechanics all around the same few years means no one of them made
that much of a difference. We would have had pretty much the same thing regardless of which
of them did it exactly when. Nevertheless, I'm happy to celebrate them all. But this is a choice
I make in my research. That is when there's an area where there's lots of people working together,
who are sort of scooping each other and getting a result just before somebody else does, you ask,
well, how much of a difference would I make there? At most, I could make something happen
a few months before somebody else. And so I'm less worried about them missing things. So when
I'm trying to help the world like doing research, I'm looking for neglected things. I'm looking for
things that nobody's doing. If I didn't do it, nobody would do it. Nobody would do it.
Or at least for a long time. In the next 10, 20 years kind of thing. Exactly.
Same with general relativity, just, you know, who would do it. It might take another 10,
20, 30, 50 years. So that's the place where you can have the biggest impact is finding the
things that nobody would do unless you did them. And then that's when you get the big
view quake, the insight. So what about artificial intelligence? Would it be the EMs, the emulated
minds? What idea, whether that struck you in the shower one day or the you just-
Clearly, the biggest view quake in artificial intelligence is the realization of just how
complicated our human minds are. So most people who come to artificial intelligence from other
fields or from relative ignorance, a very common phenomenon, which you must be familiar with,
is that they come up with some concept and then they think that must be it. Once we implement
this new concept, we will have it. We will have full human level or higher artificial
intelligence, right? And they're just not appreciating just how big the problem is,
how long the road is, just how much is involved. Because that's actually hard to appreciate.
When we just think, it seems really simple. And studying artificial intelligence, going
through many particular problems, looking at each problem, all the different things you need
to be able to do to solve a problem like that, makes you realize all the things your minds are
doing that you are not aware of. That's that vast subconscious that you're not aware of.
That's the biggest view quake from artificial intelligence. By far, for most people who study
artificial intelligence, it's to see just how hard it is.
I think that's a good point. But I think it's a very early view quake. It's when the
Dunning-Kruger crashes hard. It's the first realization that humans are actually quite
incredible. The human mind, the human body is quite incredible.
There's a lot of different parts to it. But then, see, it's already been so long for me
that I've experienced that view quake that for me, I now experience the view quakes of,
holy shit, this little thing is actually quite powerful, like neural networks. I'm amazed.
Because you've become almost cynical after that first view quake of, this is so hard,
like evolution did some incredible work to create the human mind.
But then you realize, just like you have, you've talked about a bunch of simple models,
that simple things can actually be extremely powerful, that maybe emulating of the human mind
is extremely difficult. But you can go a long way with a large neural network. You can go a
long way with a dumb solution. It's that Stuart Russell thing with the reinforcement learning.
Holy crap, you can go quite a long way with a simple thing.
But we still have a very long road to go.
I can't, I refuse to sort of know. The road is full of surprises. So long is an interesting,
like you said, with the six hard steps that humans have to take to arrive at where we are
from the origin of life on earth. So it's long maybe in the statistical improbability
of the steps that have to be taken. But in terms of how quickly those steps could be taken,
I don't know if my intuition says it's, if it's hundreds of years away, or if it's
a couple of years away, I prefer to measure... Pretty confident it's at least a decade.
And, well, we can be mildly confident it's at least three decades.
I can steelman either direction. I prefer to measure that journey in Elon Musks.
That's the new, we don't get Elon Musk very often. So that's a long time scale.
For now, I don't know, maybe you can clone or maybe multiply or even know what Elon Musk,
what that is, what is that? What is?
That's a good question. Exactly. Well, that's an excellent question.
How does that fit into the model of the three parameters that are required for becoming a
grabby alien civilization? That's the question of how much any individual
makes in the long path of civilization over time. Yes. And it's a favorite topic of historians
and people to try to focus on individuals and how much of a difference they make. And certainly,
some individuals make a substantial difference in the modest term,
like certainly without Hitler being Hitler in the role he took, European history would have
taken a different path for a while there. But if we're looking over like many centuries,
longer term things, most individuals do fade in their individual influence.
So, maybe just... You and Einstein.
You and Einstein. No matter how sexy your hair is, you will also be forgotten in long arc of
history. So, you said at least 10 years. So, let's talk a little bit about this AI point of where,
how we achieve, how hard is the problem of solving intelligence by engineering artificial
intelligence that achieves human level, human-like qualities that we associate with intelligence?
How hard is this? What are the different trajectories that take us there?
One way to think about it is in terms of the scope of the technology space you're talking about.
So, let's take the biggest possible scope, all of human technology, right? The entire human economy.
So, the entire economy is composed of many industries, each of which have many products
with many different technologies supporting each one. At that scale, I think we can accept that
most innovations are a small fraction of the total; that is, usually there's relatively gradual
overall progress. And individual innovations that have a substantial effect on that total are
rare and their total effect is still a small percentage of the total economy, right? There's
very few individual innovations that made a substantial difference to the whole economy,
right? What are we talking? Steam engine, shipping containers, a few things.
Shipping containers deserves to be up there with steam engines, honestly.
Can you say exactly why shipping containers revolutionized shipping? Shipping is very
important. But placing that at shipping containers. So, you're saying you wouldn't have some of the
magic of the supply chain, all that, without shipping containers?
Made a big difference. Absolutely. Interesting. That's something we're looking at.
We shouldn't take that tangent although I'm tempted to. But anyway, so there's a few, just a
few innovations. Right. So, at the scale of the whole economy, right? Now, as you move down to
a much smaller scale, you will see individual innovations having a bigger effect, right? So,
if you look at, I don't know, lawn mowers or something, I don't know about the innovations
lawn mower, but there are probably like steps where you just had a new kind of lawn mower and
that made a big difference to mowing lawns because you're focusing on a smaller part of the whole
technology space, right? So, and sometimes like military technology, there's a lot of
military technologies, a lot of small ones, but every once in a while, a particular military
weapon makes a big difference. But still, even so, mostly overall, they're making
modest differences to something that's increasing relatively, say like US military is the strongest
in the world consistently for a while. No one weapon in the last 70 years has made a big difference
in terms of the overall prominence of the US military, right? Because that's just saying,
even though every once in a while, even the recent Soviet hyper missiles or whatever they are,
they aren't changing the overall balance dramatically, right? So, when we get to AI,
now I can frame the question, how big is AI? Basically, if so, one way of thinking about
AI is it's just all mental tasks. And then you ask what fraction of tasks are mental tasks? And
then I go, a lot. And then if I think of AI as like half of everything, then I think, well,
it's got to be composed of lots of parts where anyone innovation is only a small impact, right?
Now, if you think, no, no, no, AI is like AGI. And then you think AGI is a small thing, right?
There's only a small number of key innovations that will enable it. Now, you're thinking there
could be a bigger chunk that you might find that would have a bigger impact. So, the way I would
ask you to frame these things in terms of the chunkiness of different areas of technology,
in part, in terms of how big they are. Now, if you take 10 chunky areas and you add them
together, the total is less chunky. Yeah. But don't you, are you able, until you solve
the fundamental core parts of the problem, to estimate the chunkiness of that problem?
Well, if you have a history of prior chunkiness, that could be your best estimate for future
chunkiness. So, for example, I mean, even at the level of the world economy, right,
we've had this, what, 10,000 years of civilization, well, that's only a short time,
you might say, oh, that doesn't predict future chunkiness. But it looks relatively steady and
consistent. We can say even in computer science, we've had seventy years of computer science,
we have enough data to look at chunkiness of computer science. Like, when were there algorithms
or approaches that made a big chunky difference and how large a fraction of those that was that?
And I'd say mostly in computer science, most innovation has been relatively small chunks,
the bigger chunks have been rare. Well, this is the interesting thing,
this is about AI and just algorithms in general, is PageRank. So, Google's, right. So,
sometimes it's a simple algorithm that by itself is not that useful, but the scale of context.
And in the context that's scalable, depending on the, yeah, depending on the context,
is all of a sudden the powers revealed. And there's something, I guess that's the nature of
chunkiness is that you get things that can reach a lot of people simply can be quite chunky.
So, one standard story about algorithms is to say algorithms have a fixed cost plus a marginal cost.
And so, in history, when you had computers that were very small, you tried all the algorithms
that had low fixed costs. And you look for the best of those. But over time, as computers got
bigger, you could afford to do larger fixed costs and try those. And some of those had
more effective algorithms in terms of their marginal cost. And that, in fact, that roughly
explains the long-term history where, in fact, the rate of algorithmic improvement is about the
same as the rate of hardware improvement, which is a remarkable coincidence. But it would be
explained by saying, well, there's all these better algorithms you can't try until you have
a big enough computer to pay the fixed cost of doing some trials to find out if that algorithm
actually saves you on the marginal cost. And so, that's an explanation for this relatively
continuous history where, so we have a good story about why hardware is so continuous, right?
And you might think, why would software be so continuous with the hardware? But if there's a
distribution of algorithms in terms of their fixed costs, and it's, say, spread out a little
wide log-normal distribution, then we could be sort of marching through that log-normal distribution,
trying out algorithms with larger fixed costs, and finding the ones that have lower marginal costs.
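The fixed-cost-plus-marginal-cost story above can be sketched as a simple selection rule; the algorithm table, budget, and workload below are hypothetical numbers, not anything from the conversation:

```python
def best_algorithm(algorithms, fixed_cost_budget, workload):
    # Each algorithm is a (fixed_cost, marginal_cost) pair. A small machine
    # can only afford to trial low-fixed-cost algorithms; as hardware grows,
    # high-fixed-cost / low-marginal-cost algorithms come within reach.
    affordable = [a for a in algorithms if a[0] <= fixed_cost_budget]
    # Among the affordable ones, pick the cheapest total cost for the workload.
    return min(affordable, key=lambda a: a[0] + a[1] * workload)

# Hypothetical menu: a cheap-to-build but slow algorithm, and an expensive
# one with a much lower per-item cost.
algos = [(1, 100), (1000, 1)]
```

As the affordable fixed-cost frontier marches up the distribution, the selected algorithm's marginal cost falls, which is one way software improvement could end up tracking hardware improvement.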
So, would you say AGI, human level, AI, even EM,
EM, emulated minds, is chunky? Like a few breakthroughs can take this.
So, an M is by its nature chunky in the sense that if you have an emulated brain and you're
25% effective at emulating it, that's crap. That's nothing. Okay. You pretty much need
to emulate a full human brain. Is that obvious? Is that obvious?
I think it's pretty obvious. I'm talking about like, you know, so the key thing is,
you're emulating various brain cells, and so you have to emulate the input-output pattern
of those cells. So, if you get that pattern somewhat close, but not close enough, then the whole
system just doesn't have the overall behavior you're looking for, right?
But it could have functionally some of the power of the overall system.
So, there'd be some threshold. The point is, when you get close enough,
then it goes over the threshold. It's like taking a computer chip and deleting every one
percent of the gates, right? No, that's very chunky. But the hope is that the emulating the
human brain, I mean, the human brain itself is not- Right. So, it has a certain level of redundancy
and a certain level of robustness. And so, there's some threshold when you get close to that level
of redundancy or robustness, then it starts to work. But until you get to that level,
it's just going to be crap, right? Yeah.
It's going to be just a big thing that isn't working for us. So, we can be pretty sure that
emulations is a big chunk in an economic sense, right? At some point, you'll be able to make one
that's actually effective and able to substitute for humans. And then, that will be this huge
economic product that people will try to buy like crazy. Now,
if you bring a lot of value to people's lives, they'll be willing to pay for it.
Right. But it could be that the first emulation costs a billion dollars each, right?
And then, we have them, but we can't really use them. They're too expensive. And then,
the cost slowly comes down. And now, we have less of a chunky adoption, right?
That as the cost comes down, then we use more and more of them in more and more contexts.
And that's a more continuous curve. So, it's only if the first emulations are relatively cheap
that you get a more sudden disruption to society. And that could happen if the algorithm is the
last thing you figure out how to do or something.
What about robots that capture some magic in terms of social connection?
The robots, like we have a robot dog on the carpet right there.
Robots that are able to capture some magic of human connection
as they interact with humans, but are not emulating the brain. What about those? How far away?
So, we're thinking about chunkiness or distance now. So, if you ask how chunky is the task of
making an emulatable robot or something. Chunkiness and time are correlated?
Right. But it's about how far away it is or how suddenly it would happen.
Chunkiness is how suddenly and difficulty is just how far away it is. But it could be a
continuous difficulty. It could just be far away, but we'll slowly, steadily get there. Or there
could be these thresholds where we reach a threshold and suddenly we can do a lot better.
Yeah. That's a good question for both. I tend to believe that all of it, not just the M,
but AGI too is chunky. And human level intelligence. So, my best guess is...
Embodied in robots is also chunky. Because the history of computer science and chunkiness so far
seems to be my rough best guess for the chunkiness of AGI. That is, it is chunky.
It's modestly chunky. Not that chunky.
Right. Because our ability to use computers to do many things in the
economy has been moving relatively steadily. Overall, in terms of our use of computers in
society, they have been relatively steadily improving for 70 years.
No, but I would say that's the hard way. Okay.
Okay. I would have to really think about that. Because neural networks are quite surprising.
Sure. But every once in a while, we have a new thing that's surprising. But if you stand back,
we see something like that every 10 years or so. Something new.
Improvement is gradual. That has a big effect. So, moderately chunky.
Yeah. The history of the level of disruption we've seen in the past would be a rough estimate
of the level of disruption in the future. Unless the future is, we're going to hit a chunky territory
much chunkier than we've seen in the past. Well, I do think there's... It's like
a Kuhnian revolution type. It seems like the data, especially on AI, is difficult to
reason with because it's so recent. It's such a recent field in this space.
AI has been around for 50 years.
I mean, 50, 60, 70, 80 years being recent. Okay.
It's enough time to see a lot of trends.
A lot of trends. A few trends. I think the internet, computing,
there's really a lot of interesting stuff that's happened over the past 30 years that
I think the possibility of revolutions is likelier than it was in the...
I think for the last 70 years, there have always been a lot of things that look like
they had a potential for revolution. So, we can't reason well about this.
Fair enough. I mean, we can reason well by looking at the past trends. I would say
the past trend is roughly your best guess for the future.
No, but if I look back at the things that might have looked like revolutions in the
70s and 80s and 90s, they are less like the revolutions that appear to be happening now
or the capacity of revolutions that appear to be there now.
First of all, there's a lot more money to be made.
So, there's a lot more incentive for markets to do a lot of kind of innovation,
it seems like in the AI space. But then again, there's a history of winters and summers and so
on. So, maybe we're just like riding a nice wave right now.
One of the biggest issues is the difference between impressive demos and commercial value.
Yes.
So, often through the history of AI, we saw very impressive demos
that never really translated much into commercial value.
Somebody who works on and cares about autonomous and semi-autonomous vehicles,
tell me about it. So, and there again, we return to the number of Elon Musks per Earth,
per year generated. That's the EM. Coincidentally, same initials as the em.
Very suspicious, very suspicious. We're going to have to look into that.
All right, two more fields that I would like to force and twist your arm to look for view
quakes and for beautiful ideas, economics. What is a beautiful idea to you about economics?
You've mentioned a lot of them.
Sure. So, as you said before, there's going to be the first view quake most people encounter
that makes the biggest difference on average in the world, because that's the only thing
most people ever see is the first one. And so, with AI, the first one is just how big the problem
is. But once you get past that, you'll find others. Certainly, for economics, the first one is just
the power of markets. You might have thought it was just really hard to figure out how to
optimize in a big complicated space, and markets just do a good first pass for an awful lot of
stuff. And they are really quite robust and powerful. And that's just quite the view quake,
where you just say, if you want to get in the ballpark, just let a market handle it and step
back. And that's true for a wide range of things. It's not true for everything, but it's a very
good first approximation. And most people's intuitions for how they should limit markets
are actually messing them up. They're that good in sense, right? Most people, when you go,
I don't know if we want to trust that. Well, you should be trusting that.
What about, what are markets? Just a couple of words.
So the idea is, if people want something, then let other companies form to try to supply that
thing, let those people pay for their cost of whatever they're making, and try to offer that
product to those people, let many people, many such firms enter that industry, and let the
customers decide which ones they want. And if the firm goes out of business, let it go bankrupt,
and let other people invest in whichever ventures they want to try to attract customers to their
version of the product. And that just works for a wide range of products and services.
And through all of this, there's a free exchange of information too. There's a hope that there's
no manipulation of information and so on. Even when those things happen, still just the simple
market solution is usually better than the things you'll try to do to fix it.
Than the alternative. That's a viewquake. It's surprising. It's not what you would have initially thought. That's one of the great, I guess, inventions of human civilization: trust the markets. Now, another viewquake that I learned in my research, that's not all of economics,
but something more specialized is the rationality of disagreement. That is,
basically people who are trying to believe what's true in a complicated situation would not actually
disagree. And of course, humans disagree all the time. So it was quite the striking fact
for me to learn in grad school that actually rational agents would not knowingly disagree.
And so that makes disagreement more puzzling and it makes you less willing to disagree.
Humans are to some degree rational and are able to...
Their priorities are different than just figuring out the truth, which might not be
the same as being irrational. That's another tangent that could take an hour.
In the space of human affairs, political science, what is a beautiful, foundational,
interesting idea to you, a viewpoint in the space of political science?
The main thing that goes wrong in politics is people not agreeing on what the best thing to do is. That's what goes wrong. That is, what's fundamental behind most political failures is that people are ignorant of what the consequences of policy are. And that's surprising, because it's actually feasible to solve that problem, which we aren't solving. So it's a bug, not a feature, that there's an inability to arrive
at a consensus. So most political systems, if everybody looked to some authority, say, on a
question and that authority told them the answer, then most political systems are capable of just
doing that thing. And so it's the failure to have trustworthy authorities that is sort of the underlying failure behind most political failure. We invade Iraq, say, when we don't have an authority to tell us that's a really stupid thing to do. And it is possible to create more informative, trustworthy authorities. That's a remarkable fact about the world of institutions, that we could do that, but we aren't.
Yeah. So that's surprising. We could and we aren't.
Right. Another big viewquake about politics is from The Elephant in the Brain: most people, when they're interacting with politics, say they want to make the world better, make their city better, their country better, and that's not their priority.
What is it? They want to show loyalty to their allies. They want to show their people they're on their side. Yes. There are various tribes they're in. That's their primary priority, and they do accomplish that. Yeah. And the tribes are usually color-coded, conveniently enough.
What would you say, it's the Churchill question.
Democracy is the crappiest form of government, but it's the best one we got.
What's the best form of government for this, our 7 billion human civilization,
and maybe, as we get farther and farther, you mentioned a lot of stuff that's fascinating about human history, as we become more forager-like. Looking out beyond, what's the best form of government in the next 50, 100 years, as we become a multiplanetary species?
So the key failing is that we have existing political institutions and related institutions
like media institutions and other authority institutions, and these institutions sit in
a vast space of possible institutions. And the key failing, we're just not exploring that space.
So I have made my proposals in that space, and I think I can identify many promising solutions, and many other people have made many other promising proposals in that space.
But the key thing is we're just not pursuing those proposals. We're not trying them out on
small scales. We're not doing tests. We're not exploring the space of these options.
That is the key thing we're failing to do. And if we did that, I am confident we would find much better institutions than the ones we're using now, but we would have to actually try.
So there's a lot of those topics. I do hope we get a chance to talk again. You're a fascinating
human being. So I'm skipping a lot of tangents on purpose that I would love to take. You're
such a brilliant person with so many different topics. Let me take a stroll into the deep human psyche of Robin Hanson himself. It may not be that deep. I might just be all on the
surface. What you see is what you get. There might not be much hiding behind it. Some of the fun is
on the surface. And I actually think this is true of many of the most successful, most interesting
people you see in the world. That is, they have put so much effort into the surface that they've
constructed. And that's where they put all their energy. So somebody might be a statesman or an
actor or something else. And people want to interview them and they want to say, what are you
behind the scenes? What do you do in your free time? Those people don't have free time. They
don't have another life behind the scenes. They put all their energy into that surface,
the one we admire, the one we're fascinated by. And they kind of have to make up the stuff
behind the scenes to supply it for you. But it's not really there.
Well, there's several ways of phrasing this. So one of it is authenticity, which is
if you become the thing you are on the surface, if the depths mirror the surface,
then that's what authenticity is. You're not hiding something. You're not concealing something.
To push back on the idea of actors, they actually have often a manufactured surface
that they put on and they try on different masks. And the depths are very different
from the surface. And that's actually what makes them very not interesting to interview.
If you're an actor who actually lives the role that you play, so like, I don't know,
a Clint Eastwood type character who clearly represents the cowboy, like at least rhymes
or echoes the person you play on the surface, that's authenticity.
Some people are typecast, and they have basically one persona they play in all of their movies and TV shows. And so for those people, it probably is the actual persona that they are,
or it has become that over time. Clint Eastwood would be one. I think of Tom Hanks as another, right? They just always play the same person. And you and I are just both surface players.
You're the fun, brilliant thinker and I am the suit wearing idiot full of silly questions.
All right. That said, let's put on your wise sage hat and ask you, what advice would you give to
young people today in high school and college about life, about how to live a successful life
in career or just in general that they can be proud of?
Most young people, when they actually ask you that question, what they usually mean is,
how can I be successful by usual standards? I'm not very good at giving advice about that,
because that's not how I tried to live my life. So I would more flip it around and say,
you live in a rich society. You will have a long life. You have many resources available to you.
Whatever career you take, you'll have plenty of time to make progress on something else.
Yes, it might be better if you find a way to combine your career and your interests in a way
that gives you more time and energy, but there are often big compromises there as well.
So if you have a passion about some topic or something that you think is worth pursuing,
you can just do it. You don't need other people's approval. And you can just start doing whatever
it is you think is worth doing. It might take you decades, but decades are enough to make enormous progress on almost all interesting things. And don't worry about the commitment of it.
I mean, that's a lot of what people worry about is, well, there's so many options,
and if I choose a thing and I stick with it, I sacrifice all the other paths I could have taken.
Right. So I switched my career at the age of 34 with two kids, age zero and two,
went back to grad school in social science after being a research software engineer. So it's quite possible to change your mind later in life.
How can you have an age of zero?
Less than one.
Okay. So, oh, oh, you index from zero. I got it. Okay.
Right. That's like people also ask what to read. And I say textbooks.
And until you've read lots of textbooks or maybe review articles, I'm not so sure you should be
reading blog posts and Twitter feeds and even podcasts. I would say at the beginning, read the,
you know, this is our best, humanity's best summary of how to learn things is crammed into
textbooks. Especially the ones on, like, introduction to biology, introduction to everything. Just read them all. Read as many textbooks as you can stomach, and then maybe if you want to
know more about a subject, find review articles. You don't need to read the latest stuff for most
topics. Yeah. And actually textbooks often have the prettiest pictures. There you go. And depending on the field, if it's technical, then doing the homework problems at the end.
Yeah. It's actually extremely, extremely useful.
Extremely powerful way to understand something, if you allow it. You know, I actually think of high school and college this way, which you kind of remind me of; people don't often think of it that way, but you will almost never again get an opportunity to spend that time with a fundamental subject, and everybody's forcing you, everybody wants you to do it. You'll never get that chance again to sit there. Even though it's outside of your
interest, biology, like in high school, I took AP biology, AP chemistry. I'm thinking of subjects
I never again really visited seriously. And it was so nice to be forced into anatomy and physiology,
to be forced into that world, to stay with it, to look at the pretty pictures, and at certain moments to actually, for a moment, enjoy the beauty of, like, how a cell works, and all those kinds of things. And somehow that stays; the ripples of that fascination stay with you, even if you never utilize those learnings in your actual work.
A common problem, at least among many young people I meet, is that they're feeling idealistic and altruistic, but in a rush. So, you know, the usual human tradition that goes back, you know,
hundreds of thousands of years is that people's productivity rises with time and maybe peaks
around the age of 40 or 50. The age of 40, 50 is when you will be having the highest income,
you'll have the most contacts, you will sort of be wise about how the world works.
Expect to have your biggest impact then. Before then, you can have impacts, but you're also
mainly building up your resources and abilities. That's the usual human trajectory. Expect that
to be true of you too. Don't be in such a rush to like accomplish enormous things at the age of 18
or whatever. I mean, you might as well practice trying to do things, but that's mostly about
learning how to do things by practicing. There's a lot of things you can't do unless you just
keep trying them. And when all else fails, try to maximize the number of offspring,
however way you can. That's certainly something I've neglected. I would tell my younger version of myself: hey, try to have more descendants. Yes, absolutely. It matters more than I realized
at the time. Both in terms of making copies of yourself in mutated form and just the joy of
raising them. Sure. I mean, the meaning even. In the literature on the value people get out of life,
there's a key distinction between happiness and meaning. So happiness is how do you feel right
now about right now? And meaning is how do you feel about your whole life? And many things that
produce happiness don't produce meaning as reliably. And if you have to choose between them,
you'd rather have meaning. And meaning goes along more with sacrificing happiness sometimes.
And children are an example of that. Do you get a lot more meaning out of children,
even if there are a lot more work? Why do you think kids, children are so magical,
like raising kids? Because I would love to have kids and whenever I work with robots,
there's some of the same magic when there's an entity that comes to life. And in that case,
I'm not trying to draw too many parallels, but there's some echo to it, which is when you program
a robot, there's some aspect of your intellect that is now instilled in this other moving being
that's kind of magical. Or why do you think that's magical? And you said happiness and meaning, as opposed to something short-term. Meaningful. Why is it meaningful?
It's overdetermined. Like, I can give you several different reasons, all of which are sufficient. And so the question is, we don't know which ones are the correct reasons. Such a technical term, it's overdetermined. Look it up. So I meet a lot of people interested in the
future, interested in thinking about the future. They're thinking about how can I influence the
future? But overwhelmingly in history so far, the main way people have influenced the future is by
having children overwhelmingly. And that's just not an incidental fact. You are built for that.
That is, you're the sequence of thousands of generations, each of which successfully
had a descendant. And that affected who you are. You just have to expect, and it's true, that who you are is built to expect to have a child, to want to have a child, to have that be a natural and meaningful interaction for you. And it's just true. It's just one of
those things you just should have expected. And it's not a surprise.
Well, to push back: in terms of influencing the future, as we get more and more technology, more and more of us are able to influence the future in all kinds of other ways.
Right.
Being a teacher, educator.
Even so, though, still most of our influence on the future has probably happened through having kids, even though we've accumulated more ways, other ways, to do it.
You mean at scale. I guess the depth of influence, like, really, how much effort, how much of yourself you really put into another human being. Do you mean both the raising of a kid, or do you mean raw genetic information?
Well, both, but raw genetics is probably more than half of it.
More than half. More than half. Even in this modern world.
Genetics. Let me ask some dark, difficult questions if I might. Let's take a stroll
into that place that may, may not exist according to you. What's the darkest place you've ever
gone to in your mind, in your life, a dark time, a challenging time in your life that you had to
overcome? Probably just feeling strongly rejected. And so I've been, I'm apparently somewhat
emotionally scarred by just being very rejection averse, which must have happened because some
rejections were just very scarring. At what scale, in what kinds of communities? On the individual scale? I mean, lots of different scales. Yeah. Many different scales. And still, that rejection stings.
Hold on a second, but you're a contrarian thinker. You challenge norms. Yeah. Why, if you were scarred by rejection, do you welcome it in so many ways, at a much larger scale, constantly, with your ideas? It could be that I'm just stupid,
or that I've just categorized them differently than I should or something.
You know, the most rejection that I've faced hasn't been because of my intellectual ideas.
So the intellectual ideas haven't been the thing to risk the rejection.
So the things that challenge your mind, taking you to a dark place, are the more psychological rejections. You just asked me what took me to a dark place; you didn't
specify it as sort of an intellectual dark place, I guess. Yeah, I just meant like what?
So intellectual is disjoint or at least at a more surface level than something emotional?
Yeah, I would just think there are times in your life when you're just in a dark place
and that can have many different causes. And most intellectuals are still just people
and most of the things that will affect them are the kinds of things that affect people.
They aren't that different necessarily. I mean, that's going to be true for, like,
I presume most basketball players are still just people. And if you ask them what was the
worst part of their life, it's going to be this kind of thing that was the worst part of life
for most people. So rejection early in life? Yeah, I think, I mean, not in grade school,
probably, but yeah, sort of being a young, nerdy guy and feeling not in much demand or interest,
or, you know, later on, lots of different kinds of rejection. But yeah, but I think that's,
you know, most of us like to pretend we don't that much need other people. We don't care what
they think. You know, it's a common sort of stance if somebody rejects you or something. I
didn't care about them anyway. I, you know, I didn't. But I think to be honest, people really do
care. Yeah, we do seek that connection, that love. What do you think is the role of love
in the human condition? Opacity in part. That is, love is one of those things where we know at
some level it's important to us, but it's not very clearly shown to us exactly how or why or in what
ways. There are some kinds of things we want where we can just clearly see what we want and why we want it, right? We know when we're thirsty, and we know why we're thirsty, and we know what to do about being thirsty, and we know when it's over, that we're no longer thirsty.
Love isn't like that.
So what do we seek from this? We're drawn to it, but we do not understand why we're drawn
exactly because it's not just affection. Because if it was just affection, we don't seem to be
drawn to pure affection. We don't seem to be drawn to somebody who's like a servant. We don't seem to
be necessarily drawn to somebody that satisfies all your needs or something like that.
So it's clearly something we want or need, but we're not exactly very clear about it,
and that is kind of important to it. So I've also noticed there are some kinds of things
you can't imagine very well. So if you imagine a situation, there's some aspects of the situation
that you can clearly imagine it being bright or dim. You can imagine it being windy or you can
imagine it being hot or cold. But there's some aspects about your emotional stance in a situation
that's actually just hard to imagine or even remember. You can often remember an emotion only when you're in a similar sort of emotional situation, and otherwise you just can't bring the emotion to your mind, and you can't even imagine it. So there are certain kinds of emotions
you can have, and when you're in that emotion, you can know that you have it, and you can have a name that's associated with it. But later on, if I tell you, remember joy, it doesn't come to mind. Not able to replay it. Right. And one of the reasons that pushes us to re-consume it and reproduce it is that we can't reimagine it. Well, it's interesting because
there's a Daniel Kahneman type of thing of like reliving memories because I'm able to summon
some aspect of that emotion again by thinking of that situation from which that emotion came.
Right. So like a certain song, you can listen to it, and you can feel the same way you felt
the first time you remember that song associated with it. Right. But you need to remember that
situation in some sort of complete package. Yes. You can't just take one part of it. And then if you get the whole package again, you remember the whole feeling.
Yes. Or some fundamental aspect of that whole experience from which the feeling arose. And
actually, the feeling is probably different in some way. It could be more pleasant or less
pleasant than the feeling you felt originally, and that more so over time, every time you replay
that memory. It is interesting. You're not able to replay the feeling perfectly. You don't remember
the feeling. You'll remember the facts of the events. So there's a sense in which, over time, we expand our vocabulary as a community of language, and that allows us to sort of have more feelings
and know that we are feeling them. Because you can have a feeling but not have a word for it,
and then you don't know how to categorize it or even what it is, and whether it's the same as
something else. But once you have a word for it, you can sort of pull it together more easily.
And so I think over time, we are having a richer palette of feelings because we have more words
for them. What has been a painful loss in your life? Maybe somebody or something that's no longer
in your life but played an important part of your life? Youth? That's a concept. No, it has to be.
But I was once younger. I had health and I had vitality. I mean, I've lost that over time.
Do you see that as a different person? Maybe you've lost that person?
Certainly. Yes, absolutely. I'm a different person than I was when I was younger, and I don't even remember exactly who he was. I don't remember as many things from the past as many people do. So in some sense, I've just lost a lot of my history by not remembering it. And I'm not that person anymore. That person's gone. Is that a painful loss?
Is it a painful loss, though? Yeah. Or, why is it painful? Because you're wiser; I mean, there are so many things that are beneficial to getting older. Right, but I just was this person, and I felt assured that I could continue to be that person.
And you're no longer that person. And he's gone. And I'm not him anymore. And he died without fanfare or a funeral. And the person you are today, talking to me, that person
will be changed too. Yes. And in 20 years, he won't be there anymore.
And a future person will look back. For ems, this will be less of a problem. Ems would be able to save an archived copy of themselves at each different age, and they could turn it on periodically and go back and talk to it.
So you think some of that will be... So with emulated minds, with ems, there's a digital cloning that happens. And do you think that makes you less special if you're cloneable? The experience of life, the experience of a moment, the scarcity of that moment, the scarcity of that experience: isn't that a fundamental part of what makes that experience so delicious, so rich of feeling? I think if you think of a song that lots of people listen to,
with copies all over the world, we're going to call that a more special song.
Yeah. Yeah. So there's a perspective on copying and cloning where you're just scaling happiness
versus degrading. I mean, each copy of a song is less special if there are many copies,
but the song itself is more special if there are many copies.
En masse, right, you're actually spreading the happiness, even if it diminishes, over a large number of people at scale, and that increases the overall happiness in the world.
And then you're able to do that with multiple songs.
Is a person who has an identical twin more or less special?
Well, the problem with identical twins is, you know, it's just two. With ems, there could be many more.
Right, but two is different than one. So, but I think an identical twin's life is richer for
having this other identical twin, somebody who understands them better than anybody else can.
From the point of view of an identical twin, I think they have a richer life
for being part of this couple, each of which is very similar. Now, if you said, if we lose one of the identical twins, will the world miss it as much, because you've got the other one and they're pretty similar? Maybe from the rest of the world's point of view, they suffer less of a loss when they lose one of the identical twins. But from the point of view of the identical twin themselves,
their life is enriched by having a twin. See, but with identical twins the copying happens at birth. That's different than copying after you've done some of the environment, like the nurture, at the teenage years or in the 20s. That'll be an interesting thing for ems to find out: all the different ways that they can have different relationships to different people who have different degrees of similarity to them in time. Yeah. Yeah, man.
But it seems like a rich space to explore. I don't feel sorry for them. It seems like an interesting world to live in. And there could be some ethical conundrums there. There will be many new choices to make then that they don't make now. We discussed that in the book, The Age of Em. Like, say you have a lover and you make a copy of yourself,
but the lover doesn't make a copy. Well, now which one of you, or are both of you, still related to the lover, socially entitled to show up? So you'll have to make choices then, when you split yourself: which of you inherits which unique things?
Yeah. And of course, there'll be an equivalent increase in lawyers. Well,
I guess you can clone the lawyers to help manage some of these negotiations of how to
split property. The nature of owning, I mean, property is connected to individuals.
You only really need lawyers for this with an inefficient awkward law that is not very
transparent and able to do things. So, for example, an operating system of a computer
is a law for that computer. When the operating system is simple and clean, you don't need to
hire a lawyer to make a key choice with the operating system. You don't need a human in the
loop. You just make a choice. Yeah. Right. So ideally we want a legal system that makes
the common choices easy and not require much overhead. And the digitization of things, further and further, enables that. So, the loss of a younger self: what about the loss of your
life overall? Do you ponder your death, your mortality? Are you afraid of it? I am a cryonics
customer. That's what this little tag around my neck says. It says that if you find me in a medical situation, you should call these people to enable the cryonics transfer. So I am taking a long-shot
chance at living a much longer life. Can you explain what cryonics is? So when medical science
gives up on me in this world, instead of burning me or letting worms eat me, they will freeze me
or at least freeze my head. And there's damage that happens in the process of freezing the head.
But once it's frozen, it won't change for a very long time. Chemically, it'll just be completely
exactly the same. So future technology might be able to revive me. And in fact, I would be mainly
counting on the brain emulation scenario, which doesn't require reviving my entire biological
body. It means I would be in a computer simulation. And so that's, I think I've got at least a 5%
shot at that. And that's immortality. Are you... Most likely it won't happen. And therefore,
I'm sad that it won't happen. Do you think immortality is something that you would like to have?
Well, I mean, just like infinity, you can't know until forever, which means never, right? So really, the better choice is, at each moment, do you want to keep going? So I would like, at every moment, to have the option to keep going.
The interesting thing about human experience is that
the way you phrase it is exactly right. At every moment, I would like to keep going.
But the thing that happens, leave them wanting more, whatever that phrase is, the thing that happens is, over time, it's possible for certain experiences to become bland, and you become tired of them. And that actually makes life really unpleasant. Sorry, it makes that experience really unpleasant. And perhaps you can generalize that to life itself, if you have a long enough
horizon. And so... It might happen, but might as well wait and find out.
But then you're ending on suffering, you know? So in the world of brain emulations, I have more options. You can revert yourself. I can make copies of myself, archive copies at various ages. And at a later age, I could decide that I'd rather replace
myself with a new copy from a younger age. So does a brain emulation still operate in physical
space? So can we do... What do you think about the metaverse and operating in virtual reality?
So we can conjure up, not just emulate, not just your own brain and body, but the entirety of the environment? Well, most brain emulations will in fact spend most of their time in virtual reality.
But they wouldn't think of it as virtual reality. They would just think of it as their usual reality.
I mean, the thing to notice, I think in our world, most of us spend most time indoors.
And indoors, we are surrounded by walls covered with paint and floors covered with tile or rugs.
Most of our environment is artificial. It's constructed to be convenient for us. It's not
the natural world that was there before. A virtual reality is basically just like that.
It is the environment that's comfortable and convenient for you.
But when it's the right environment for you, it's real for you. Just like the room you're in right
now most likely is very real for you. You're not focused on the fact that the paint is hiding
the actual studs behind the wall and the actual wires and pipes and everything else.
The fact that we're hiding that from you doesn't make it fake or unreal.
What are the chances that we're actually in the very kind of system that you're describing where
the environment and the brain is being emulated and you're just replaying an experience when you
first did a podcast with Lex. And now the person that originally launched this already
did hundreds of podcasts with Lex. This is just the first time and you like this time
because there's so much uncertainty. There's nerves. It could have gone in any direction.
At the moment, we don't have the technical ability to create
an emulation. So we have to be postulating that in the future we have that ability, and then they choose to emulate this moment now, to simulate it. Don't you think we could be in the simulation
of that exact experience right now? We wouldn't be able to know. So one scenario would be this
never really happened. This only happens as a reconstruction later on. That's different than
the scenario. This did happen the first time and now it's happening again as a reconstruction.
That second scenario is harder to put together because it requires this coincidence where
between the two times we produce the ability to do it. No, but don't you think replay of memories,
poor replay of memories is something that might be a possible thing in the future?
So you're saying it's harder than to conjure up things from scratch?
It's certainly possible. So the main way I would think about it is in terms of the demand
for simulation versus other kinds of things. So I've given this a lot of thought because
I first wrote about this long ago, when Bostrom first wrote his papers about the simulation argument, and I wrote about how to live in a simulation. So the key issue is the fraction of creatures
in the universe that are really experiencing what you appear to be really experiencing relative to
the fraction that are experiencing it in a simulation way, i.e. simulated. So then the key
parameter is at any one moment in time, creatures at that time, many of them, most of them are
presumably really experiencing what they're experiencing, but some fraction of them are
experiencing some past time where that past time is being remembered via their simulation.
So to figure out this ratio, what we need to think about is basically two functions. One is
how fast in time does the number of creatures grow? And then how fast in time does the interest
in the past decline? Because at any one time, people will be simulating different periods
in the past with different emphasis based on- I love the way you think so much.
That's exactly right. So if the first function grows slower than the second one declines,
then in fact, your chances of being simulated are low. So the key question is how fast does
interest in the past decline relative to the rate at which the population grows with time?
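The two-function argument here can be sketched numerically. In the toy model below, everything is an illustrative assumption rather than a figure from the conversation: population grows exponentially at rate g, per-capita interest in simulating a given past date decays exponentially at rate d, and c is an arbitrary per-capita simulation budget.

```python
import math

# Toy model of the two-function argument above. All numbers and the exponential
# forms are illustrative assumptions, not figures from the conversation.
#   - population at future time t (relative to now) grows like exp(g * t)
#   - per-capita interest in simulating "now" decays like exp(-d * t)
#   - c is an arbitrary per-capita simulation budget
def simulated_fraction(g, d, c=1e-6, horizon=1000):
    """Fraction of copies of 'now' that are simulations rather than the original."""
    expected_sims = 0.0
    for t in range(1, horizon + 1):        # sum over future times
        population = math.exp(g * t)
        interest = c * math.exp(-d * t)
        expected_sims += population * interest
    return expected_sims / (1.0 + expected_sims)

# If interest decays faster than population grows (d > g), the sum converges
# and the simulated fraction stays tiny; if d < g, it keeps growing with the
# horizon and the fraction approaches 1.
print(simulated_fraction(g=0.02, d=0.05))  # decay wins: tiny fraction
print(simulated_fraction(g=0.05, d=0.02))  # growth wins: near 1
```

This mirrors the claim in the conversation: whether you are likely simulated comes down to which exponent dominates, not to the absolute amount of simulation.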
Does this correlate to what you suggested earlier, that the interest in the future increases over time?
Are those correlated? Interest in the future versus interest in the past?
Like why are we interested in the past? But the simple way to do it, as you know: Google Ngrams has a way to type in a word and see how interest in it declines or rises over time. You can just type in a year and get the answer for that. If you type in a particular year
like 1900 or 1950, you can see with Google Ngram how interest in that year increased up until that
date and decreased after it. And you can see that interest in a date declines faster than does
the population grow with time. That is brilliant. And so it's so interesting.
You have the answer. Wow. And that was your argument against, well, not against, but regarding this particular aspect of the simulation: how much past simulation there will be,
replay of past memories. First of all, if we assume that simulation of the past is a small fraction of all the creatures at that moment, right? And then it's about how fast interest declines. Now, some people have argued plausibly that maybe most interest in the past falls with this fast function, but some unusual category of interest in the past won't fall that quickly. And then that would eventually dominate. So that's another hypothesis.
Some category. So that very outlier specific kind of, yeah, okay. Yeah, yeah, yeah. Like
really popular kinds of memories, but like probably in a trillion years, there's some small
research institute that tries to randomly select from all possible people in history or something
to simulate. Yeah, yeah, yeah. So how big is this research institute and how big is the future
in a trillion years, right? And that would be hard to say. But if we just look at the ordinary
process by which people simulate recent years. So if you look at movies and plays and video games too, overwhelmingly they're interested in the recent past. There's very
few video games where you play someone in the Roman Empire. Right. Even fewer where you play
someone in the Egyptian Empire. Yeah, it's just different. Interest just declines very quickly.
But every once in a while, that's brought back. But yeah, you're right. I mean,
just if you look at the mass of entertainment movies and games, it's focusing on the present,
recent past. And maybe some, I mean, where does science fiction fit into this? Because
it's sort of, what is science fiction? I mean, it's a mix of the past and the present and some
kind of manipulation of that to make it more efficient for us to ask deep philosophical
questions about humanity. So the closest genre to science fiction is clearly fantasy,
fantasy and science fiction in many bookstores and even Netflix or whatever categories,
they're just lumped together. So clearly, they have a similar function. So the function of fantasy
is more transparent than the function of science fiction. So use that as your guide.
What's fantasy for is just to take away the constraints of the ordinary world and imagine
stories with much fewer constraints. That's what fantasy is. You're much less constrained.
What's the purpose to remove constraints? Is it to escape from the harshness of the constraints
of the real world? Or is it to just remove constraints in order to explore some,
get a deeper understanding of our world? What is it? I mean, why do people read fantasy?
I'm not a big fantasy-reading kind of person. So one story that sounds plausible to me is
that there are sort of these deep story structures that we love and we want to realize and then many
details of the world get in their way. Fantasy takes all those obstacles out of the way and
lets you tell the essential hero story or the essential love story, whatever essential story
you want to tell. The reality and constraints are not in the way. And so science fiction can be
thought of as like fantasy, except you're not willing to admit that it can't be true. So the future gives the excuse of saying, well, it could happen. And you accept some more reality constraints for the illusion, at least, that maybe it could really happen. Maybe it could happen, and it stimulates the imagination. The imagination is something really interesting about human beings.
And it seems also to be an important part of creating really special things is to be able
to first imagine them. With you and Nick Bostrom, where do you land on the simulation and all the
mathematical ways of thinking about it and just the thought experiment of it? Are we living in a simulation? That's just the discussion we just had. That is, you should grant the possibility
of being a simulation. You shouldn't be 100% confident that you're not. You should certainly
grant a small probability. The question is, how large is that probability? Are you saying we would
be misunderstood because I thought our discussion was about replaying things that already happened.
Right. But the whole question is, right now, is that what I am? Am I actually a replay from
some distant future? But it doesn't necessarily need to be a replay. It could be a totally new.
You don't have to be an NPC. Right. But clearly, I'm in a certain era with a certain kind of world
around me, right? So either this is a complete fantasy or it's a past of somebody else in the
future. But no, it could be a complete fantasy, though. It could be, right. But then you might,
then you have to talk about what's the fraction of complete fantasies, right?
I would say it's easier to generate a fantasy than to replay a memory, right?
Sure. But if we just look at the entire history of everything, we just say, sure,
but most things are real. Most things aren't fantasies, right? Therefore, the chances are that my thing is real, right? So the simulation argument works more strongly for sort of the
past. We say, ah, but there's more future people than there are today. So you being in the past
of the future makes you special relative to them, which makes you more likely to be in a simulation,
right? If we're just taking the full count and saying, in all creatures ever, what percentage
are in simulations? Probably no more than 10%. So what's the good argument for that? That most
things are real? Yeah, because the argument cuts the other way, right? In a competitive world,
in a world where people like have to work and have to get things done, then they have a limited
budget for leisure. And so, you know, leisure things are less common than work things like real
things, right? But if you look at the stretch of history in the universe, doesn't the ratio
of leisure increase? Isn't that the trajectory? Right, but now we're looking at the fraction
of leisure, which takes the form of something where the person doing the leisure doesn't realize it.
And there could be some fraction, but that's much smaller, right? Yeah. Clueless foragers.
Or somebody who's clueless in the process of supporting this leisure, right? It might not be the person doing the leisure; somebody there is a supporting character or something. But still, that's got to be a pretty small fraction of leisure. You mentioned that children are one of the
things that are a source of meaning, broadly speaking. Then let me ask the big question: what's the meaning of this whole thing? The Robin meaning of life. What is the meaning of life?
We talked about alien civilizations, but this is the one we got. Where are the aliens? Why do humans seem to be conscious, able to introspect? Why are we here? This is the thing
I told you before about how we can predict that future creatures will be different from us.
We, our preferences are this amalgam of various sorts of random sort of patched together
preferences about thirst and sex and sleep and attention and all these sorts of things.
So we don't understand it very well. It's not very transparent and it's a mess, right?
That is the source of our motivation. That is how we were made and how we are induced to do things.
But we can't summarize it very well and we don't even understand it very well.
That's who we are. And often we find ourselves in a situation where we don't feel very motivated,
we don't know why. In other situations, we find ourselves very motivated and we don't know why
either. And so that's the nature of being a human of the sort that we are because even though we
can think abstractly and reason abstractly, this package of motivations is just opaque and a mess.
And that's what it means to be a human today, with this motivation. We can't very well tell the meaning of our life; it is this mess. But our descendants will be different. They will actually
know exactly what they want. And it will be to have more descendants. That will be the meaning
for them. Well, it's funny that you have the certainty. You have more certainty. You have
more transparency about our descendants than you do about your own self. Right. So
it's really interesting to think, because you mentioned this about love, that something
that's fundamental about love is this opaqueness that we're not able to really
introspect what the heck it is or all the feelings, the complex feelings involved.
That's true about many of our motivations.
And that's what it means to be human of the 20th and the 21st century variety.
Why is that not a feature that we want, that we'll choose to have persist in civilization, then? This
opaqueness put another way, mystery, maintaining a certain mystery about ourselves and about those
around us. Maybe that's a really nice thing to have. Maybe. So this is the fundamental issue
in analyzing the future. What will set the future? One theory about what will set the future is,
what do we want the future to be? So under that theory, we should sit and talk about what we want
the future to be. We hold some conferences, some conventions, discuss things, vote on it maybe, and then hand it off to the implementation people to make the future the way we've decided it should
be. That's not the actual process that's changed the world over history up to this point. It has
not been the result of us deciding what we want and making it happen. In our individual lives,
we can do that. We might decide what career we want or where we want to live, who we want to live
with. In our individual lives, we often do slowly make our lives better according to our plan and
our things. But that's not the whole world. The whole world so far has mostly been a competitive
world where things happen if anybody anywhere chooses to adopt them and they have an advantage.
And then it spreads and other people are forced to adopt it by competitive pressures.
So that's the kind of analysis I can use to predict the future. And I do use that to predict
the future. It doesn't tell us it'll be a future we like. It just tells us what it'll be.
And it'll be one where we're trying to maximize the number of our descendants.
And we know that abstractly and directly, and it's not opaque.
With some probability that's non-zero, that will lead us to become grabby in expanding
aggressively out into the cosmos until we meet other aliens.
The timing isn't clear. We might become grabby and then this happens. Grabbiness and this are both results of competition, but it's less clear which happens
first. Does this future excite you or scare you? How do you feel about this whole thing?
Again, as I told you, compared to sort of a dead cosmology, at least it's energizing, having a living story with real actors and characters and agendas, right?
Yeah. And that's one hell of a fun universe to live in.
Robin, you're one of the most fascinating, fun people to talk to, brilliant, humble,
systematic in your analysis.
Hold on to my wallet here. What's he looking for?
I already stole your wallet long ago. I really, really appreciate you spending your valuable time
with me. I hope we get a chance to talk many more times in the future.
Thank you so much for sitting down.
Thank you.
Thanks for listening to this conversation with Robin Hanson.
To support this podcast, please check out our sponsors in the description.
And now let me leave you with some words from Ray Bradbury.
We are an impossibility in an impossible universe.
Thank you for listening and hope to see you next time.