
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.



The following is a conversation with Nick Bostrom, a philosopher at University of Oxford and the
director of the Future of Humanity Institute. He has worked on fascinating and important ideas
in existential risk, simulation hypothesis, human enhancement ethics, and the risks of
superintelligent AI systems, including in his book, Superintelligence. I can see talking to Nick
multiple times in this podcast, many hours each time, because he has done some incredible work
in artificial intelligence, in technology space, science, and really philosophy in general. But
we'll have to start somewhere. This conversation was recorded before the outbreak of the coronavirus
pandemic that both Nick and I, I'm sure, will have a lot to say about next time we speak.
And perhaps that is for the best, because the deepest lessons can be learned only in retrospect
when the storm has passed. I do recommend you read many of his papers on the topic
of existential risk, including the technical report titled Global Catastrophic Risks Survey
that he coauthored with Anders Sandberg. For everyone feeling the medical, psychological,
and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together.
We'll beat this thing. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe
on YouTube, review it with five stars on Apple Podcasts, support on Patreon, or simply connect
with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes
of ads now and never any ads in the middle that can break the flow of the conversation. I hope
that works for you and doesn't hurt the listening experience. This show is presented by Cash App,
the number one finance app in the App Store. When you get it, use code Lex Podcast. Cash App lets
you send money to friends by Bitcoin and invest in the stock market with as little as $1. Since
Cash App does fractional share trading, let me mention that the order execution algorithm
that works behind the scenes to create the abstraction of fractional orders is an algorithmic
marvel. So big props to the Cash App engineers for solving a hard problem that in the end provides an
easy interface that takes a step up to the next layer of abstraction over the stock market,
making trading more accessible for new investors and diversification much easier. So again,
if you get Cash App from the App Store, Google Play and use the code Lex Podcast, you get $10
and Cash App will also donate $10 to FIRST, an organization that is helping to advance robotics
and STEM education for young people around the world. And now here's my conversation
with Nick Bostrom. At the risk of asking the Beatles to play yesterday or the Rolling Stones
to play satisfaction, let me ask you the basics. What is the simulation hypothesis?
That we are living in a computer simulation. What is a computer simulation? How are we supposed
to even think about that? Well, so the hypothesis is meant to be understood in a literal sense,
not that we can kind of metaphorically view the universe as an information processing
physical system, but that there is some advanced civilization who built a lot of computers and
that what we experience is an effect of what's going on inside one of those computers so that
the world around us, our own brains, everything we see and perceive and think and feel would exist
because this computer is running certain programs. So do you think of this computer as something
similar to the computers of today, these deterministic sort of Turing machine type things?
Is that what we're supposed to imagine or we're supposed to think of something more
like a quantum mechanical system, something much bigger, something much more complicated,
something much more mysterious from our current perspective? The ones we have today would do fine, just bigger, certainly. You'd need more memory and more processing power. I don't think
anything else would be required. Now, it might well be that they do have, maybe they have quantum
computers and other things that would give them even more oomph. It seems kind of plausible,
but I don't think it's a necessary assumption in order to get to the conclusion that a
technically mature civilization would be able to create these kinds of computer simulations with
conscious beings inside them. So do you think the simulation hypothesis is an idea that's most
useful in philosophy, computer science, physics? Where do you see it having valuable kind of starting
point in terms of the thought experiment of it? Is it useful? I guess it's more informative and
interesting and maybe important, but it's not designed to be useful for something else.
Well, okay, interesting, sure. But is it philosophically interesting or is there some
kind of implications of computer science and physics? I think not so much for computer science or
physics per se. Certainly it would be of interest in philosophy, I think also to say cosmology or
physics in as much as you're interested in the fundamental building blocks of the world and
the rules that govern it. If we are in a simulation, there is then the possibility that, say, the physics at the level of the computer running the simulation could be different from the physics governing phenomena in the simulation. So I think it might be interesting from the point of view of
religion or just for kind of trying to figure out what the heck is going on. So we mentioned the
simulation hypothesis so far. There is also the simulation argument, and I tend to make a distinction between the two. So the simulation hypothesis is that we are living in a computer simulation; the simulation argument is this argument that tries to show that one of three propositions is true, one of which is the simulation hypothesis, but there are two alternatives in the original simulation argument, which we can get to.
Yeah, let's go there. By the way, confusing terms because people will, I think, probably naturally
think simulation argument equals simulation hypothesis, just terminology-wise. But let's
go there. So simulation hypothesis means that we are living in a simulation. The hypothesis that
we're living in a simulation, simulation argument has these three complete possibilities that cover
all possibilities. So what are they? Yeah, so it's like a disjunction. It says at least one of these
three is true, although it doesn't on its own tell us which one. So the first one is that almost all
civilizations at their current stage of technological development
go extinct before they reach technological maturity. So there is some great filter
that makes it so that basically none of the civilizations throughout, you know,
maybe vast cosmos will ever get to realize the full potential of technological development.
And this could be, theoretically speaking, this could be because most civilizations kill themselves
too eagerly or destroy themselves too eagerly or it might be super difficult to build a simulation.
So the span of time? Theoretically, it could be both. Now, I think it looks like we would
technologically be able to get there in a time span that is short compared to, say, the lifetime of
planets and other sort of astronomical processes. So your intuition is that building the simulation is not that difficult? Well, so this is an interesting concept of technological maturity. It's kind of an interesting concept to have for other purposes as well. We can see, even based on our current limited
understanding, what some lower bound would be on the capabilities that you could realize
by just developing technologies that we already see are possible. So for example,
one of my research fellows, Eric Drexler back in the 80s, studied molecular manufacturing.
That is, you could analyze using theoretical tools and computer modeling, the performance of various
molecularly precise structures that we didn't then and still don't today have the ability to actually
fabricate. But you could say that, well, if we could put these atoms together in this way,
then the system would be stable and it would rotate at this speed and have these computational
characteristics. And he also outlined some pathways that would enable us to get to this kind
of molecular manufacturing in the fullness of time. And there are other studies we've done.
You could look at the speed at which, say, it would be possible to colonize the galaxy if you had
mature technology. We have an upper limit, which is the speed of light. We have sort of a lower
current limit, which is how fast current rockets go. We know we can go faster than that by just
making them bigger and have more fuel and stuff. And you can then start to
describe the technological affordances that would exist once a civilization has had enough
time to develop even at least those technologies we already know are possible. Then maybe they
would discover other new physical phenomena as well that we haven't realized that would enable
them to do even more. But at least there is this kind of basic set of capabilities.
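(A purely illustrative aside, not something computed in the conversation: the kind of bound Bostrom describes, with the speed of light as the upper limit and current rockets as the lower limit, can be sketched with rough placeholder figures such as a galactic diameter of about 100,000 light-years and probe speeds of roughly 20 km/s.)

```python
# Rough, illustrative bracket on galaxy-crossing time, using the bound described
# above: upper limit = speed of light, lower limit = today's rockets.
# The figures are approximate placeholders, not numbers from the conversation.

GALAXY_DIAMETER_LY = 100_000      # Milky Way diameter, order of magnitude, in light-years
SPEED_OF_LIGHT_KMS = 300_000      # km/s, rounded
CURRENT_ROCKET_KMS = 20           # km/s, order of magnitude for today's fastest probes

# At light speed, crossing 100,000 light-years takes 100,000 years by definition.
years_at_light_speed = GALAXY_DIAMETER_LY

# A craft that is (c / v) times slower takes proportionally longer.
slowdown = SPEED_OF_LIGHT_KMS / CURRENT_ROCKET_KMS
years_at_current_rockets = GALAXY_DIAMETER_LY * slowdown

print(f"At light speed:      ~{years_at_light_speed:,.0f} years")
print(f"At current rockets:  ~{years_at_current_rockets:,.0f} years")  # ~1.5 billion years
```

The point is only the shape of the argument: even the pessimistic end of the bracket is finite on astronomical timescales, and mature technology would land far closer to the optimistic end.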
Can you just linger on that? How do we jump from molecular manufacturing to deep space exploration
to mature technology? What's the connection? Well, so these would be two examples of
technological capability sets that we can have a high degree of confidence are physically possible in our universe and that a civilization that was allowed to continue to
develop its science and technology would eventually attain. You can intuit. We can kind of see the
set of breakthroughs that are likely to happen. So you can see, what did you call it, the technological
set? With computers, maybe it's easier to see. One is, we could just imagine bigger computers
using exactly the same parts that we have. So you can kind of scale things that way, right?
But you could also make processors a bit faster if you had this molecular nanotechnology that
Drexler described. He characterized a kind of crude computer built with these parts that would perform at a million times the human brain while being significantly smaller, the size of a sugar cube. And he made no claim that that's the optimum computing structure. You could build a
faster computer that would be more efficient, but at least you could do that if you had the
ability to do things that were atomically precise. So you can then combine these two. You could have
this kind of nanomolecular ability to build things atom by atom and then say at this spatial scale
that would be attainable through space colonizing technology. You could then start, for example,
to characterize a lower bound on the amount of computing power that a technologically mature
civilization would have if it could grab resources in a planet and so forth and then use this
molecular nanotechnology to optimize them for computing. You'd get a very, very high lower
bound on the amount of compute. Sorry, I just need to define some terms. So a technologically mature civilization is one that took that piece of technology to its lower bound? What is a technologically mature civilization? Well, okay. So that means it's a stronger concept than we
really need for the simulation hypothesis. I just think it's interesting in its own right.
So it would be the idea that there is some stage of technological development where you
basically maxed out that you developed all those general purpose widely useful technologies that
could be developed or at least kind of come very close to the, you're 99.9% there or something.
So that's an independent question. You can think either that there is such a ceiling or
you might think the technology tree just goes on forever. Where does your sense fall?
I would guess that there is a maximum that you would start to asymptote towards.
So new things won't keep springing up. New ceilings.
In terms of basic technological capabilities, I think there is like a finite set of those
that can exist in this universe. Moreover, I mean, I wouldn't be that surprised if we actually
reached close to that level fairly shortly after we have say machine superintelligence.
So I don't think it would take millions of years for a human originating civilization
to begin to do this. I think it's more likely to happen on historical timescales.
But that's an independent speculation from the simulation argument. I mean,
for the purpose of the simulation argument, it doesn't really matter whether it goes
indefinitely far up or whether there is a ceiling. As long as we know,
we can at least get to a certain level. And it also doesn't matter whether that's going to happen
in 100 years or 5,000 years or 50 million years. Like the timescales really don't
make any difference for the simulation argument. Can you linger on that a little bit? Like there's a big difference between 100 years and 10 million years. So how does it not matter? Because you just said it doesn't matter if we jump scales to beyond historical scales. So for the simulation argument, doesn't it matter that if it takes 10
million years, it gives us a lot more opportunity to destroy civilization in the meantime?
Yeah. Well, so it would shift around the probabilities between these three alternatives.
That is, if we are very, very far away from being able to create these simulations,
if it's like say the billions of years into the future, then it's more likely that we will fail
ever to get there. There's more time for us to kind of, you know, go extinct along the way.
And so it's similar for other civilizations.
So it is important to think about how hard it is to build a simulation? In terms of figuring out which of the disjuncts, yes. But the simulation argument itself is agnostic as to which of these three alternatives is true.
Yeah. You don't have to. The simulation argument would be true whether or not
we thought this could be done in 500 years or it would take 500 million years.
Oh, for sure. The simulation argument stands. I mean, I'm sure there might be some people who
oppose it, but it doesn't matter. I mean, it's very nice those three cases cover it. But the
fun part is at least not saying what the probabilities are, but kind of thinking about,
kind of, intuiting, reasoning about what's more likely, what are the kinds of things that would make some of the alternatives more or less likely. But let's actually, I don't think we went through them. So number one is we destroy ourselves before we ever create the simulation.
Right. So that's kind of sad, but we have to think not just what might destroy us.
I mean, so there could be some, whatever, disasters or meteors slamming the earth
a few years from now that could destroy us, right? But you'd have to postulate,
in order for this first disjunct to be true, that almost all civilizations throughout the cosmos
also failed to reach technological maturity. And the underlying assumption there is that there is
likely a very large number of other intelligent civilizations.
Well, if there are, yeah, then they would virtually all have to succumb in the same way.
I mean, then that leads off to another question. I guess there are a lot of little digressions
that are interesting. Let's go there. Let's go there. I'll keep dragging us back.
There is a set of basic questions that always come up in conversations with
interesting people, like the Fermi Paradox. You can almost define whether a person is
interesting, whether at some point the Fermi Paradox comes up.
Well, so for what it's worth, it looks to me that the universe is very big.
I mean, in fact, according to the most popular current cosmological theories, infinitely big.
And so then it would follow pretty trivially that it would contain a lot of other civilizations,
in fact, infinitely many. If you have some local stochasticity and infinitely many, it's like, you know, infinitely many lumps of matter, one next to another, there's kind of random stuff in each one, then you're going to get all possible outcomes with probability one, infinitely repeated. So then certainly there would be a lot of
extraterrestrials out there. Even short of that, if the universe is very big,
there might be a finite but large number. If we were literally the only one, yeah, then of course,
if we went extinct, then all civilizations at our current stage would have gone extinct before becoming technologically mature. So then it kind of becomes trivially true that
a very high fraction of those went extinct. But if we think there are many, I mean,
it's interesting because there are certain things that possibly could kill us, like if you look at
existential risks. And it might be that the best answer to what would be most likely to kill us is a different answer than the best answer to the question, if there is something that kills almost everyone, what would that be? Because that would have to be some risk factor that was kind of uniform over all possible civilizations.
Yeah, so for the sake of this argument, you have to think about not just us, but that every civilization dies out before they create the simulation.
Yeah, or something very close to everybody. Okay, so what's number two in the...
Well, so number two is the convergence hypothesis, that is, that maybe a lot of these
civilizations do make it through to technological maturity. But out of those who do get there,
they all lose interest in creating these simulations. So they just have the capability
of doing it, but they choose not to. Not just a few of them decide not to, but
out of a million, maybe not even a single one of them would do it.
And I think when you say lose interest, that sounds like unlikely because it's like
they get bored or whatever, but it could be so many possibilities within that.
I mean, losing interest could be anything from it being exceptionally difficult to do, to fundamentally changing the sort of fabric of reality if you do it, to ethical concerns. All those kinds of things could be exceptionally strong pressures. Well, certainly ethical concerns. But not really too difficult to do. I mean,
in a sense, that's the first assumption that you get to technological maturity where you would
have the ability using only a tiny fraction of your resources to create many simulations.
So it wouldn't be the case that they would need to spend half of their GDP forever
in order to create one simulation. And they had this like difficult debate about whether they
should invest half of their GDP for this. It would more be like, well, if any little
fraction of the civilization feels like doing this at any point during maybe their
millions of years of existence, then there would be millions of simulations.
But certainly, there could be many conceivable reasons for why there would be this convergence, many possible reasons for not running ancestor simulations or other computer simulations.
Even if you could do so cheaply.
By the way, what's an ancestor simulation?
Well, that would be type of computer simulation that would contain people like those we think
have lived on our planet in the past and like ourselves in terms of the types of experiences
they have. And where those simulated people are conscious, not just simulated in the same sense
that a non-player character would be simulated in a current computer game, where it kind of has like an avatar body and then a very simple mechanism that moves it forward or backwards. But something where the simulated being has a brain, let's say, that's simulated at the
sufficient level of granularity that it would have the same subjective experiences as we have.
So where does consciousness fit into this? Do you think simulation, like is there different
ways to think about how this can be simulated just like you're talking about now? Do we have to
simulate each brain within the larger simulation? Is it enough to simulate just the brain, just the
minds and not the simulation, not the big universe itself? Like is there different ways to think about
this? Yeah, I guess there is a kind of premise in the simulation argument rolled in from
philosophy of mind. That is that it would be possible to create a conscious mind in a computer
and that what determines whether some system is conscious or not is not like whether it's built
from organic biological neurons but maybe something like what the structure of the computation is
that it implements. So we can discuss that if we want, but I think, just to put forward my view, it would be sufficient, say, if you had a computation that was identical to
the computation in the human brain down to the level of neurons. So if you had a simulation
with 100 billion neurons connected in the same way as the human brain and you then roll that forward
with the same kind of synaptic weights and so forth. So you actually had the same behavior
coming out of this as a human with that brain. Then I think that would be conscious. Now,
it's possible you could also generate consciousness without having that detailed a simulation. There I'm getting more uncertain exactly how much you could simplify or abstract away. Can you linger on that? What do you mean? I missed where you're placing consciousness in
the second. Well, so if you are a computationalist, do you think that what creates consciousness is
the implementation of a computation? Some property, emergent property of the computation itself.
Yeah, you could say that. But then the question is what's the class of computations such that when
they are run, consciousness emerges. So if you just have something that adds one plus one plus
one plus one, like a simple computation, you think maybe that's not going to have any consciousness.
If on the other hand, the computation is one like our human brains are performing, where
as part of the computation, there is a global workspace, a sophisticated attention mechanism,
there is like self representations of other cognitive processes and a whole lot of other things,
that possibly would be conscious. And in fact, if it's exactly like ours, I think definitely it
would. But exactly how much less than the full computation that the human brain is performing
would be required is a little bit, I think, of an open question. Let me ask another
interesting question as well, which is, would it be sufficient to just have say the brain,
or would you need the environment in order to generate the same kind of experiences that we
have? And there is a bunch of stuff we don't know. I mean, if you look at say, current virtual reality
environments, one thing that's clear is that we don't have to simulate all details of them all the time in order for, say, the human player to have the perception that there is a full reality in there. You can have, say, procedural generation, which might only render a scene when it's actually within the view of the player character. And so similarly, if this environment that we perceive
is simulated, it might be that all of the parts that come into our view are rendered at any given
time. And a lot of aspects that never come into view, say the details of this microphone I'm talking
into exactly what each atom is doing at any given point in time might not be part of the
simulation, only a more coarse-grained representation. So that to me is actually, from an engineering perspective, why the simulation hypothesis is really interesting to think about: how difficult is it to fake, sort of in a virtual reality context? I don't know if fake is the right word, but to construct a reality that is sufficiently real to us to be immersive in the way that the
physical world is, I think that's actually probably an answerable question of psychology,
of computer science, of how, where's the line where it becomes so immersive
that you don't want to leave that world? Yeah, or that you don't realize while you're in it
that it is a virtual world. Yeah, those are two actually questions. Yours is the more sort of
the good question about the realism. But mine, from my perspective, what's interesting is it
doesn't have to be real, but how can we construct a world that we wouldn't want to leave? Yeah,
I mean, I think that might be too low a bar. I mean, if you think, say when people first had
the pong or something like that, I'm sure there were people who wanted to keep playing it for a
long time because it was fun and they wanted to be in this little world. I'm not sure we would
say it's immersive. I mean, I guess in some sense it is, but like an absorbing activity doesn't even
have to be. But they left that world though. So like, I think that bar is deceivingly high.
So you can play Pong or StarCraft or whatever more sophisticated games for hours, for months; it could be a big addiction, but eventually they escape that. So I mean, when it's absorbing enough that you would spend your entire, you would choose to spend your entire life in there. And then thereby changing the concept of
what reality is, because your reality becomes the game, not because you're fooled, but because
you've made that choice. Yeah, I mean, different people might have different preferences regarding
that. Some might, even if you had any perfect virtual reality, might still prefer not to
spend the rest of their lives there. I mean, in philosophy, there's this experience machine,
thought experiment. Have you come across this? So Robert Nozick had this thought experiment
where you imagine some crazy, super duper neuroscientists of the future have created a machine
that could give you any experience you want if you step in there. And for the rest of your life,
you can kind of pre-program it in different ways. So your fondest dreams could come true.
You could, whatever you dream, you want to be a great artist, a great lover, like have a wonderful
life, all of these things, if you step into the experience machine, will be your experiences,
constantly happy. But would you kind of disconnect from the rest of reality and you would float
there in a tank? And so Nozick thought that most people would choose not to enter the experience
machine. I mean, many might want to go there for a holiday, but they wouldn't want to check out
of existence permanently. And so he thought that was an argument against certain views of value,
according to which what we value is a function of what we experience. Because in the experience machine, you could have any experience you want, and yet many people would think that would not be of much value. So therefore, what we value depends on other things than what we experience. So
Okay, can you take that argument further? What about the fact that maybe what we value is the
up and down of life? So you could have up and downs in the experience machine, right? But what
can't you have in the experience machine? Well, I mean, that then becomes an interesting question
to explore. But for example, real connection with other people, if the experience machine
is a solo machine, where it's only you, like that's something you wouldn't have there. You would have this subjective experience of what would be like fake people. But if you gave somebody flowers, there wouldn't be anybody there who actually got happy, it would just be a little simulation
of somebody smiling. But the simulation would not be the kind of simulation I'm talking about
in the simulation argument where the simulated creature is conscious, it would just be a kind
of smiley face that would look perfectly real to you. So we're now drawing a distinction between
appear to be perfectly real and actually being real. Yeah. So that could be one thing. I mean,
like a big impact on history, maybe it's also something you won't have if you check into this experience machine. So some people might actually feel, the life I want to have for me is one where I have a big positive impact on how history unfolds. So you could kind of explore these different possible explanations for why it is you wouldn't want to go into the experience machine, if that's
what you feel. And one interesting observation regarding this Nozick thought experiment and
the conclusions he wanted to draw from it is, how much is it a kind of status quo effect? So a lot of people might not want to give up on current reality to plug into this dream machine. But
if they instead were told, well, what you've experienced up to this point was a dream.
Now, do you want to disconnect from this and enter the real world when you have no idea maybe
what the real world is? Or maybe you could say, well, you're actually a farmer in Peru growing,
you know, peanuts, and you could live for the rest of your life in this. Or would you want to
continue your dream life as Lex Fridman, going around the world, making podcasts and doing research?
If the status quo was that they were actually in the experience machine, I think a lot of people
might then prefer to live the life that they are familiar with rather than sort of bail out into something else. So it's essentially the change itself, the leap. Yeah. So it might not be so much the reality itself
that we're after, but it's more that we are maybe involved in certain projects and relationships.
And we have, you know, a self identity and these things that our values are kind of connected
with carrying that forward. And then whether it's inside a tank or outside a tank in Peru,
or whether inside a computer, outside a computer, that's kind of less important to what we ultimately
care about. Yeah. So just to linger on it, it is interesting. I find maybe people are different,
but I find myself quite willing to take the leap to the farmer in Peru, especially as the
virtual reality system become more realistic. I find that possibility and I think more people
would take that leap. But so in this thought experiment, just to make sure we are understanding,
so in this case, the farmer in Peru would not be a virtual reality. That would be the real
your life before this whole experience machine started. Well, I kind of assumed from that
description, you're being very specific, but that kind of idea just like washes away the
concept of what's real. I mean, I'm still a little hesitant about your kind of distinction between
real and illusion. Because when you can have an illusion that feels, I mean, that looks real,
I don't know how you can definitively say something is real or not. What's a good way to
prove that something is real in that context? Well, so I guess in this case, it's more a
separation. In one case, you're floating in a tank with these wires by the super-duper neuroscientists plugging into your head, giving you Lex Fridman experiences. In the other, you're actually
tilling the soil in Peru, growing peanuts, and then those peanuts are being eaten by other
people all around the world who buy the exports. And that's two different possible situations
in the one and the same real world that you could choose to occupy. But just to be clear,
when you're in a vat with the wires and the neuroscientists, you can still go farming in Peru,
right? No, well, you could, if you wanted to, you could have the experience of farming in Peru.
But there wouldn't actually be any peanuts grown. Well, what makes a peanut? So a peanut could
be grown and you could feed things with that peanut. And why can't all of that be done in a
simulation? I hope, first of all, that they actually have peanut farms in Peru. I guess we'll get a lot of comments on this from angry listeners: I was with you up until the point when you started talking about that. You should know you can't grow peanuts in that climate.
No, I mean, I think, I mean, in the simulation, I think there is a sense, the important sense,
in which it would all be real. Nevertheless, there is a distinction between inside a simulation
and outside a simulation, or in the case of Nozick's thought experiment, whether you're in the vat
or outside the vat. And some of those differences may or may not be important. I mean, that comes
down to your values and preferences. So if the experience machine only gives you the experience of growing peanuts, but you're the only one in the experience machine... You can, within the experience machine, have others plug in? Well, there are versions of the experience machine. So in fact, you might want to think of these as different thought experiments, different versions of it. So in the original thought experiment, maybe it's only you, right?
Just you. And you think, I wouldn't want to go in there. Well, that tells you something
interesting about what you value and what you care about. Then you could say, well, what if
you add the fact that there would be other people in there and you would interact with them? Well,
it starts to make it more attractive, right? Then you could add in, well, what if you could also
have important long term effects on human history and the world and you could actually do something
useful, even though you were in there, that makes it maybe even more attractive, like you could
actually have a life that had a purpose and consequences. And so as you sort of add more
into it, it becomes more similar to the baseline reality that you were comparing it to.
Yeah, but I just think inside the experience machine, and without taking those steps you just
mentioned, you still have an impact on long term history of the creatures that live inside that,
of the quote unquote, fake creatures that live inside that experience machine. And that,
at a certain point, if there's a person waiting for you inside that experience machine, maybe
your newly found wife, and she has desires, she has fears, she has hopes, and she exists in that machine.
When you unplug yourself and plug back in, she's still there going on about her life.
Well, in that case, yeah, she starts to have more of an independent existence.
An independent existence. But it depends, I think, on how she's
implemented in the experience machine. Take one limit case where all she is is a static picture
on the wall, a photograph. So you think, well, I can look at her, right? But that's it. There's no... Then you think, well, it doesn't really matter much what happens to that, any more than to a normal photograph. If you tear it up, right, it means you can't see it anymore, but you haven't harmed the person whose picture you tore up. But if she's actually implemented, say, at
a neural level of detail, so that she's a fully realized digital mind with the same behavioral
repertoire as you have, then very possibly she would be a conscious person like you are.
And then what you do in this experience machine would have real consequences for how this other
mind felt. So you have to specify which of these experience machines you're talking about. I think
it's not entirely obvious that it would be possible to have an experience machine that gave you
a normal set of human experiences, which include experiences of interacting with other people,
without that also generating consciousnesses corresponding to those other people. That is,
if you create another entity that you perceive and interact with, that to you looks entirely
realistic. Not just when you say hello, they say hello back, but you have a rich interaction
many days, deep conversations. Like it might be that the only possible way of implementing that would be one that also, as a side effect, instantiated this other person in enough detail that you would have a second consciousness there. I think that's to some extent an open
question. So you don't think it's possible to fake consciousness and fake intelligence?
Well, it might be. I mean, I think you could certainly fake, if you have a very limited
interaction with somebody, you could certainly fake that. If all you have to go on is somebody
said hello to you, that's not enough for you to tell whether that was a real person there
or a pre-recorded message or like a very superficial simulation that has no consciousness.
Because that's something easy to fake. We could already fake it now; you can make a voice recording. But if you have a richer set of interactions where you're allowed to ask open-ended questions and probe from different angles, where you couldn't give canned answers to all
of the possible ways that you could probe it, then it starts to become more plausible that the only
way to realize this thing in such a way that you would get the right answer from any which angle
you probe it, would be a way of instantiating it where you also instantiated a conscious mind.
Yeah, I'm with you on the intelligence part, but there's something about me that says consciousness
is easier to fake. Like, I've recently gotten my hands on a lot of Roombas. Don't ask me why or how. And I've made them, it's just a nice robotic mobile platform for experiments, I made them scream and/or moan in pain and so on, just to see how I respond to them. And it's just a sort of psychological experiment on myself. And I think they appear conscious to me pretty quickly.
Like to me, at least my brain can be tricked quite easily. So if I introspect, it's harder for me to
be tricked that something is intelligent. So I just have this feeling that inside this experience
machine, just saying that you're conscious and having certain qualities of the interaction,
like being able to suffer, like being able to hurt, like being able to wonder about the essence
of your own existence, not actually wondering, I mean, you know, creating the illusion that you're wondering about it is enough to create the feeling of consciousness and the illusion of consciousness.
The illusion of consciousness. And because of that, create a really immersive experience
to where you feel like that is the real world. So you think there's a big gap between
appearing conscious and being conscious? Or is it that you think it's very easy to be conscious?
I'm not actually sure what it means to be conscious. All I'm saying is the illusion of
consciousness is enough for this to create a social interaction that's as good as if the thing was
conscious, meaning I'm making it about myself. Right. Yeah. I mean, I guess there are a few
difficulties. One is how good the interaction is, which might, I mean, if you don't really care about
like probing hard for whether the thing is conscious, maybe it would be a satisfactory
interaction, whether or not you really thought it was conscious. Now, if you really care about it
being conscious inside this experience machine, how easy would it be to fake it? And you say
it sounds fairly easy. But then the question is, would that also mean it's very easy to instantiate consciousness? Like, maybe consciousness is much more widely spread in the world than we have thought; it doesn't require a big human brain with 100 billion neurons; all you need is some system that exhibits basic intentionality and can respond, and you already have consciousness. Like, in that case, I guess you still have a close coupling. I guess the case where they can come apart would be where you could create the appearance of there being a conscious mind without there actually being another conscious mind. I'm somewhat agnostic exactly where these lines go. I think one
observation that makes it plausible that you could have very realistic appearances
relatively simply, which also is relevant for the simulation argument, in terms of thinking about how realistic a virtual reality model would have to be in order for the simulated creature not to notice that anything was awry. Well, just think of our own humble brains during the wee hours of the night when we are dreaming. Many times, well, dreams are very immersive, but often you also don't realize that you're in a dream. And that's produced by simple, primitive
three pound lumps of neural matter effortlessly. So if a simple brain like this can create the
virtual reality that seems pretty real to us, then how much easier would it be for a super
intelligent civilization with planetary sized computers optimized over the eons
to create a realistic environment for you to interact with?
Yeah. And by the way, behind that intuition is that our brain is not that impressive relative
to the possibilities of what technology could bring. It's also possible that the brain is the epitome, is the ceiling. The ceiling? How is that possible? Meaning this is the smartest possible thing that the universe could create. That seems unlikely to me. Yeah. I mean, for some of
these reasons we alluded to earlier in terms of the designs we already have for computers that would
be faster by many orders of magnitude than the human brain. Yeah. But it could be that the
constraints, the cognitive constraints in themselves, are what enable the intelligence.
So the more powerful you make the computer, the less likely it is to become super intelligent.
This is where I say dumb things to push back on. Yeah. I'm not sure I follow.
No. I mean, so there are different dimensions of intelligence. A simple one is just speed.
Like, if you could solve the same challenge faster in some sense, you're smarter. So there,
I think we have very strong evidence for thinking that you could have a computer in this universe
that would be much faster than the human brain and therefore have speed super-intelligence,
like be completely superior, maybe a million times faster. Then maybe there are other ways
in which you could be smarter as well, maybe more qualitative ways. The concepts are a little
bit less clear-cut. So it's harder to make a very crisp, neat, firmly logical argument for why there could be qualitative superintelligence as opposed to just things that were faster.
Although I still think it's very plausible and for various reasons that are less than
watertight arguments. But when you can sort of, for example, if you look at animals and
brain size and even within humans, there seems to be Einstein versus random person. It's not
just that Einstein was a little bit faster. But how long would it take a normal person to
invent general relativity? It's not 20% longer than it took Einstein or something like that.
It's like, I don't know whether they would do it at all, or it would take millions of years, or something totally bizarre. But your intuition is that the compute size will get you there.
So increasing the size of the computer and the speed of the computer might create some
much more powerful levels of intelligence that would enable some of the things we've been talking
about with the simulation, being able to simulate an ultra-realistic environment,
ultra-realistic perception of reality. Yeah. Strictly speaking, it would not be
necessary to have super-intelligence in order to have, say, the technology to make these
simulations, ancestor simulations or other kinds of simulations.
As a matter of fact, I think if we are in a simulation, it would most likely be one built
by a civilization that had super-intelligence. It certainly would help. I mean, you could
build more efficient, larger-scale structures if you had super-intelligence. I also think that if
you had the technology to build these simulations, that's a very advanced technology. It seems kind
of easier to get the technology to superintelligence. So I'd expect, by the time they could make these fully realistic simulations of human history with human brains in there, before they got to that stage, they would have figured out how to create machine superintelligence, or maybe biological enhancements of their own brains, if they were biological creatures to start with.
So we talked about the three parts of the simulation argument. One, we destroy ourselves
before we ever create the simulation. Two, everybody somehow loses interest in creating simulations. And three, we're living in a simulation. So you've kind of, I don't know if
your thinking has evolved on this point, but you kind of said that we know so little that these
three cases might as well be equally probable. So probabilistically speaking, where do you stand
on this? Yeah, I mean, I don't think equal necessarily would be the most supported probability
assignment. So how would you, without assigning actual numbers, what's more or less likely in
your view? Well, I mean, I've historically tended to punt on the question of between these three.
So maybe another way to ask me is, which kinds of things would make each of these more or less likely? What kind of things? I mean, certainly in general terms, if you think anything that, say, increases or reduces the probability of one of these, it would tend to slosh probability around onto the others. So if one becomes less probable, the others would have to become more probable, because it's going to add up to
one. So if we consider the first hypothesis, the first alternative that there's this
filter that makes it so that virtually no civilization reaches technical maturity.
In particular, our own civilization, if that's true, then it's like very unlikely that we would
reach technical maturity, just because if almost no civilization at our stage does it, then it's
unlikely that we do it. Sorry, can you linger on that for a second? Well, if it's the case that almost all civilizations at our current stage of technological development fail to reach maturity, that would give us very strong reason for thinking we will fail to reach technological maturity. Oh, and also sort of the flip side of that is the fact
that we've reached it means that many other civilizations have reached this point. Yeah,
so that means if we get closer and closer to actually reaching technical maturity,
there's less and less distance left where we could go extinct before we are there.
And therefore the probability that we will reach increases as we get closer,
and that would make it less likely to be true that almost all civilizations
at our current stage failed to get there. Like, we would have this one case, ourselves, that got very close to getting there. That would be strong evidence that it's not so hard to get to technological maturity. So to the extent that we feel we are moving nearer to technological
maturity, that would tend to reduce the probability of the first alternative and increase the probability
of the other two. It doesn't need to be a monotonic change. Like if every once in a while some new
threat comes into view, some bad new thing you could do with some novel technology, for example,
you know, that could change our probabilities in the other direction. But that technology,
again, you have to think of it as a technology that has to be able to affect, equally, in an even way, every civilization out there. Yeah, pretty much. I mean, strictly speaking, it's not true. I mean, there could be two different existential risks, and every civilization, you know, succumbs to one or the other, but none of them kills more than 50%. But incidentally, some of my other work, I mean on machine superintelligence, concerns some existential risks related to sort of superintelligent AI and how we must make sure to handle that wisely and carefully. That's not the right kind of existential catastrophe to make the first alternative true, though. Like, it might be bad for
us if the future lost a lot of value as a result of it being shaped by some process that optimized
for some completely non-human value. But even if we got killed by machine superintelligence, that machine superintelligence might still attain technological maturity. I see. So you're not, you're not human-exclusive. This could be any intelligent species that achieves it; it's all about the technological maturity. It's not that the humans have to attain it. Right. So, like, the superintelligence that replaced us. And that's just as well for the simulation argument. Yeah,
yeah, I mean, it could interact with the second hypothesis alternative. Like if the thing that
replaced us was either more likely or less likely than we would be to have an interest in creating ancestor simulations. You know, that could affect the probabilities. But yeah, to a first order,
like if we all just die, then yeah, we won't produce any simulations because we are dead. But if we
all die and get replaced by some other intelligent thing that then gets to technical maturity,
the question remains, of course, might not that thing then use some of its resources to do this stuff? So can you reason about this stuff? Given how little we know about the universe, is it reasonable to reason about these probabilities? So, like, how little? Well, maybe you can disagree, but to me, it's not trivial to figure out how difficult it is to build a simulation. We kind of talked about it a little bit. We also don't know,
like as we try to start building it, like start creating virtual worlds and so on,
how that changes the fabric of society. Like there's all these things along the way that can
fundamentally change just so many aspects of our society about our existence that we don't know
anything about. Like the kinds of things we might discover when we understand to a greater degree the fundamental physics, like, if we have a breakthrough and have a theory of everything, how that changes, how that changes deep space exploration and so on. So is it still
possible to reason about probabilities given how little we know?
Yes, I think there will be a large residual of uncertainty that we'll just have to acknowledge.
And I think that's true for most of these big picture questions that we might wonder about.
It's just we are small, short-lived, small-brained, cognitively very limited humans with little
evidence, and it's amazing we can figure out as much as we can, really, about the cosmos.
But okay, so there's this cognitive trick that seems to happen when I look at the simulation
argument, which for me it seems like case one and two feel unlikely. I want to say feel unlikely
as opposed to, sort of, it's not like I have much scientific evidence to say that either one or two are not true. It just seems unlikely that every single civilization destroys itself, and it feels unlikely that civilizations lose interest. So naturally, without necessarily explicitly doing it, the simulation argument basically says
it's very likely we're living in a simulation. To me, my mind naturally goes there. I think
the mind goes there for a lot of people. Is that the incorrect place for it to go?
Well, not necessarily. I think the second alternative which has to do with the
motivations and interests of technologically mature civilizations. I think there is much we
don't understand about that. Yeah, can you talk about that a little bit? What do you think? I mean
this question that pops up when you build an AGI system or build a general intelligence or
how does that change your motivations? Do you think it'll fundamentally transform our motivations?
Well, it doesn't seem that implausible that once you take this leap to technological maturity... I mean, I think it involves creating machine superintelligence, possibly, that would be sort of on the path for basically all civilizations, maybe before they are able to create large numbers of ancestor simulations. That possibly could be one of these things that quite radically changes the orientation of what a civilization is in fact optimizing for. There are other things
as well. So at the moment, we have not perfect control over our own being, our own mental states,
our own experiences are not under our direct control. So for example, if you want to experience
a pleasure and happiness, you might have to do a whole host of things in the external world to
try to get into the stage, into the mental state where you experience pleasure. Like when people
get some pleasure from eating great food, well, they can't just turn that on. They have to kind
of actually go to a nice restaurant and then they have to make money. So there's like all this kind
of activity that maybe arises from the fact that we are trying to ultimately produce mental states,
but the only way to do that is by a whole host of complicated activities in the external world.
Now, at some level of technological development, I think we'll become autopotent in the sense of
gaining direct ability to choose our own internal configuration and enough knowledge and insight
to be able to actually do that in a meaningful way. So then it could turn out that there are a lot of
instrumental goals that would drop out of the picture and be replaced by other instrumental
goals because we could now serve some of these final goals in more direct ways. And who knows
how all of that shakes out after civilizations reflect on that and converge on different attractors, and so on and so forth. And there could be new instrumental considerations that come into view as well that we are just oblivious to, that would maybe have a
strong shaping effect on actions, like very strong reasons to do something or not to do
something. And we just don't realize they are there because we are so dumb, bumbling through
the universe. But if, almost inevitably en route to attaining the ability to create many ancestor simulations, you do have this cognitive enhancement, or advice from superintelligences, or you yourself, then maybe there's this additional set of considerations coming into view. And it's just obvious that the thing that makes sense is to do X, whereas right now it seems you could do X, Y, or Z, and different people will do different things, and
we are kind of random in that sense. Yeah, because at this time with our limited technology,
the impact of our decisions is minor. I mean, that's starting to change in some ways, but
well, I'm not sure how it follows that the impact of our decisions is minor.
Well, it's starting to change. I mean, I suppose 100 years ago it was minor. It's starting to... Well, it depends on how you view it. But people 100 years ago did still have effects on the world today. Oh, I see, as a civilization, in the togetherness. Yeah, so it might be that
the greatest impact of individuals is not at technological maturity or very far down. It might
be earlier on when there are different tracks, civilization could go down. I mean, maybe the
population is smaller. Things still haven't settled out. If you count indirect effects,
those could be bigger than the direct effects that people have later on.
So part three of the argument says that, so that leads us to a place where
eventually somebody creates a simulation. I think you had a conversation with Joe Rogan.
I think there's some aspect here where you got stuck a little bit. How does that lead to
we're likely living in a simulation? So this kind of probability argument, if somebody eventually creates a simulation, why does that mean that we're now in a simulation? What you get to, if you accept alternative three first, is that there would be more simulated people with our kinds of experiences than non-simulated ones. If you look at the world as a whole, by the end of time, as it were, you just count it up, there would be more simulated
ones than non-simulated ones. Then there is an extra step to get from that. If you assume that,
suppose for the sake of the argument that that's true, how do you get from that to the statement
we are probably in a simulation? So here you're introducing an indexical statement like it's
that this person right now is in a simulation. There are all these other people that are in
simulations and some that are not in the simulation. But what probability should you have that you yourself are one of the simulated ones, in this setup? So I call it the bland principle of
indifference, which is that in cases like this, when you have two sets of observers,
one of which is much larger than the other, and you can't from any internal evidence you have
tell which set you belong to, you should assign a probability that's proportional to the size
of these sets so that if there are 10 times more simulated people with your kinds of experiences,
you would be 10 times more likely to be one of those. Is that as intuitive as it sounds?
I mean, that seems kind of... if you don't have enough information, you should rationally just assign probability proportional to the size of the set. It seems pretty plausible to me.
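(A minimal sketch of the bland principle of indifference as just stated, with arbitrary placeholder counts rather than estimates of anything:)

```python
# Bland principle of indifference: with no internal evidence telling you which
# class of observers you belong to, assign credence proportional to class size.
# The observer counts below are arbitrary placeholders for illustration.

def credence_simulated(n_simulated: int, n_non_simulated: int) -> float:
    """Probability of being one of the simulated observers."""
    return n_simulated / (n_simulated + n_non_simulated)

# Ten simulated observers with your kind of experience for every non-simulated one:
print(credence_simulated(n_simulated=10, n_non_simulated=1))  # ~0.909, i.e. 10x more likely
```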
Where are the holes in this? Is it at the very beginning, the assumption that everything stretches
sort of you have infinite time essentially? You don't need infinite time.
You just need... well, how long do you need? However long it takes, I guess, for a universe to produce an intelligent civilization that then attains the technology to run some ancestor simulations. Got you. And once the first simulation is created, within a stretch of time just a little longer than that, they'll all start creating simulations, kind of like the order matters. Well, I mean, it might be different. If you think of there being a lot of different
planets and some subset of them have life and then some subset of those get intelligent life
and some of those maybe eventually start creating simulations, they might get started at quite
different times. Like maybe on some planet, it takes a billion years longer before you get
like monkeys, or before you even get bacteria, than on another planet. So this might happen
kind of at different cosmological epochs. Is there a connection here to the doomsday
argument and that sampling there? Yeah, there is a connection in that they both
involve an application of anthropic reasoning that is reasoning about these kind of indexical
propositions. But the assumption you need in the case of the simulation argument
is much weaker than the assumption you need to make the doomsday argument go through.
What is the doomsday argument? And maybe you can speak to the anthropic reasoning in more
generality. Yeah, that's a big and interesting topic in its own right, anthropics. But the
doomsday argument was, I think, first discovered by Brandon Carter, who was a theoretical physicist,
and then developed by the philosopher John Leslie. I think it might have been discovered initially
in the 70s or 80s, and Leslie wrote this book, I think, in 96. And there are some other versions
as well, by Richard Gott, who is a physicist, but let's focus on the Carter-Leslie version, where
it's an argument that we have systematically underestimated the probability that
humanity will go extinct soon. Now, I should say most people probably
think at the end of the day, there is something wrong with this doomsday argument that it doesn't
really hold. It's like there's something wrong with it. But it's proved hard to say exactly what
is wrong with it. And different people have different accounts. My own view is it seems
inconclusive. But I can say what the argument is. Yeah, that would be great. Yeah, so maybe
it's easy to explain via an analogy to sampling from urns. So imagine you have a big, imagine you
have two urns in front of you, and they have balls in them that have numbers. The two urns
look the same, but inside one, there are 10 balls. Ball number one, two, three, up to ball number 10.
And then in the other urn, you have a million balls numbered one to a million. And now somebody
puts one of these urns in front of you and asks you to guess what's the chance it's the 10 ball
urn. And you say, well, 50-50, I can't tell which urn it is. But then you're allowed to
reach in and pick a ball at random from the urn. And suppose you find that it's ball number
seven. So that's strong evidence for the 10 ball hypothesis. Like it's a lot more likely that you
would get such a low numbered ball if there are only 10 balls in the urn. Like it's in fact 10%,
right? Whereas if there are a million balls, it would be very unlikely you would get number seven.
So you perform a Bayesian update. And if your prior was 50-50 that it was the 10 ball urn,
you become virtually certain after finding the random sample was seven that it only has 10
balls in it. So in the case of the urns, this is uncontroversial elementary probability theory.
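A rough sketch of that elementary update, using the 50-50 prior and ball number seven from the example above (the code is just an illustration of the arithmetic):

```python
# Two-urn Bayesian update: equal priors, then observe ball number 7.
prior_ten = 0.5                       # prior that it's the 10-ball urn
prior_million = 0.5                   # prior that it's the million-ball urn

likelihood_ten = 1 / 10               # chance of drawing ball 7 from 10 balls
likelihood_million = 1 / 1_000_000    # chance of drawing ball 7 from a million

posterior_ten = (prior_ten * likelihood_ten) / (
    prior_ten * likelihood_ten + prior_million * likelihood_million
)
print(posterior_ten)  # ~0.99999 -- virtually certain it's the 10-ball urn
```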
The Doomsday argument says that you should reason in a similar way with respect to
different hypotheses about how many balls there will be in the urn of humanity, as it were,
that is, how many humans there will ever be by the time we go extinct. So to simplify, let's suppose we
only consider two hypotheses: either there will be 200 billion humans in total or 200 trillion humans
in total. You could fill in more hypotheses, but it doesn't change the principle here. So
it's easiest to see if we just consider these two. So you start with some prior based on ordinary
empirical ideas about threats to civilization and so forth. And maybe you say it's a 5% chance that
we will go extinct by the time there will have been 200 billion only. You're kind of optimistic,
let's say: you think probably we'll make it through and colonize the universe. But then, according to this
doomsday argument, you should take your own birth rank as a random sample. So your birth rank
is your position in the sequence of all humans that have ever existed. And it turns out you're
about human number 100 billion. That's roughly how many people have been born before you.
That's fascinating because we each have a number. We would each have a number in this. I mean,
obviously, the exact number would depend on where you started counting, like which
ancestors were human enough to count as human. But those are not really important; they're
relatively few. So yeah, you're roughly number 100 billion. Now, if there are only going to be 200
billion in total, that's a perfectly unremarkable number. You're somewhere in the middle,
a run-of-the-mill human, completely unsurprising. Now, if there are going to be 200 trillion,
you would be remarkably early. It's like, what are the chances, out of these 200 trillion humans,
that you should be human number 100 billion? That seems like it would have a much lower conditional
probability. And so, analogously to how in the urn case, you thought after finding this
low number random sample, you updated in favor of the urn having few balls. Similarly, in this
case, you should update in favor of the human species having a lower total number of members.
That is, doom soon. You said doom soon? Yeah. Well, that would be the hypothesis in this case,
that it will end at 200 billion. I just like that term for the hypothesis.
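To make that update concrete, here is a rough sketch using the illustrative numbers from above (a 5% prior on only 200 billion humans ever existing, and a birth rank around 100 billion); the exact figures are for illustration, not part of the argument itself:

```python
# Doomsday-style Bayesian update with the illustrative numbers above.
prior_doom_soon = 0.05    # prior: only 200 billion humans will ever exist
prior_doom_late = 0.95    # prior: 200 trillion humans will ever exist

# Likelihood of having birth rank ~100 billion under each hypothesis,
# treating your rank as a random draw from all humans who will ever live.
lik_soon = 1 / 200e9
lik_late = 1 / 200e12

posterior_doom_soon = (prior_doom_soon * lik_soon) / (
    prior_doom_soon * lik_soon + prior_doom_late * lik_late
)
print(posterior_doom_soon)  # ~0.98 -- the 5% prior jumps to roughly 98%
```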
Yeah. So what it kind of crucially relies on, this doomsday argument, is the idea that
you should reason as if you were a random sample from the set of all humans that will
ever have existed. If you have that assumption, then I think the rest kind of follows.
The question then is why should you make that assumption? In fact, you know you're 100 billion,
so where do you get this prior? And then there is like a literature on that with different
ways of supporting that assumption. That's just one example of anthropic reasoning, right?
Yeah. That seems to be kind of convenient when you think about humanity, when you think about
sort of even like existential threats and so on, as it seems quite natural that you should
assume that you're just an average case. Yeah. That you're kind of a typical, randomly sampled case.
Now, in the case of the doomsday argument, it seems to lead to what intuitively we think is
the wrong conclusion. Or at least many people have this reaction that there's got to be something
fishy about this argument. Because from very, very weak premises, it gets this very striking
implication that we have almost no chance of reaching size 200 trillion humans in the future. And
how could we possibly get there just by reflecting on when we were born? It seems you would need
sophisticated arguments about the impossibility of space colonization, blah, blah. So one might
be tempted to reject this key assumption. I call it the self-sampling assumption: the idea that
you should reason as if you were a random sample from all observers, or observers in some reference class.
However, it turns out that in other domains, it looks like we need something like this self-sampling
assumption to make sense of bona fide scientific inferences. In contemporary cosmology, for example,
you have these multiverse theories. And according to a lot of those, all possible human observations
are made. I mean, if you have a sufficiently large universe, you will have a lot of people
observing all kinds of different things. So if you have two competing theories, say about the value
of some constant, it could be true according to both of these theories that there will be some
observers observing the value that corresponds to the other theory. Because there will be some
observers that have hallucinations. So there's a local fluctuation or a statistically anomalous
measurement. These things will happen. And if enough observers make enough different observations,
there will be some that sort of by chance make these different ones. And so what we would want
to say is, well, many more observers, a larger proportion of the observers will observe as it
were the true value. And a few will observe the wrong value. If we think of ourselves as a random
sample, we should expect with a very high probability to observe the true value. And that
will then allow us to conclude that the evidence we actually have is evidence for the theories we
think are supported. It kind of then is a way of making sense of these inferences that clearly
seem correct, that we can make various observations and infer what the temperature of the cosmic
background is and the fine structure constant and all of this. But it seems that without rolling
in some assumption similar to the self sampling assumption, this inference doesn't go through.
And there are other examples. So there are these scientific contexts where it looks like this
kind of anthropic reasoning is needed and makes perfect sense. And yet, in the case of the doomsday
argument, it has this weird consequence, and people might think there's something wrong with it there.
So there's then this project that would consist in trying to figure out what are the legitimate ways
of reasoning about these indexical facts when observer selection effects are in play. In other
words, developing a theory of anthropics. And there are different ways of looking at that.
And it's a difficult methodological area. But to tie it back to the simulation argument,
the key assumption there, this bland principle of indifference, is much weaker than the self
sampling assumption. So if you think about it, in the case of the doomsday argument, it says you
should reason as if you are a random sample from all humans that will ever have lived, even though in
fact you know that you are about the 100 billionth human and you're alive in the year 2020,
whereas in the case of the simulation argument, it says that, well, if you actually have no way
of telling which one you are, then you should assign this kind of uniform probability.
Yeah, your role as the observer in the simulation argument is different, it seems like.
Who's the observer? I keep assigning the individual consciousness.
Well, there are a lot of observers in the simulation, in the context of the simulation
argument, the relevant observers would be A, the people in original histories and B,
the people in simulations. So this would be the class of observers that we need. I mean,
there are also maybe the simulators, but we can set those aside for this. So the question is,
given that class of observers, a small set of original history observers and the large class
of simulated observers, which one should you think is you? Where are you amongst this set of observers?
I'm maybe having a little bit of trouble wrapping my head around the intricacies of what it means
to be an observer in this, in the different instantiations of the anthropic reasoning cases
that we mentioned. Yeah. I mean, does it have to be... like, the observer? No, I mean,
maybe an easier way of putting it is just like, are you simulated or are you not simulated?
Given this assumption that these two groups of people exist. Yeah, in the simulation case,
it seems pretty straightforward. Yeah. So the key point is the methodological assumption you need
to make to get the simulation argument to where it wants to go is much weaker and less problematic.
than the methodological assumption you need to make to get the doomsday argument to its conclusion.
Maybe the doomsday argument is sound or unsound, but you need to make a much stronger and more
controversial assumption to make it go through. In the case of the doomsday argument, sorry,
the simulation argument, I guess one way to maybe pump the intuition in support of this bland principle
of indifference is to consider a sequence of different cases where the fraction of people
who are simulated to non-simulated approaches one. So in the limiting case, where everybody is
simulated, obviously, you can deduce with certainty that you are simulated. If everybody
with your experiences is simulated, then you know you've got to be one of those.
You don't need a probability at all. You just kind of logically
conclude it. So then, as we move from a case where, say, 90% of everybody is simulated,
to 99%, to 99.9%, it should seem plausible that the probability assigned should approach one,
certainty, as the fraction approaches the case where everybody is in a simulation.
You wouldn't expect that to be a discrete jump. Well, if there's one non-simulated person,
then it's 50-50, but if we remove that person, then it's 100%. It should kind of...
There are other arguments as well that one can use to support this bland principle of indifference,
but that might be nice too. But in general, when you start from time equals zero and go into the
future, the fraction of simulated, if it's possible to create simulated worlds, the fraction of
simulated worlds will go to one. Well, it probably won't go all the way to one. In reality,
there would be some ratio, although maybe a technologically mature civilization could run
a lot of simulations using a small portion of its resources. It probably wouldn't be able to
run infinitely many. I mean, if we take, say, the physics in the observed
universe, if we assume that that's also the physics at the level of the simulators, there would be
a limit to the amount of information processing that any one civilization could perform in its
future trajectory. Right. Well, first of all, there's a limited amount of matter you can get
your hands on, because with a positive cosmological constant, the universe is accelerating,
there is a finite sphere of stuff that you could ever reach, even if you travel at the speed of
light. You have a finite amount of stuff. Then if you think there is a lower limit to the
amount of loss you get when you perform an erasure of a computation, or if you think, for example,
that matter just gradually decays over cosmological timescales, maybe protons decay, other things
radiate out gravitational waves, there's all kinds of seemingly unavoidable losses that
occur. Eventually, we'll have something like a heat death of the universe or a cold death or
whatever. It's fine, but of course, if there's many ancestor simulations,
we don't know which level we are at. Could there be an arbitrary number of simulations that spawned
ours, and do those have more resources, in terms of the physical universe, to work with?
Sorry, what do you mean, that that could be?
Okay. If simulations spawn other simulations, it seems like each new spawn has fewer resources
to work with, but we don't know at which step along the way we are. Any one observer doesn't
know whether we're in level 42 or level 100. Or does that not matter for the resources?
I mean, it's true that there would be uncertainty as to that. You could have stacked simulations,
and there could then be uncertainty as to which level we are at. As you remarked also,
all the computations performed in a simulation within a simulation also have to be expended
at the level of the simulation. The computer in basement reality, where all these simulations
within the simulations within the simulations are taking place, that computer, ultimately,
its CPU or whatever it is, has to power this whole tower. So if there is a finite
compute power in basement reality, that would impose a limit to how tall this tower can be.
And if each level kind of imposes a large extra overhead, you might think maybe the
tower would not be very tall, that most people would be low down in the tower.
I love the term basement reality. Let me ask about one of the popularizers. You said there are many,
when you look at sort of the last few years of the simulation hypothesis; just like you said,
it comes up every once in a while, some new community discovers it and so on. But I would
say one of the biggest popularizers of this idea is Elon Musk. Do you have any kind of intuition
about what Elon thinks about when he thinks about simulation? Why is this of such interest? Is it all
the things we've talked about, or is there some special kind of intuition about simulation that
he has? I mean, you might have a better... I think, I mean, as for why it's of interest, I think it seems
pretty obvious why: to the extent that one thinks the argument is credible, it would be of
interest. If it's correct, it would tell us something very important about the world one way or the other,
whichever of the three alternatives it is; that seems like arguably one of the most
fundamental discoveries. Now, interestingly, in the case of somebody like Elon, so there's
like the standard arguments for why you might want to take the simulation hypothesis seriously,
the simulation argument. In the case that if you're actually Elon Musk, let's say,
there's a kind of an additional reason in that what are the chances you would be Elon Musk?
Like, it seems like maybe there would be more interest in simulating the lives of very unusual
and remarkable people. So if you consider not just simulations where all of human history or the
whole of human civilization are simulated, but also other kinds of simulations which only
include some subset of people, like in those simulations that only include a subset,
it might be more likely that they would include subsets of people with unusually interesting
or consequential lives. If you're Elon Musk, you got to wonder, right?
It's more likely. Or if you are Donald Trump, or if you are Bill Gates, or you're like some
particularly distinctive character, you might think that that adds, I mean, if you just put
yourself into those shoes, right, it's got to be like an extra reason to think. That's kind of...
So interesting. So on a scale of, like, a farmer in Peru to Elon Musk,
the more you get towards Elon Musk, the higher the probability.
You'd imagine there would be some extra boost from that.
There's an extra boost. So he also asked the question of what he would ask an AGI saying,
the question being, what's outside the simulation? Do you think about the answer to this question,
if we are living in a simulation, what is outside the simulation? So the programmer of the simulation?
Yeah. I mean, I think it connects to the question of what's inside the simulation in that,
if you had views about the creators of the simulation, it might help you make predictions
about what kind of simulation it is, what might happen, what happens after the simulation,
if there is some after, but also the kind of setup. So these two questions would be quite
closely intertwined. But do you think it would be very surprising to... Is the stuff inside the
simulation, is it possible for it to be fundamentally different than the stuff outside?
Yeah. Another way to put it, can the creatures inside the simulation be smart enough to even
understand or have the cognitive capabilities or any kind of information processing capabilities
enough to understand the mechanism that's created them?
They might understand some aspects of it. I mean, there are levels of explanation,
like degrees to which you can understand. So does your dog understand what it is to be human?
Well, it's got some idea. Humans are these physical objects that move around and do things. And a
normal human would have a deeper understanding of what it is to be a human. And maybe some very
experienced psychologists or great novelists might understand a little bit more about what it is to
be human. And maybe superintelligence could see right through your soul. So similarly, I do think
that we are quite limited in our ability to understand all of the relevant aspects of the
larger context that we exist in. But there might be hope for some.
I think we understand some aspects of it. But how much good is that if there's one key aspect
that changes the significance of all the other aspects? So we understand maybe seven out of
10 key insights that you need. But the answer actually varies completely, depending on what
the number eight, nine, and ten insights are. It's like whether you want to...
Suppose that the big task were to guess whether a certain number was odd or even,
like a 10-digit number. And if it's even, the best thing for you to do in life is to go north.
And if it's odd, the best thing for you to do is to go south.
Now, we are in a situation where maybe through our science and philosophy, we figured out what
the first seven digits are. So we have a lot of information, right? Most of it we figured out.
But we are clueless about what the last three digits are. So we are still completely clueless
about whether the number is odd or even, and therefore whether we should go north or go south.
That's an analogy, of course, but I feel we're somewhat in that predicament. We know a lot
about the universe. We've come maybe more than half of the way there to kind of fully understanding
it, but the parts we are missing are possibly ones that could completely change the overall
upshot of the thing, including changing our overall view about what the scheme of priorities
should be or which strategic direction would make sense to pursue.
Yeah, I think your analogy of us being the dog, trying to understand human beings is an entertaining
one and probably correct. As the understanding moves from the dog's viewpoint to the human
psychologist's viewpoint, the steps along the way there will be completely transformative ideas
of what it means to be human. So the dog has a very shallow understanding. It's interesting to think
that, to analogize that a dog's understanding of a human being is the same as our current
understanding of the fundamental laws of physics in the universe. Oh, man. Okay, we spent an hour
or 40 minutes talking about the simulation. I like it. Let's talk about superintelligence,
at least for a little bit. And let's start at the basics. What to you is intelligence?
Yeah, I tend not to get too stuck with the definitional question. I mean, the common sense
understanding: like the ability to solve complex problems, to learn from experience, to plan,
to reason, some combination of things like that. Is consciousness mixed up into that, or no?
Well, I don't think... I think it could be fairly intelligent,
at least, without being conscious, probably. So then what is superintelligence? Yeah,
that would be like something that was much more of that, had much more general cognitive capacity
than we humans have. So if we talk about general superintelligence, it would be a much faster learner,
be able to reason much better, make plans that are more effective at achieving its goals,
say in a wide range of complex, challenging environments. In terms of, as we turn our eye
to the idea of existential threats from superintelligence, do you think superintelligence
has to exist in the physical world or can it be digital only? We think of our general intelligence
as us humans, as an intelligence that's associated with the body that's able to interact with the
world, that's able to affect the world directly, physically. I mean, digital only is perfectly
fine, I think. I mean, it's physical in the sense that obviously the computers and the memories are
physical. But is it capable of affecting the world, sort of? It could be very strong, even if it has a
limited set of actuators. If it can type text on the screen or something like that, that would be,
I think, ample. So in terms of the concerns of existential threat of AI, how can an AI system
that's in the digital world have existential risk? What are the attack vectors for a digital system?
Well, I mean, I guess maybe to take one step back. I should emphasize that I also think there's this
huge positive potential from machine intelligence, including superintelligence. I want to stress that
because some of my writing has focused on what can go wrong. When I wrote the book,
Superintelligence, at that point, I felt that there was a kind of neglect of what would happen
if AI succeeds. And in particular, a need to get a more granular understanding of where the pitfalls
are so we can avoid them. I think that since the book came out in 2014, there has been a much
wider recognition of that. And a number of research groups are now actually working on
developing, say, AI alignment techniques and so on and so forth. So I think now it's important to
make sure we bring back onto the table the upside as well.
And there's a little bit of a neglect now on the upside, which is, I mean, if you look at,
talking to a friend, if you look at the amount of information that is available,
or people talking, or people being excited about the positive possibilities of general
intelligence, it's far outnumbered by the negative possibilities in terms of our
public discourse. Possibly, yeah. It's hard to measure.
Can you linger on that for a little bit? What are some, to you, possible big positive impacts of
general intelligence, superintelligence? Well, I mean, super, because I tend to
also want to distinguish these two different contexts of thinking about AI and AI impacts,
the kind of near-term and long-term, if you want, both of which I think are legitimate things to
think about. And people should discuss both of them, but they are different. And they often get
mixed up. And then you get confusion. I think you get, simultaneously, maybe an overhyping
of the near-term and an underhyping of the long-term. And so I think as long as we keep them apart,
we can have two good conversations, or we can mix them together and have one bad conversation.
Can you clarify just the two things we're talking about, the near-term and the long-term?
Yeah. And what are the distinctions? Well, it's a blurry distinction. But say the things
I wrote about in this book, superintelligence, long-term, things people are worrying about
today with, I don't know, algorithmic discrimination, or even things like self-driving cars and drones and
stuff, more near-term. And then, of course, you could imagine some medium term where they kind
of overlap and one evolves into the other. But I don't know. I think both, yeah, the issues look
kind of somewhat different depending on which of these contexts. So I think it would be nice
if we can talk about the long-term and think about a positive impact or a better world because of
the existence of the long-term superintelligence. Do you have views of such a world? Yeah. I guess
it's a little hard to articulate because it seems obvious that the world has a lot of
problems as it currently stands. And it's hard to think of any one of those which
it wouldn't be useful to have a friendly aligned superintelligence working on.
So from health to the economic system, to be able to sort of improve the investment and trade
and foreign policy decisions, all that kind of stuff. All that kind of stuff and a lot more.
I mean, what's the killer app? Well, I don't think there is one. I think AI,
especially artificial general intelligence, is really the ultimate general purpose technology.
So it's not that there is this one problem, this one area where it will have a big impact.
But if and when it succeeds, it will really apply across the board in all fields where
human creativity and intelligence and problem solving is useful, which is pretty much all
fields. The thing that it would do is give us a lot more control over nature. It wouldn't
automatically solve the problems that arise from conflict between humans,
fundamentally political problems. Some subset of those might go away if you just had more
resources and cooler tech, but some subset would require coordination that is not automatically
achieved just by having more technical capability. But anything that's not of that sort, I think
you just get an enormous boost with this kind of cognitive technology once it goes all the way.
Again, that doesn't mean I'm thinking, oh, people don't recognize what's possible with
current technology, and sometimes things get over-hyped. But I mean, those are perfectly
consistent views to hold: the ultimate potential being enormous, and then it's a very different
question of how far we are from that or what we can do with near-term technology.
Yeah. So what's your intuition about the idea of intelligence explosion? So there's this,
you know, when you start to think about that leap from the near-term to the long-term,
the natural inclination, like for me, sort of building machine learning systems today,
it seems like it's a lot of work to get the general intelligence. But there's some intuition
of exponential growth, of exponential improvement, of intelligence explosion. Can you maybe
try to elucidate, to try to talk about what's your intuition about the possibility of an
intelligence explosion? There won't be this gradual slow process. There might be a phase shift.
Yeah. I think it's, we don't know how explosive it will be. I think for what it's worth,
it seems fairly likely to me that at some point there will be some intelligence
explosion, like some period of time where progress in AI becomes extremely rapid,
roughly in the area where you might say it's kind of human-ish equivalent in
core cognitive faculties, though the concept of human equivalence starts to break down when
you look too closely at it. And just how explosive does something have to be for it to
be called an intelligence explosion? Like, does it have to be like overnight, literally,
or a few years? But overall, I guess if you plotted the opinions of different people in
the world, I guess I would put somewhat more probability towards the intelligence explosion
scenario than probably the average AI researcher, I guess.
So, and then the other part of the intelligence explosion, or just forget explosion, just progress,
is once you achieve that gray area of human-level intelligence, is it obvious to you that we
should be able to proceed beyond it to get to superintelligence?
Yeah, that seems, I mean, as much as any of these things can be obvious. Given we've never had one,
and people have different views, smart people have different views, there's some degree of
uncertainty that always remains for any big futuristic philosophical grand question, just because
we realize humans are fallible, especially about these things. But it does seem, as far
as I'm judging things based on my own impressions, it seems very unlikely that there would be a
ceiling at or near human cognitive capacity. And that's such a, I don't know, that's such a
special moment. It's both terrifying and exciting to create a system that's beyond our intelligence.
So, maybe you can step back and say, how does that possibility make you feel that we can create
something? It feels like there's a line beyond which, once it steps over it, it'll be able to outsmart you.
And therefore, it feels like a step where we lose control.
Well, I don't think the latter follows. That is, you could imagine, and in fact, this is what
a number of people are working towards, making sure that we could ultimately
project higher levels of problem-solving ability while still making sure that they are aligned,
like they are in the service of human values. So, losing control, I think, is not a given
that would happen. You asked how it makes me feel. I mean, to some extent, I've lived with this for
so long; since as long as I can remember being an adult or even a teenager, it seemed to me
obvious that at some point, AI will succeed. And so, I actually misspoke. I didn't mean control.
I meant, because the control problem is an interesting thing, and I think the hope is,
at least we should be able to maintain control over systems that are smarter than us,
but we do lose our specialness. We lose our place as the smartest, coolest thing
on earth. And there's an ego involved with that, that humans are very good at dealing with.
I mean, I value my intelligence as a human being. It seems like a big transformative
step to realize there's something out there that's more intelligent. I mean, you don't see that
as such a fundamentally... Well, yeah, I think, yes, a lot. I think it would be small. I mean,
I think there are already a lot of things out there that are... I mean, certainly, if you
think the universe is big, there's going to be other civilizations that already have super
intelligences or that just naturally have brains the size of beach balls and are completely
leaving us in the dust. And we haven't come face to face with this. We haven't come face to face,
but I mean, that's an open question. What would happen in a kind of post-human world,
like how much day-to-day would these super intelligences be involved in the lives of...
You could imagine some scenario where it would be more like a background thing that would help
protect against some things, but they wouldn't be this intrusive kind, making you feel bad
by making clever jokes at your expense. There's like all sorts of things that maybe in
the human context we would feel awkward about. You don't want to be the dumbest kid in your class
that everybody picks on. A lot of those things maybe you need to abstract away from if you're thinking
about this context where we have infrastructure that is in some sense beyond any or all humans.
I mean, it's a little bit like, say, the scientific community as a whole, if you think of that as a
mind. It's a little bit of a metaphor, but I mean, obviously it's got to be way more
capacious than any individual. So in some sense, there is this mind-like thing already out there
that's just vastly more intelligent than any individual is, and we think, okay, that's...
You just accept that as a fact. That's the basic fabric of our existence,
is the superintelligent. Yeah, you get used to a lot of... I mean, there's already Google
and Twitter and Facebook, these recommender systems that are the basic fabric of our...
I could see them becoming... I mean, do you think of the collective intelligence of these
systems as already perhaps reaching superintelligence level? Well, I mean, so here it
comes to the concept of intelligence and the scale and what human level means. The kind of vagueness
and indeterminacy of those concepts starts to dominate how we would answer that question.
So, say, the Google search engine has a very high capacity of a certain kind,
like remembering and retrieving information, particularly like text or images that are...
You have a kind of string, a word string key, obviously superhuman at that, but a vast set
of other things it can't even do at all, not just not do well, but... So you have these current AI
systems that are superhuman in some limited domain and then radically subhuman in all other
domains. Same with a chess engine, or a simple computer that can multiply really large numbers, right?
So it's going to have this one spike of superintelligence and then a kind of a zero level
of capability across all other cognitive fields. Yeah, I don't necessarily think the generalness...
I mean, I'm not so attached to it, but I could sort of... It's a gray area and it's a feeling,
but to me, sort of, AlphaZero is somehow much more intelligent, much, much more intelligent
than Deep Blue. And to say which domain... Well, you could say, well, these are both just board
games. They're both just able to play board games; who cares if they do it better or not?
But there's something about the learning and the self-play that makes it...
It crosses over into that land of intelligence that doesn't necessarily need to be general.
And in the same way, Google is much closer to Deep Blue currently, in terms of its search engine, than it
is to sort of AlphaZero. And the moment it becomes... And the moment these recommender systems
really become more like AlphaZero, being able to learn a lot without being heavily
constrained by human interaction, that seems like a special moment in time.
I mean, certainly learning ability seems to be an important facet of general intelligence,
that you can take some new domain that you haven't seen before and you weren't specifically
pre-programmed for and then figure out what's going on there and eventually become really good
at it. So that's something AlphaZero has much more of than Deep Blue had. And in fact, I mean,
systems like AlphaZero can learn not just Go, but other games; in fact, it would probably beat Deep Blue in chess
and so forth, right? So you do see this generality, and it matches the intuition. We feel it's more
intelligent, and it also has more of this general purpose learning ability. And if we get systems
that have even more general purpose learning ability, it might also trigger an even stronger
intuition that they're actually starting to get smart. So if you were to pick a future,
what do you think a utopia looks like with AGI systems? Is it the Neuralink brain-computer
interface world where we're kind of really closely interlinked with AI systems? Is it possibly where
AGI systems replace us completely while maintaining the values and the consciousness?
Is it something like it's a completely invisible fabric? Like you mentioned a society where it
just aids in a lot of stuff that we do like curing diseases and so on. What is the utopia if you get
to pick? Yeah, I mean, it is a good question and a deep and difficult one. I'm quite interested
in it. I don't have all the answers yet, and might never have. But I think there are some
different observations one can make. One is if this scenario actually did come to pass,
it would open up this vast space of possible modes of being. On one hand, material and resource
constraints would just be expanded dramatically. So there would be a lot more, a bigger pie,
let's say. Also, it would enable us to do things, including to ourselves, and things like that.
It would just open up this much larger design space and option space than we have ever had
access to in human history. So I think two things follow from that. One is that we probably would
need to make a fairly fundamental rethink of what ultimately we value. Think things through more
from first principles. The context would be so different from the familiar that we couldn't just
take what we've always been doing and then add, like, oh, well, we have this cleaning robot that
cleans the dishes in the sink, and a few other small things. I think we would have to go back
to first principles. So even from the individual level, go back to the first principles of what
is the meaning of life, what is happiness, what is fulfillment.
Yeah. And then also connected to this large space of resources is that it
would be possible and I think something we should aim for is to do well by the lights
of more than one value system. That is, we wouldn't have to choose only one value criterion and say,
we're going to do something that scores really high on the metric of, say, hedonism, and then
is like a zero by other criteria, like kind of wireheaded brains in a vat. And it's like a lot
of pleasure. That's good. But then like, no beauty, no achievement like that. I think to some
significant, not unlimited, but significant sense, it would be possible to do
very well by many criteria. Like maybe you could get like 98% of the best according to
several criteria at the same time, given this great expansion of the option space.
So have competing value systems, competing criteria, sort of forever, just like our
Democrat versus Republican; there always seem to be multiple parties that are useful
for our progress in society, even though it might seem dysfunctional inside the moment, but
having the multiple value systems seems to be beneficial for, I guess, a balance of power.
So that's, yeah, not exactly what I have in mind, although maybe in an
indirect way it is. But if you had the chance to do something that scored well
on several different metrics, our first instinct should be to do that, rather than
immediately leaping to the question of which of these value systems we are going to screw over.
Let's first try to do very well by all of them. Then it might be that you can't get 100% of all
and you would have to then have the hard conversation about which one will only get 97%.
There you go. There's my cynicism that all of existence is always a trade-off. But you say,
maybe it's not such a bad trade-off. Let's first at least try that.
Well, this would be a distinctive context in which at least some of the constraints would be
removed. There would probably still be some trade-offs in the end. It's just that we should first make
sure we at least take advantage of this abundance. So in terms of thinking about this, one should
think in this frame of mind of generosity and inclusiveness to different value systems and
see how far one can get there first. I think one could do something that would be very good
according to many different criteria. We talked about AGI fundamentally transforming
the value system of our existence, the meaning of life. But today, what do you think is the
meaning of life? The silliest or perhaps the biggest question? What's the meaning of life?
What's the meaning of existence? What gives your life fulfillment, purpose, happiness, meaning?
Yeah, I think these are, I guess, a bunch of different but related questions in there that
one can ask. Happiness, meaning. Yeah. I mean, you could imagine somebody getting a lot of happiness
from something that they didn't think was meaningful. Like mindlessly watching reruns of
some television series, eating junk food; maybe for some people that gives pleasure, but they wouldn't
think it had a lot of meaning. Whereas, conversely, something that might be quite loaded with meaning
might not be very fun always. Like some difficult achievement that really helps a lot of people,
maybe requires self-sacrifice and hard work. And so these things can, I think, come apart,
which is something to bear in mind. Also, when you're thinking about these utopia questions, if
you actually start to do some constructive thinking about that, you might have to isolate
and distinguish these different kinds of things that might be valuable in different ways.
Make sure you can sort of clearly perceive each one of them, and then you can think about how you
can combine them. And just as you said, hopefully come up with a way to maximize all of them together.
Yeah, or at least get, I mean, maximize or get like a very high score on a wide range of them,
even if not literally all. You can always come up with values that are exactly opposed to one
another, right? But I think for many values, they're only kind of opposed if you place them
within a certain dimensionality of your space, like there are shapes that you can't
untangle in a given dimensionality. But if you start adding dimensions, then it might in
many cases just be that they are easy to pull apart, and you could. So we'll see how much space
there is for that. But I think that there could be a lot in this context of radical abundance,
if ever we get to that. I don't think there's a better way to end it, Nick. You've influenced a
huge number of people to work on what could very well be the most important problems of our time.
So it's a huge honor. Thank you so much for talking to me. Well, thank you for coming by,
Lex. That was fun. Thank you. Thanks for listening to this conversation with Nick Bostrom. And thank
you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading
Cash App and using code lexpodcast. If you enjoy this podcast, subscribe on YouTube,
review it with five stars on Apple podcast, support on Patreon, or simply connect with me
on Twitter at Lex Fridman. And now let me leave you with some words from Nick Bostrom.
Our approach to existential risks cannot be one of trial and error. There's no opportunity to learn
from errors. The reactive approach, see what happens, limit damages, and learn from experience
is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate
new types of threats and a willingness to take decisive preventative action and to bear the
costs, moral and economic, of such actions. Thank you for listening, and hope to see you next time.