The following is a conversation with Joscha Bach, VP of Research at the AI Foundation,
with a history of research positions at MIT and Harvard. Joscha is one of the most unique
and brilliant people in the artificial intelligence community, exploring the workings of the human
mind, intelligence, consciousness, life on Earth, and the possibly simulated fabric of our universe.
I can see myself talking to Joscha many times in the future.
Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting
the podcast by signing up at expressvpn.com slash lexpod and downloading Cash App and using code LEX
podcast. This is the artificial intelligence podcast. If you enjoy it, subscribe on YouTube,
review it with 5 stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter
at Lex Fridman. Since this comes up more often than I ever would have imagined, I challenge you
to try to figure out how to spell my last name without using the letter E, and it'll probably be
the correct way. As usual, I'll do a few minutes of ads now and never any ads in the middle that
can break the flow of the conversation. This show is sponsored by ExpressVPN. Get it at expressvpn.com
slash lexpod to support this podcast and to get an extra three months free on a one year package.
I've been using ExpressVPN for many years. I love it. I think ExpressVPN is the best VPN out there.
They told me to say it, but I think it actually happens to be true. It doesn't log your data,
it's crazy fast, and it's easy to use. Literally, just one big power on button.
Again, for obvious reasons, it's really important that they don't log your data.
It works on Linux and everywhere else too. Shout out to my favorite flavor of Linux,
Ubuntu MATE 20.04. Once again, get it at expressvpn.com slash lexpod to support this podcast and to get
an extra three months free on a one year package. This show is presented by Cash App, the number
one finance app in the App Store. When you get it, use code lexpodcast. Cash App lets you send
money to friends by Bitcoin and invest in the stock market with as little as $1. Since Cash App
does fractional share trading, let me mention that the order execution algorithm that works behind
the scenes to create the abstraction of the fractional orders is an algorithmic marvel.
So big props to the Cash App engineers for taking a step up to the next layer of abstraction over
the stock market, making trading more accessible for new investors and diversification much easier.
So again, if you get Cash App from the App Store, Google Play and use the code lexpodcast,
you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping
advance robotics and STEM education for young people around the world. And now, here's my conversation
with Joscha Bach. As you've said, you grew up in a forest in East Germany,
just as we were talking about off mic, to parents who were artists. And now I think,
at least to me, you've become one of the most unique thinkers in the AI world. So can we try to
reverse engineer your mind a little bit? What were the key philosophers, scientists, ideas,
maybe even movies, or just realizations that had an impact on you when you were growing up, that kind of
led to the trajectory, or were the key sort of crossroads in the trajectory of your intellectual
development? My father came from a long tradition of architects, distant branch of the Bach family.
And so basically, he was technically a nerd. And nerds need to interface with society in
non-standard ways. Sometimes I define a nerd as somebody who thinks that the purpose of communication
is to submit your ideas to peer review. And normal people understand that the primary
purpose of communication is to negotiate alignment. And these purposes tend to conflict,
which means that nerds have to learn how to interact with society at large.
Who is the reviewer in the nerd's view of communication?
Everybody who you consider to be a peer. So whatever hapless individual is around,
well, you would try to make him or her the gift of information.
Okay. So, by the way, my research may have misinformed me. So was he an
architect or an artist? So he did study architecture. But basically, my grandfather made the wrong
decision. He married an aristocrat and was drawn into the war. And he came back after 15 years.
So basically, my father was not parented by a nerd, but by somebody who tried to tell him what
to do and expected him to do what he was told. And he was unable to. He's unable to do things
if he's not intrinsically motivated. So in some sense, my grandmother broke her son.
And her son responded, when he became an architect, by becoming an artist. So he built
Hundertwasser architecture. He built houses without right angles. He built lots of things that
didn't work in the more brutalist traditions of Eastern Germany. And so he bought an old
water mill, moved out to the countryside, and did only what he wanted to do, which was art.
Eastern Germany was perfect for bohème, because you had complete material safety. Food was heavily
subsidized. Healthcare was free. You didn't have to worry about rent or pensions or anything.
So that's the socialist, the communist side of Germany.
And the other thing is it was almost impossible not to be in political disagreement with your
government, which is very productive for artists. So everything that you do is intrinsically meaningful
because it will always touch on the deeper currents of society, of culture, and be in
conflict with it, and tension with it. And you will always have to define yourself with respect
to this. So what impact did your father, this outside-of-the-box thinker, this against-the-government,
against-the-world artist, have on your life? He was actually not a thinker. He was somebody who
only got self-aware to the degree that he needed to make himself functional. So in some sense,
he was also, in the late 1960s, a hippie. So he became a one-person
cult. He lived out there in his kingdom. He built big sculpture gardens and started many avenues
of art and so on, and convinced a woman to live with him. She was also an architect, and she adored
him and decided to share her life with him. And I basically grew up in a big cave full of books,
almost feral. And I was bored out there. It was very, very beautiful, very quiet, and quite
lonely. So I started to read. And by the time I came to school, I had read everything until fourth
grade, and then some. And there was not a real way for me to relate to the outside world. And I
couldn't quite put my finger on why. And today I know it was because I was a nerd, obviously,
and I was the only nerd around. So there were no other kids like me. And there was nobody
interested in physics or computing or mathematics and so on. And this village school that I went to
was basically a nice school. Kids were nice to me. I was not beaten up, but I also didn't make many
friends or build deep relationships. Those only happened starting from ninth grade, when I went
into a school for mathematics and physics. Do you remember any key books from this moment?
Yes, I basically read everything. So I went to the library and I worked my way through the
children's and young adult sections. And then I read a lot of science fiction, for instance,
Stanisław Lem, basically the great author of cybernetics, who influenced me. Back then,
I didn't see him as a big influence because everything that he wrote seemed to be so natural
to me. And it's only later that I contrasted it with what other people wrote. Another thing
that was very influential on me were the classical philosophers and also the literature of romanticism.
So German poetry and art, Droste-Hülshoff and Heine up to Hesse and so on.
Hesse, I love Hesse. So at which point do the classical philosophers end? In the 20th,
the 21st century, what's the latest classical philosopher? Does this stretch
even as far as Nietzsche, or are we talking about Plato and Aristotle?
I think that Nietzsche is the classical equivalent of a shitposter.
So he's very smart and easy to read, but he's not so much trolling others. He's trolling
himself because he was at odds with the world. Largely, his romantic relationships didn't work
out. He got angry, and he basically became a nihilist. Isn't that a beautiful way to be as
an intellectual, to constantly be trolling yourself, to be in that conflict and in that
tension? I think it's a lack of self-awareness. At some point, you have to understand the
comedy of your own situation. If you take yourself seriously and you are not functional,
it ends in tragedy, as it did for Nietzsche. So you think he took himself too seriously
in that tension? And you find the same thing in Hesse and so on. The Steppenwolf syndrome is
classic adolescence, where you basically feel misunderstood by the world and you don't understand
that all the misunderstandings are the result of your own lack of self-awareness because you
think that you are a prototypical human and the others around you should behave the same way
as you expect them based on your innate instincts and it doesn't work out. And you become a
transcendentalist to deal with that. So it's very, very understandable, and I have great sympathy for
this, to the degree that I can have sympathy for my own intellectual history. But you have to
grow out of it. So as an intellectual, a life well lived, a journey well traveled, is one where
you don't take yourself seriously? No, I think that you are neither serious nor not serious yourself,
because you need to become unimportant as a subject. That is, if you are a philosopher,
belief is not a verb. You don't do this for the audience. You don't do it for yourself. You have
to submit to the things that are possibly true, and you have to follow wherever your inquiry
leads. But it's not about you. It has nothing to do with you. So do you think then people like
Ayn Rand believed in a sort of idea that there's an objective truth? So what's your sense,
philosophically? If you remove yourself, the subjective, from the picture, do you think it's
possible to actually discover ideas that are true, or are we just in a mesh of relative concepts
that are neither true nor false? It's just a giant mess. You cannot define objective truth
without understanding the nature of truth in the first place. So what does the brain mean by
saying that it discovers something as truth? So, for instance, a model can be predictive or not
predictive. Then there can be a sense in which a mathematical statement can be true because
it's defined as true under certain conditions. So it's basically a particular state that
a variable can have in a simple game. And then you can have a correspondence between systems
and talk about truth, which is again a type of model correspondence. And there also seems to be
a particular kind of ground truth. So for instance, you're confronted with the enormity of something
existing at all, right? That's stunning when you realize something exists rather than nothing.
And this seems to be true, right? There's an absolute truth in the fact that something seems
to be happening. Yeah, that to me is a showstopper. I could just think about that idea and be amazed
by that idea for the rest of my life and not go any further, because I don't even know the answer
to that. Why does anything exist at all? Well, the easiest answer is: existence is the default,
right? So this is the lowest number of bits that you would need to encode this.
Whose answer? The simplest answer to this is that existence is the default.
What about non-existence? I mean, that seems... Non-existence might not be a meaningful notion
in this sense. So in some sense, if everything that can exist, exists: for something to exist,
it probably needs to be implementable. The only things that can be implemented are finite
automata. So maybe the whole of existence is the superposition of all finite automata.
And we are in some region of the fractal that has the properties that it can contain us.
What does it mean to be a superposition of finite automata? So any superposition of like
all possible rules? Imagine that every automaton is basically an operator that acts on some
substrate. And as a result, you get emergent patterns. What's the substrate? I have no idea.
But some substrate? It's something that can store information.
Something that can store information, an automaton, something that can hold state.
Still, it doesn't make sense to me why that exists at all. I could just sit there with a
beer or a vodka and just enjoy the fact, pondering the why. It may not have a why.
This might be the wrong direction to ask into this. There can be no relation in the
why direction without asking for a purpose or for a cause. But it doesn't mean that everything
has to have a purpose or a cause, right? So we mentioned some philosophers earlier;
just taking a brief step back into that. Okay, so we asked ourselves, when did classical philosophy
end? I think for Germany, it largely ended with the first revolution. That's basically when
we ended the monarchy and started a democracy. And at this point,
we basically came up with a new form of government that didn't have a good sense of
this new organism that society wanted to be. And in a way, it decapitated the universities.
So the universities went on to modernism like a headless chicken. At the same time,
democracy failed in Germany and we got fascism as a result. And it burned down things in a similar
way as Stalinism burned down intellectual traditions in Russia. And Germany, both
Germanies, have not recovered from this. Eastern Germany had this vulgar dialectical materialism,
and Western Germany didn't get much more edgy than Habermas. So in some sense,
both countries lost their intellectual traditions, and killing off and driving out the Jews didn't
help. Yeah, so that was the end. That was the end of really rigorous, what you would say is,
classical philosophy. There's also this thing that, in some sense, the low-hanging fruits in
philosophy were mostly picked. And the last big thing that we discovered was the constructivist
turn in mathematics: to understand that the parts of mathematics that work are computation.
That was a very significant discovery in the first half of the 20th century. And it hasn't
fully permeated philosophy and even physics yet. Physicists checked out the code libraries
for mathematics before constructivism became universal. What's constructivism? Are
you referring to Gödel's incompleteness theorems and those kinds of ideas?
So basically, Gödel himself, I think, didn't get it yet. Hilbert could get it. Hilbert saw that,
for instance, Cantor's set-theoretic experiments in mathematics led into contradictions. And
he noticed that with the current semantics, we cannot build a computer in mathematics that
runs mathematics without crashing. And Gödel could prove this. And so what Gödel could show
is that using classical mathematical semantics, you run into contradictions. And because Gödel strongly
believed in these semantics, more than in what he could observe and so on, he was shocked.
It basically shook his world to the core, because in some sense, he felt that the world
has to be implemented in classical mathematics. And for Turing, it wasn't quite so bad. I think
that Turing could see that the solution is to understand that mathematics was computation
all along, which means, for instance, pi in classical mathematics is a value. It's also a
function, but it's the same thing. And in computation, a function is only a value when
you can compute it. And if you cannot compute the last digit of pi, you only have a function.
You can plug this function into your local sun, let it run until the sun burns out. This is it.
This is the last digit of pi you will know. But it also means there can be no process
in the physical universe or in any physically realized computer that depends on having known
the last digit of pi. Which means there are parts of physics that are defined in such a way that
cannot strictly be true, because assuming that this could be true leads into contradictions.
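The point about pi being a function rather than a finished value can be made concrete with a spigot algorithm. The sketch below uses Gibbons' unbounded spigot, a standard construction not discussed in the conversation itself: it streams decimal digits of pi for as long as you let it run, and there is no step at which it hands you a last digit.

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot: yield decimal digits of pi indefinitely."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is now certain: emit it and rescale the state.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Not enough information yet: fold in one more term of the series.
            q, r, t, k, n, l = (
                q * k,
                (2 * q + r) * l,
                t * l,
                k + 1,
                (q * (7 * k + 2) + r * l) // (t * l),
                l + 2,
            )

digits = list(islice(pi_digits(), 10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Each digit exists only once the process has been run far enough to produce it, which is exactly the constructivist reading: pi is the computation, not a completed object.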
So I think putting computation at the center of the worldview is actually the right way to think
about it. Yes. And Wittgenstein could see it. And Wittgenstein basically preempted the logicist
program of AI that Minsky started later, like 30 years later. Turing was actually a pupil of
Wittgenstein. Really? So I didn't know there's any connection between Turing and Wittgenstein.
And Wittgenstein even canceled some classes when Turing was not present, because he thought it was
not worth spending the time on. Oh, interesting. And if you read the Tractatus,
it's a very beautiful book, like one single thought on 75 pages. It's very non-typical for philosophy,
because it doesn't have arguments in it, and it doesn't have references in it. It's just one thought
that is not intending to convince anybody. He says it's mostly for people that had the same
insight as him; he just spells it out. And this insight is: there is a way in which mathematics and
philosophy ought to meet. Mathematics tries to understand the domain of all languages by starting
with those that are so formalizable that you can prove all the properties of the statements that
you make. But the price that you pay is that your language is very, very simple. So it's very hard to
say something meaningful in mathematics. And it looks complicated to people, but it's far less
complicated than what our brain is casually doing all the time, and it makes sense of reality.
And philosophy is coming from the top. So it's mostly starting from natural languages with vaguely
defined concepts. And the hope is that mathematics and philosophy can meet at some point. And
Wittgenstein was trying to make them meet. And he already understood that, for instance,
you could express everything with the NAND calculus, that you could reduce the entire logic
to NAND gates, as we do in our modern computers. So in some sense, he already understood Turing
universality before Turing spelled it out. I think when he wrote the tractatus, he didn't
understand yet that the idea was so important and significant. And I suspect, when Turing
wrote it out, nobody cared that much. Turing was not that famous. When he lived, it was mostly his
work in decrypting the German codes that made him famous, or gave him some notoriety. But
the status that he has in computer science right now is something that I think he
only acquired later. That's kind of interesting. And do you think of computation and computer
science? And you kind of represent that to me as maybe the modern-day, you, in a sense,
the new philosopher, sort of the computer scientist who dares to ask the bigger questions
that philosophy originally started with. The new philosopher? Certainly not me, I think. I'm mostly
still this child that grows up in a very beautiful valley and looks at the world from the outside
and tries to understand what's going on. And my teachers tell me things and they largely don't
make sense. So I have to make my own models. I have to discover the foundations of what the
others are saying. I have to try to fix them, to be charitable. I try to understand what they
must have thought originally, or what their teachers or their teachers' teachers must have thought,
until everything got lost in translation, and how to make sense of the reality that we are in.
And whenever I have an original idea, I'm usually late to the party by say 400 years.
And the only thing that's good is that the parties get smaller and smaller,
the older I get and the more I explore. The parties get smaller and more exclusive and
more exclusive. So it seems like one of the key qualities of your upbringing was that you were
not tethered, whether it's because of your parents or, in general, maybe something
within your mind, some genetic material: you were not tethered to the ideas of the general
populace, which is actually a unique property, where kind of the education system, and whatever,
not the education system, just existing in this world, forces certain sets of ideas onto you.
Can you disentangle that? Why are you not so tethered? Even in your work today,
you seem to not care about, perhaps, a best paper at NeurIPS, being tethered to particular things
that, today, in this year, people seem to value as a thing you put on your CV and resume.
You're a little bit more outside of that world, outside of the world of ideas that people are
especially focusing on the benchmarks of today, the things. Can you disentangle that? Because
I think that's inspiring. And if there were more people like that, we might be able to solve some
of the bigger problems that AI dreams to solve. There's a big danger in this, because in a way
you are expected to marry into an intellectual tradition, embedded in a
particular school. If everybody comes up with their own paradigms, the whole thing is not
cumulative as an enterprise. So in some sense, you need a healthy balance, you need
paradigmatic thinkers, and you need people that work within given paradigms. Basically,
scientists today define themselves largely by methods. And it's almost a disease that we think
of a scientist as somebody who was convinced by their guidance counselor that they should
join a particular discipline, and then they find a good mentor to learn the right methods,
and then they are lucky enough and privileged enough to join the right team, and then their
name will show up on influential papers. But we also see that there are diminishing returns with
this approach. And when our field computer science and AI started, most of the people that joined
this field had interesting opinions. And today's thinkers in AI either don't have interesting
opinions at all, or these opinions are inconsequential for what they're actually doing,
because what they're doing is they apply the state-of-the-art methods with a small epsilon.
And this is often a good idea if you think that this is the best way to make progress. And for
me, it's first of all, very boring. If somebody else can do it, why should I do it? If the current
methods of machine learning lead to strong AI, why should I be doing it? Well, I could just wait
until they're done, wait on the beach, or read interesting books, or write some,
and have fun. But if you don't think that we are currently doing the right thing, if we are missing
some perspectives, then it's required to think outside of the box. It's also required to understand
the boxes: it's necessary to understand what worked, and what didn't work, and for what reasons.
So you have to be willing to ask new questions and design new methods whenever you want to answer
them. And you have to be willing to dismiss the existing methods if you think that they're not
going to yield the right answers. It's very bad career advice to do that. So maybe to briefly
stay for one more time in the early days, when would you say for you was the dream
before we dive into the discussions that we just almost started? When was the dream to understand
or maybe to create human level intelligence born for you? I think that you can see AI largely today
as advanced information processing. If you would change the acronym of AI into that,
most people in the field would be happy. It would not change anything what they're doing.
We're automating statistics, and many of the statistical models are more advanced than what
statisticians used in the past. And it's pretty good work. It's very productive. And the other
aspect of AI is a philosophical project. And this philosophical project is very risky.
And very few people work on it, and it's not clear if it succeeds.
So first of all, you keep throwing a lot of really interesting ideas, and I have to pick
which ones we go with. But first of all, you use the term information processing,
just information processing, as if it's the mere muck of existence,
as if it's the epitome, that the entirety of the universe might be information processing,
that consciousness and intelligence might be information processing. So maybe you can
comment on whether advanced information processing is a limiting kind of realm of ideas. And then
the other one is, what do you mean by the philosophical project? So I suspect that general
intelligence is the result of trying to solve general problems. So intelligence, I think,
is the ability to model. It's not necessarily goal directed rationality or something. Many
intelligent people are bad at this. But it's the ability to be presented with a number of patterns
and see a structure in those patterns and be able to predict the next set of patterns
to make sense of things. And some problems are very general. Usually intelligence serves control,
so you make these models for a particular purpose of interacting as an agent with the world and
getting certain results. But the intelligence itself is, in this sense, instrumental to something.
But by itself, it's just the ability to make models. And some of the problems are so general
that the system that makes them needs to understand what itself is and how it relates to the environment.
So as a child, for instance, you notice you do certain things despite you perceiving yourself as
wanting different things. So you become aware of your own psychology. You become aware of the fact
that you have complex structure in yourself and you need to model yourself to reverse engineer
yourself to be able to predict how you will react to certain situations and how you deal with yourself
in relationship to your environment. And this process, this project of reverse engineering
yourself and your relationship to reality, and the nature of a universe that can contain you,
if you go all the way, this is basically the project of AI. Or you could say the project of AI
is a very important component in it. The true Turing test in a way is you ask a system, what
is intelligence? If that system is able to explain what it is, how it works, then you should assign
it the property of being intelligent in this general sense. So the test that Turing was
administering, in a way (I suspect he could see it, but he didn't express it yet in the
original 1950 paper), is that he was trying to find out whether we are generally intelligent.
Because in order to pass this test, the rub is, of course, you need to be able to understand what
that system is saying. And we don't yet know if we can build an AI. We don't yet know if we are
generally intelligent. Basically, you win the Turing test by building an AI.
Yes. So in a sense, hidden within the Turing test is a recursive test.
Yes, it's a test on us. The Turing test is basically a test of the conjecture whether
people are intelligent enough to understand themselves.
Okay, but you also mentioned a little bit of a self-awareness. In the project of AI,
do you think this kind of emergent self-awareness is one of the fundamental aspects of intelligence?
So, as opposed to goal-oriented, as you said, kind of puzzle-solving,
as coming to grips with the idea that you're an agent in the world.
I find that many highly intelligent people are not very self-aware. So self-awareness and
intelligence are not the same thing. And you can also be self-aware if you have good priors,
especially, without being especially intelligent. So you don't need to be very good at solving
puzzles if the system that you are already implements the solution.
But I do think intelligence, you kind of mentioned children, right? Is the fundamental project
of AI to create a learning system that's able to exist in the world? So you kind of drew a
distinction between self-awareness and intelligence. And yet you said that self-awareness seems
to be important for children. So I call this ability to make sense of the world and your own
place in it, and to be able to understand what you're doing in this world, sentience.
And I would distinguish sentience from intelligence because sentience is
possessing certain classes of models. And intelligence is a way to get to these models
if you don't already have them. I see. So can you maybe pause a bit and try to answer the question
that we just said we may not be able to answer and it might be a recursive meta question of what
is intelligence? I think that intelligence is the ability to make models. So models.
I think it's useful to give examples. Very popular now, neural networks form representations of
large-scale data sets. They form models of those data sets. When you say models, and look at today's
neural networks, what is the difference in how you're thinking about what is intelligent,
in saying that intelligence is the process of making models?
There are two aspects to this question. One is, is the representation adequate
for the domain that we want to represent? And the other one is, is the type of model that you
arrive at adequate? So basically, are you modeling the correct domain? And I think in both of these
cases, modern AI is lacking still. And I think that I'm not saying anything new here. I'm not
criticizing the field. Most of the people that design our paradigms are aware of that. And so
one aspect that we are missing is unified learning. When we learn, we at some point discover that
everything that we sense is part of the same object, which means we learn it all into one model.
And we call this model the universe. So our experience of the world that we are embedded in
is not a secret direct channel to physical reality. Physical reality is a weird quantum graph that
we can never experience or get access to. But it has these properties, that it can create certain
patterns at our systemic interface to the world. And we make sense of these patterns,
and the relationship between the patterns that we discover is what we call the physical universe.
So at some point in our development as a nervous system, we discover that everything that we relate
to in the world can be mapped to a region in the same three-dimensional space by and large. We
now know in physics that this is not quite true. The world is not actually three-dimensional,
but the world that we are entangled with at the level which we are entangled with
is largely a flat three-dimensional space. And so this is the model that our brain is intuitively
making. And this is, I think, what gave rise to this intuition of res extensa, of this material
world, this material domain. It's one of the mental domains, but it's just the class of all
models that relate to this environment, this three-dimensional physics engine in which we are
embedded. Physics engine in which we're embedded. I love that. Right? Just slowly pause. So
the quantum graph, I think you called, which is the real world which we can never get access to,
there's a bunch of questions I want to sort of disentangle there. But maybe one useful one, from
one of your recent talks I looked at: can you just describe the basics? Can you talk about
what is dualism, what is idealism, what is materialism, what is functionalism, and what connects
with you most? Because you just mentioned there's a reality we don't have access to. Okay.
What does that even mean? And why don't we get access to it? Are we part of that reality? Why
can't we access it? So the particular trajectory that mostly exists in the West is the result of
our indoctrination by a cult for 2,000 years. A cult, which one? The Catholic cult mostly. And
for better or worse, it has created or defined many of the modes of interaction that we have,
that has created this society, but it has also in some sense scarred our rationality. And the
intuition that exists, if you would translate the mythology of the Catholic Church into the modern
world is that the world in which you and me interact is something like a multiplayer role-playing
adventure. And the money and the objects that we have in this world, this is all not real.
Or Eastern philosophers would say it's maya. It's just stuff that appears to be meaningful,
and this embedding in this meaning, and people believe in it, is samsara. It's basically the
identification with the needs of the mundane secular everyday existence. And the Catholics
also introduced the notion of higher meaning, the sacred. And this existed before, but eventually
the natural shape of God is the platonic form of the civilization that you are part of. It's basically
the super-organism that is formed by the individuals as an intentional agent. And basically the
Catholics used a relatively crude mythology to implement software on the minds of people and
get the software synchronized to make them walk in lockstep, to basically get this God online
and to make it efficient and effective. And I think God technically is just a self that spans
multiple brains, as opposed to your self and my self, which mostly exist just on one brain. And so
in some sense, you can construct a self functionally as a function that is implemented by brains
that exists across brains. And this is a God with a small g. That's one of the things Yuval Harari
kind of talks about; this is one of the nice features of our brains, it seems, that we can
all download the same piece of software like God in this case and kind of share it. Yeah, so basically
you give everybody a spec and the mathematical constraints that are intrinsic to information
processing, make sure that given the same spec, you come up with a compatible structure.
Okay, so that's, there's the space of ideas that we all share. And we think that's kind of the mind.
But that's separate from the idea from Christianity, from religion, that there's
a separate thing beyond the mind. There is a real world. And this real world is the world in
which God exists. God is the coder of the multiplayer adventure, so to speak. And we are all
players in this game. And that's dualism, you would say. But the dualist aspect is because the
mental realm exists in a different implementation than a physical realm. And the mental realm is
real. And a lot of people have this intuition that there is this real room in which you and me
talk and speak right now, then comes a layer of physics and abstract rules and so on. And then
comes another real room where our souls are, and our true form is the thing that gives us
phenomenal experience. And this is, of course, a very confused notion that you would get.
And it's basically, it's the result of connecting materialism and idealism in the wrong way.
So okay, I apologize, but I think it's really helpful if we just try to define,
try to define terms like, what is dualism? What is idealism? What is materialism for
people who don't know? So the idea of dualism in our cultural tradition is that there are two
substances, a mental substance and a physical substance. And they interact by different rules.
And the physical world is basically causally closed and is built on a low-level causal structure.
So they're basically a bottom level that is causally closed that's entirely mechanical
and mechanical in the widest sense. So it's computational. There's basically a physical
world in which information flows around. And physics describes the laws of how information
flows around in this world. Would you compare it to like a computer where you have a hardware and
software? The computer is a generalization of information flowing around. Basically, the Turing
discovery was that there is a universal principle: you can define this universal machine
that is able to perform all the computations. So all these machines have the same power. This
just means that you can always define a translation between them as long as they have
unlimited memory to be able to perform each other's computations. So would you then
say that materialism is this whole world is just the hardware and idealism is this whole world is
just the software? Not quite. I think that most idealists don't have a notion of software yet
because software also comes down to information processing. So what you notice is the only thing
that is real to you and me is this experiential world in which things matter, in which things have
taste, in which things have color, phenomenal content and so on. And you realize that. You are
bringing up consciousness. Okay. And this is distinct from the physical world in which things
have values only in an abstract sense. And you only look at cold patterns moving around. So
how does anything feel like something? And this connection between the two things is
very puzzling to a lot of people, and of course, too many philosophers. So idealism starts out
with the notion that mind is primary, materialism thinks that matter is primary. And so for the
idealist, the material patterns that we see playing out are part of the dream that the mind is dreaming,
and we exist in the mind on a higher plane of existence, if you want. And for the materialist,
there is only this material thing and that generates some models and we are the result
of these models. And in some sense, I don't think that, if you understand it
properly, materialism and idealism are a dichotomy; they are two different aspects of the same thing.
So the weird thing is we don't exist in the physical world. We do exist inside of a story
that the brain tells itself. Okay. Let me, my information processing system, take that in. We don't
exist in the physical world. We exist in the narrative. Basically, your brain cannot feel
anything. A neuron cannot feel anything. They're physical things; physical systems are unable to
experience anything. But it would be very useful for the brain or for the organism to know what
it would be like to be a person and to feel something. So the brain creates a simulacrum
of such a person that it uses to model the interactions of the person. It's the best model
of what that brain, this organism thinks it is in relationship to its environment. So it creates
that model. It's a story, a multimedia novel that the brain is continuously writing and updating.
But you also kind of said that, you said that we kind of exist in that story, in that story.
What is real in any of this? So like, there's a, again, these terms are, you kind of said there's
a quantum graph. I mean, what is, what is this whole thing running on then? Is the story,
and is it completely fundamentally impossible to get access to it? Because isn't the story supposed to,
isn't the brain in something, in existing in some kind of context?
So what we can identify as computer scientists, we can engineer systems and test our theories this
way that might have the necessary and sufficient properties to produce the phenomena that we
are observing, which is there is a self in a virtual world that is generated in somebody's
neocortex that is contained in the skull of this primate here. And when I point at this,
this indexicality is of course wrong. But I do create something that is likely to give rise to
patterns on your retina that allow you to interpret what I'm saying, right? But we both know that
the world that you and me are seeing is not the real physical world. What we are seeing is a virtual
reality generated in your brain to explain the patterns on your retina.
How close is it to the real world? That's kind of the question. When you have
people like Donald Hoffman who say that you're really far away, that the thing we're seeing,
you and I, now, that interface we have, is very far away from anything like the real world,
that we don't even have anything close to a sense of what the real world is.
Or is it a very surface piece of the architecture?
Imagine you look at a Mandelbrot fractal, right? This famous thing that Benoit Mandelbrot discovered,
if you see an overall shape in there, right? But you know, if you truly understand it,
you know it's two lines of code. It's basically in a series that is being tested for complex
numbers in the complex number plane for every point. And for those where the series is diverging,
you paint this black. And where it's converging, you don't. And you get the intermediate colors
by taking how far it diverges. This gives you this shape of this fractal. But imagine you live
inside of this fractal and you don't have access to where you are in the fractal, or you have not
discovered the generator function even. So what you see is, all I can see right now is this spiral,
and this spiral moves a little bit to the right. Is this an accurate model of reality? Yes,
it is. It is an adequate description. You know that there is actually no spiral in the Mandelbrot
fractal. It only appears like this to an observer that is interpreting things as a two-dimensional
space and then defines certain regularities in there at a certain scale that it currently observes,
because if you zoom in, the spiral might disappear. It turns out to be something different at the
different resolution, right? So at this level, you have the spiral. And then you discover the
spiral moves to the right and at some point it disappears. So you have a singularity. At this
point, your model is no longer valid. You cannot predict what happens beyond the singularity.
But you can observe again and you will see it hit another spiral and at this point,
it disappeared. So we now have a second order law. And if you make 30 layers of these laws,
then you have a description of the world that is similar to the one that we come up with
when we describe the reality around us. It's reasonably predictive. It does not cut to the
core of it. So you explain how it's being generated, how it actually works. But it's
relatively good to explain the universe that we are entangled with.
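The "two lines of code" Bach describes can be sketched concretely: for every point c in the complex plane, test whether the series z → z² + c diverges, paint diverging points by escape time and converging points black. A minimal sketch (resolution, iteration budget, and the ASCII rendering are arbitrary choices for illustration):

```python
# Escape-time test for the Mandelbrot set: iterate z -> z^2 + c and see
# whether the series diverges for this point c of the complex plane.
def mandelbrot_escape(c, max_iter=50):
    z = 0.0 + 0.0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:      # provably diverges once |z| exceeds 2
            return n          # escape time -> the "intermediate colors"
    return max_iter           # treated as converging -> painted black

# Render a coarse ASCII view of the set ('#' = converging points).
for row in range(11):
    y = 1.2 - row * 0.24
    line = ""
    for col in range(40):
        x = -2.0 + col * 0.075
        line += "#" if mandelbrot_escape(complex(x, y)) == 50 else " "
    print(line)
```

The spirals in the conversation are exactly the structures an embedded observer would see in the escape-time coloring, without ever seeing the generator function above.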
But you don't think the tools of computer science or the tools of physics could get,
could step outside, see the whole drawing, and get at the basic mechanism of how the pattern,
the spirals, is generated? Imagine you would find yourself embedded into a Mandelbrot
fractal and you try to figure out how it works. And you are somehow a Turing machine with
enough memory to think. And as a result, you come to this idea: it must be some kind of
automaton. And maybe you just enumerate all the possible automata until you get to the one that
produces your reality. So you can identify necessary and sufficient condition. For instance,
we discover that mathematics itself is the domain of all languages. And then we see that most of
the domains of mathematics that we have discovered are in some sense describing the same fractals.
This is what category theory is obsessed about, that you can map these different domains to each
other. So they're not that many fractals. And some of these have interesting structure and
symmetry breaks. And so you can discover what region of this global fractal you might be embedded
in from first principles. But the only way you can get there is from first principles. So basically,
your understanding of the universe has to start with automata and then number theory and then
spaces and so on. Yeah, I think like Stephen Wolfram still dreams that he's that he'll be able to
arrive at the fundamental rules of the cellular automata or the generalization of which is behind
our universe. You've said on this topic, you said in a recent conversation that quote,
some people think that a simulation can't be conscious and only a physical system can.
But they got it completely backward. A physical system cannot be conscious. Only a simulation
can be conscious. Consciousness is a simulated property of the simulated self. Just like you said,
the mind is kind of, we call it, a story, a narrative. There's a simulation, so our mind is essentially
a simulation. Usually, I try to use the terminology so that the mind is basically the
principles that produce the simulation. It's the software that is implemented by your brain.
And the mind is creating both the universe that we are in and the self, the idea of a person that
is on the other side of attention and is embedded in this world. Why is that important, that idea
of a self? Why is that an important feature in the simulation? It's basically a result of the
purpose that the mind has. It's a tool for modeling. We are not actually monkeys. We are
side effects of the regulation needs of monkeys. And what the monkey has to regulate is the
relationship of an organism to an outside world that is in large part also consisting of other
organisms. And as a result, it basically has regulation targets that it tries to get to.
These regulation targets start with priors. They're basically like unconditional reflexes
that we are more or less born with. And then we can reverse engineer them to make them more
consistent. And then we get more detailed models about how the world works and how to interact
with it. And so these priors that you commit to are largely target values that our needs
should approach, set points. And this deviation to the set point creates some urge, some tension.
And we find ourselves living inside of feedback loops, right? Consciousness emerges over
dimensions of disagreement with the universe: things you care about, things that are not the way
they should be, where you need to regulate. And so in some sense, the self itself is the result
of all the identifications that you're having. And an identification is a regulation target that
you're committing to. It's a dimension that you care about. What you think is important.
And this is also what locks you in. If you let go of these commitments of these identifications,
you get free. There's nothing that you have to do anymore. And if you let go of all of them,
you're completely free and you can enter Nirvana because you're done.
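The regulation picture Bach sketches, a set point, a measured value, and a deviation that creates an urge, is a plain feedback loop. A minimal sketch of that loop (the gain, target, and temperature example are invented for illustration, not anything from the conversation):

```python
# A set point, a current value, and a deviation ("urge") that drives
# behavior back toward the target -- a proportional feedback loop.
def regulate(value, set_point, gain=0.5, steps=20):
    trajectory = [value]
    for _ in range(steps):
        urge = set_point - value      # deviation from the set point
        value += gain * urge          # act to reduce the tension
        trajectory.append(value)
    return trajectory

# e.g. a temperature-like quantity drifting back to its target value
path = regulate(value=35.0, set_point=37.0)
print(round(path[-1], 3))
```

Letting go of an identification, in this picture, amounts to deleting a set point: with no target, there is no deviation and no urge left to regulate.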
And actually, this is a good time to pause and say thank you to a sort of a friend of mine,
Gustav Söderström, who introduced me to your work. I want to give him a shout out.
He's a brilliant guy. And I think the AI community is actually quite amazing. And Gustav is a good
representative of that. You are as well. So I'm glad. First of all, I'm glad the internet exists,
YouTube exists, where I can watch your talks and then get to your book and study your writing and
think about, you know, that's amazing. Okay, but you've kind of described instead of this
emergent phenomenon of consciousness from the simulation. So what about the hard problem of
consciousness? Can you just linger on it? Like, why does it still feel like I understand you're
kind of the self is an important part of the simulation. But why does the simulation feel
like something? So if you look at a book by, say, George R. R. Martin, where the characters
have plausible psychology and they stand on a hill because they want to conquer the city below
the hill, and they look down on it, and they look at the color of the sky, and they are apprehensive
and feel empowered and all these things. Why do they have these emotions? It's because it's
written into the story, right? And it's written into the story because it's an adequate model of the
person that predicts what they're going to do next. And the same thing is true for us. So
it's basically a story that our brain is writing. It's not written in words. It's written in
perceptual content, basically multimedia content. And it's a model of what the person would feel
if it existed. So it's a virtual person. And you and me happen to be this virtual person. So this
virtual person gets access to the language center and talks about the sky being blue. And this is us.
But hold on a second. Do I exist in your simulation? You do exist, I mean, in an almost
similar way as me. There are internal states that you have that are less accessible for me,
and so on. And my model might not be completely adequate. There are also
things that I might perceive about you that you don't perceive. But in some sense, both you and
me are some puppets, two puppets that enact this play in my mind. And I identify with one of them
because I can control one of the puppets directly. And with the other one, I can create things in
between. So for instance, we can enter an interaction that even leads to a coupling, to a feedback
loop. So we can sync things together in a certain way or feel things together. But this coupling
is itself not a physical phenomenon. It's entirely a software phenomenon. It's a result of two
different implementations interacting with each other. So that's interesting. So are you suggesting
like the way you think about it is that the entirety of existence is the simulation, and kind of
each mind is a little sub-simulation? Like, why doesn't your mind have access
to my mind's full state? For the same reason that my mind doesn't have access to its own full
state. So what I mean is, there is no trick involved. So basically, when I say I know something about
myself, it's because I made a model. Yes, part of your brain is tasked with modeling what other
parts of your brain are doing. Yes. But there seems to be an incredible consistency about this world
in the physical sense, that there are repeatable experiments and so on. How does that fit into
our silly descendant-of-apes simulation of the world? So why is it so repeatable? Not everything,
but there's a lot of fundamental physics experiments that are repeatable
for a long time, all over the place, and so on: laws of physics. How does that fit in? It seems that
the parts of the world that are not deterministic are not long lived. So if you build a system,
any kind of automaton, so if you build simulations of something, you'll notice that
the phenomena that endure are those that give rise to stable dynamics. So basically, if you see
anything that is complex in the world, it's the result of usually of some control of some feedback
that keeps it stable around certain attractors. And the things that are not stable that don't give
rise to certain harmonic patterns and so on, they tend to get weeded out over time. So if we are in
a region of the universe that sustains complexity, which is required to implement minds like ours,
this is going to be a region of the universe that is very tightly controlled and controllable.
So it's going to have lots of interesting symmetries and also symmetry breaks that allow the creation
of structure. But they exist where? So there's such an interesting idea that our mind is a simulation
that's constructing the narrative. My question is just to try to understand how that fits with the
entirety of the universe. You're saying that there's a region of this universe that allows
enough complexity to create creatures like us. But what's the connection between the brain,
the mind, and the broader universe? Which comes first? Which is more fundamental? Is the mind,
the starting point, the universe is emergent? Is the universe, the starting point, the minds
are emergent? I think quite clearly the latter. That's at least a much easier explanation because
it allows us to make causal models. And I don't see any way to construct an inverse causality.
So what happens when you die to your mind's simulation?
My implementation ceases. So basically the thing that implements myself will no longer be present,
which means that, unless I am implemented in the minds of other people, the thing that I identify with ceases too.
The weird thing is I don't actually have an identity beyond the identity that I construct.
Take the Dalai Lama: he identifies as a form of government. So basically the Dalai Lama gets
reborn not because he's confused, but because he is not identifying as a human being. He runs on a
human being. He's basically a governmental software that is instantiated in every new generation
anew. So his advisors pick someone who does this in the next generation. So if you identify
with this, you are no longer a human and you don't die in the sense that what dies is only the body
of the human that you run on. To kill the Dalai Lama, you would have to kill his tradition.
And if we look at ourselves, we realize that we are to a small part like this, most of us. So
for instance, if you have children, you realize something lives on in them. Or if you spark an
idea in the world, something lives on. Or if you identify with the society around you, because you
are part of it, you're not just this human being. Yeah. So in a sense, you are kind of like a Dalai
Lama in the sense that you, Joscha Bach, are just a collection of ideas. So like you have this
operating system on which a bunch of ideas live and interact. And then once you die, they kind of
part; some of them jump off the ship. Put it the other way: identity is a software state,
it's a construction. It's not physically real. Identity is not a physical concept.
It's basically a representation of different objects on the same world line.
But identity lives and dies. Are you attached? What's the fundamental thing? Is it the ideas
that come together to form identity? Or is each individual identity actually a fundamental thing?
It's a representation that you can get agency over if you care. So basically,
you can choose what you identify with if you want to. No, but it just seems
if the mind is not real, that the birth and death is not a crucial part of it. Well, maybe I'm
silly. Maybe I'm attached to this whole biological organism, but it seems that the physical,
being a physical object in this world is an important aspect of birth and death. It feels
like it has to be physical to die. It feels like simulations don't have to die.
The physics that we experience is not the real physics. There is, for instance,
no color and sound in the real world. Color and sound are types of representations that you get
if you want to model reality with oscillators. So colors and sound in some sense have octaves,
and it's because they are represented probably with oscillators. So that's why colors form a
circle of hues. And colors have harmonics, sounds have harmonics, as a result of synchronizing
oscillators in the brain. So the world that we subjectively interact with is fundamentally
the result of the representation mechanisms in our brain. They are mathematically, to some degree
universal. There are certain regularities that you can discover in the patterns and not others.
But the patterns that we get, this is not the real world. The world that we interact with is
always made of too many parts to count. So when you look at this table and so on,
it's consisting of so many molecules and atoms that you cannot count them. So you only look at
the aggregate dynamics at limit dynamics. If you had almost infinitely many particles,
what would be the dynamics of the table? And this is roughly what you get. So geometry that we
are interacting with is the result of discovering those operators that work in the limit that you
get by building an infinite series that converges. For those parts where it converges, it's geometry.
For those parts where it doesn't converge, it's chaos.
All of that is filtered through the consciousness that's emergent in our narrative.
So the consciousness gives it color, gives it feeling, gives it flavor.
So I think the feeling, flavor, and so on is given by the relationship that a feature has to
all the other features. It's basically a giant relational graph that is our subjective universe.
The color is given by those aspects of the representation, this experiential color,
that you care about, where you have identifications, where something means something,
where you are on the inside of a feedback loop. And the dimensions of caring are basically
dimensions of this motivational system that we emerge over.
The meaning of the relations, the graph, can you elaborate on that a little bit?
Maybe you can even step back and ask the question of what is consciousness to be more
systematic? How do you think about consciousness? I think that consciousness is largely a model
of the contents of your attention. It's a mechanism that has evolved for a certain type of learning.
At the moment, our machine learning systems largely work by building chains of weighted
sums of real numbers with some nonlinearity. And you learn by piping an error signal through
these different chain layers and adjusting the weights in these weighted sums. And you can
approximate most polynomials with this if you have enough training data. But the price is,
you need to change a lot of these weights. Basically, the error is piped backwards into
the system until it accumulates at certain junctures in the network. And everything else
evens out statistically. And only at these junctures, this is where you had the actual
error in the network. You make the change there. This is a very slow process. And our brains don't
have enough time for that because we don't get old enough to play Go the way that our machines
learn to play Go. So instead, what we do is attention-based learning. We pinpoint the probable
region in the network where we can make an improvement. And then we store this binding state
together with the expected outcome in a protocol. And there's the ability to make indexed memories
for the purpose of learning to revisit these commitments later. This requires a memory of
the contents of our attention. Another aspect is when I construct my reality and make mistakes.
So I see things that turn out to be reflections or shadows and so on,
which means I have to be able to point out which features of my perception gave rise to the
present construction of reality. So the system needs to pay attention to
the features that are currently in its focus. And it also needs to pay attention to whether it
pays attention itself, in part because the attentional system gets trained with the same
mechanism. So it's reflexive, but also in part because your attention lapses if you don't pay
attention to the attention itself. So, is the thing that I'm currently seeing just a dream
that my brain has spun off, some kind of daydream? Or am I still paying attention to my
percept? So you have to periodically go back and see whether you're still paying attention. And if you
have this loop and you make it tight enough between the system becoming aware of the contents of
its attention and the fact that it's paying attention itself, making attention the object
of its attention, I think this is the loop over which we wake up. So there's this attentional
mechanism that's somehow self-referential, that's fundamental to what consciousness is.
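The learning scheme Bach contrasts attention with, chains of weighted sums with a nonlinearity, trained by piping an error signal backwards through the layers, is ordinary backpropagation. A toy sketch of that slow process (network shape, learning rate, iteration count, and the XOR task are arbitrary choices for illustration):

```python
import math
import random

# Chains of weighted sums with a nonlinearity, trained by piping the
# error backwards through the layers (backpropagation).
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(epochs=5000, lr=0.5, seed=0):
    rng = random.Random(seed)
    # 2 inputs -> 2 hidden units -> 1 output, each weight row ends with a bias
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w2 = [rng.uniform(-1, 1) for _ in range(3)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

    for _ in range(epochs):
        for x, t in data:
            # forward pass: weighted sums plus nonlinearity
            h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + w1[j][2]) for j in range(2)]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
            # backward pass: the error signal accumulates at each weight
            dy = (y - t) * y * (1 - y)
            for j in range(2):
                dh = dy * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh * x[0]
                w1[j][1] -= lr * dh * x[1]
                w1[j][2] -= lr * dh
            w2[2] -= lr * dy

    # mean squared error after training
    err = 0.0
    for x, t in data:
        h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + w1[j][2]) for j in range(2)]
        y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
        err += (y - t) ** 2
    return err / len(data)

print("mean squared error after training:", train())
```

The point of the contrast is the number of updates: this loop touches every weight tens of thousands of times, which is the kind of training budget Bach argues brains don't have.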
So just to ask you a question, I don't know how much you're familiar with the recent breakthroughs
in natural language processing. They use attentional mechanism, use something called
transformers to learn patterns in sentences by allowing the network to focus its attention
on particular parts of the sentence at each individual step. So, like, parameterizing and making
learnable the dynamics of a sentence by having a little window into the sentence. Do you
think that's a little step that eventually will take us to the attentional
mechanisms from which consciousness can emerge? Not quite. I think it models only one aspect of
attention. In the early days of automated language translation, there was an example that I found
particularly funny, where somebody tried to translate a text from English into German
and it was "a bat broke the window." And the translation in German was
"Eine Fledermaus zerbrach das Fenster mit einem Baseballschläger." So, translated back into
English: a bat, this flying mammal, broke the window with a baseball bat. And this seemed to be
the most plausible translation to the program because it somehow maximized the probability of translating
the concept bat into German in the same sentence. And this is a mistake that the transformer
model is not making, because it's tracking identity. And the attentional mechanism in the
transformer model is basically putting its finger on individual concepts, making sure that these
concepts pop up later in the text, basically tracking the individuals through the text.
And this is why the system can learn things that other systems couldn't before it, which makes
it possible, for instance, to write a text that talks about a scientist, where the scientist
has a name and a pronoun, and it gets a consistent story about that thing. What it
does not do is fully integrate this. So the meaning falls apart at some point.
It loses track of this context. It does not yet understand that everything that it says has to
refer to the same universe. And this is where this thing falls apart. But the attention in
the transformer model does not go beyond tracking identity. And tracking identity is an important
part of attention, but it's a different, very specific attentional mechanism. And it's not
the one that gives rise to the type of consciousness that we have.
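The transformer attention being discussed, each token scoring every other token and taking a weighted sum of their values, can be sketched as scaled dot-product attention. The vectors below are tiny hand-made examples, not anything from a real model, where queries, keys, and values would be learned projections:

```python
import math

# Scaled dot-product attention: each query scores every key, softmax turns
# the scores into weights, and the output is the weighted sum of the values.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three "tokens"; the query points strongly at the first key, which is one way
# a model keeps its finger on an individual ("the scientist", "she") in a text.
keys    = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
values  = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
queries = [[5.0, 0.0]]          # strongly matches key 0
print(attention(queries, keys, values))
```

This is exactly the identity-tracking mechanism Bach describes: the weights put most of the mass on one token, so that token's value dominates the output.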
Okay, just to linger on it, what do you mean by identity in the context of language?
So when you talk about language, we have different words that can refer to the same concept.
Got it. And in the sense that...
It's a space of concepts.
So... Yes. And it can also be in a nominal sense or in an indexical sense, that you say
this word does not only refer to this class of objects, but it refers to a definite object,
to some kind of agent that weaves their way through the story and is only referred to in
different ways in the language. So the language is basically a projection from a conceptual
representation from a scene that is evolving into a discrete string of symbols. And what the
transformer is able to do, it learns aspects of this projection mechanism that other models couldn't
learn. So have you ever seen an artificial intelligence or any kind of construction idea
that allows for, unlike neural networks or perhaps within neural networks, that's able to form
something where the space of concepts continues to be integrated? So what you're describing,
building a knowledge base, building this consistent, larger and larger sets of ideas that would then
allow for a deeper understanding.
Wittgenstein thought that we can build everything from language, from basically a logical grammatical
construct. And I think to some degree, this was also what Minsky believed. So that's why
he focused so much on common sense reasoning and so on. And a project that was inspired by him was
Cyc. That was basically- That's still going on. Yes. Of course, ideas don't die, only people die.
That's true, but- And Cyc is a productive project. It's just probably not one that is going to
converge to general intelligence. The thing that Wittgenstein couldn't solve, and he looked at this
in his book at the end of his life, Philosophical Investigations, was the notion of images. So
images play an important role in the Tractatus, the Tractatus being an attempt to basically turn philosophy
into a logical programming language, to design a logical language in which you can do actual
philosophy that's rich enough for doing this. And the difficulty was to deal with perceptual
content. And eventually, I think he decided that he was not able to solve it. And I think this
preempted the failure of the logicist program in AI. And the solution, as we see it today, is we need
more general function approximation. There are functions, geometric functions, that we learn
to approximate, that cannot be efficiently expressed and computed in a grammatical language.
We can, of course, build automata that go via number theory and so on and to learn in algebra
and then compute an approximation of this geometry. But to equate language and geometry
is not an efficient way to think about it. So, functionally, you kind of just said that the
approach that neural networks take is actually more general than
what can be expressed through language. Yes. So what can be efficiently expressed
through language at the data rates at which we process grammatical language?
Okay, so you don't think languages, so you disagree with Wittgenstein that language is not
fundamental to? I agree with Wittgenstein. I just agree with the late Wittgenstein. And
I also agree with the beauty of the early Wittgenstein. I think that the Tractatus itself is
probably the most beautiful philosophical text that was written in the 20th century.
But language is not fundamental to cognition and intelligence and consciousness?
So I think that language is a particular way or the natural language that we're using is a
particular level of abstraction that we use to communicate with each other. But the languages
in which we express geometry are not grammatical languages in the same sense. So they work slightly
different. They're more general expressions of functions. And I think the general nature of a
model is you have a bunch of parameters. These have a range. These are the variances of the world.
And you have relationships between them, which are constraints, which say if certain parameters
have these values, then other parameters have to have the following values. And this is a very
early insight in computer science. And I think some of the earliest formulations is the Boltzmann
machine. And the problem with the Boltzmann machine is that while it has a measure of whether it's
good, which is basically the energy of the system, the amount of tension that you have left in the
constraints where the constraints don't quite match. Despite having this
global measure, it's very difficult to train. Because as soon as you add more than trivially few
parameters into the system, it's very difficult to get it to settle into the right architecture.
And so the solution that Hinton and Sejnowski found was to use a restricted Boltzmann machine,
which loses the internal links within a layer of the Boltzmann machine and basically only
has an input and output layer. But this limits the expressivity of the Boltzmann machine.
So now he builds a network of these small, primitive Boltzmann machines. And in some sense,
you can see almost continuous development from this to the deep learning models that we're using
today, even though we don't use Boltzmann machines at this point. But the idea of the
Boltzmann machine is you take this model, you clamp some of the values to perception,
and this forces the entire machine to go into a state that is compatible with the states that
you currently perceive. And this state is your model of the world. I think it's a very general
way of thinking about models. But we have to use a different approach to make it work. And this is,
we have to find different mechanisms that train the Boltzmann machine. So the mechanism that trains
the Boltzmann machine and the mechanism that makes the Boltzmann machine settle into its state
are distinct from the constrained architecture of the Boltzmann machine itself.
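The clamp-and-settle picture described here can be sketched as a minimal restricted Boltzmann machine. This is an illustrative toy, not anything from the conversation: the layer sizes, learning rate, and training pattern are invented, and training uses one-step contrastive divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy restricted Boltzmann machine: a visible and a hidden layer only,
# with no links inside a layer (the "restriction").
n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One step of contrastive divergence: reduce the leftover 'tension'
    (energy) between the clamped perception v0 and the model's state."""
    ph0 = sigmoid(v0 @ W + b_h)                 # hidden settles given clamped v0
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    v1 = sigmoid(h0 @ W.T + b_v)                # model's reconstruction of v0
    ph1 = sigmoid(v1 @ W + b_h)
    return lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

# Clamp the visible units to one repeated "perception" and train.
pattern = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(200):
    W += cd1_update(pattern)

# After training, the settled state reconstructs the perception.
recon = sigmoid(sigmoid(pattern @ W + b_h) @ W.T + b_v)
```

After training, `recon` is close to the clamped pattern: the machine has settled into a state compatible with what it "perceives", which is the general picture of a model described here.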
The kind of mechanism that we want to develop, you're saying?
Yes. So this is the direction in which I think our research is going to go.
It's going to, for instance, what you notice in perception is our perceptual models of the world
are not probabilistic but possibilistic, which means you should be able to perceive
things that are improbable but possible. A perceptual state is valid not if it's probable,
but if it's possible, if it's coherent. So if you see a tiger coming after you,
you should be able to see this even if it's unlikely. And the probability is
necessary for convergence of the model. So given the set of possibilities
that is very, very large and a set of perceptual features,
how should you change the states of the model to get it to converge with your perception?
But the space of ideas that are coherent with the context that you're sensing
is perhaps not as large. I mean, that's perhaps pretty small.
The degree of coherence that you need to achieve depends, of course,
how deep your models go. That is, for instance, politics is very simple when you know very little
about game theory and human nature. So the younger you are, the more obvious it is how
politics should work, because you get a coherent aesthetics from relatively few inputs. And the
more layers of reality you model, the harder it gets to satisfy all the
constraints. So the current neural networks are fundamentally supervised learning systems:
feed-forward networks that use backpropagation to learn. What's your intuition
about what kind of mechanisms might we move towards to improve the learning procedure?
I think one big aspect is going to be meta learning. And architecture search starts in this
direction. In some sense, the first wave of classical AI worked by identifying a problem
and a possible solution and implementing the solution: a program that plays chess.
Right now, we are in the second wave of AI. So instead of writing the algorithm that
implements the solution, we write an algorithm that automatically searches
for an algorithm that implements the solution. So the learning system, in some sense,
is an algorithm that itself discovers the algorithm that solves the problem, like Go.
Go is too hard to implement the solution by hand, but we can implement an algorithm that finds the
solution. Let's move to the third stage. The third stage would be meta learning. Find an
algorithm that discovers a learning algorithm for the given domain. Our brain is probably not a
learning system, but a meta learning system. This is one way of looking at what we are doing.
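The three waves can be caricatured in a few lines. Everything below is invented for illustration: the task, the one-parameter inner learner, and the candidate learning rates. The "second wave" is the gradient-descent inner loop; the "third wave" is the outer search over the learner itself.

```python
import random

random.seed(0)

# Second wave: an algorithm (gradient descent) searches for the algorithm
# (here, a one-parameter linear model) that solves the task y = 3x.
def train(lr, steps=100):
    w = 0.0
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        err = w * x - 3.0 * x      # prediction error against the target 3x
        w -= lr * err * x          # the inner learning algorithm
    return abs(w - 3.0)            # how far the learned solution is from truth

# Third wave, caricatured: meta-learning as a search over learning
# algorithms themselves -- here just over the inner loop's learning rate.
candidates = [1e-4, 1e-2, 0.5]
best_lr = min(candidates, key=lambda lr: train(lr))
```

The outer loop keeps whichever inner learner converges best; architecture search generalizes this from a single hyperparameter to the structure of the learner itself.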
There is another way, if you look at the way our brain is, for instance, implemented. There is no
central control that tells all the neurons how to wire up. Instead, every neuron is an individual
reinforcement learning agent. Every neuron is a single celled organism that is quite complicated
and, in some sense, quite motivated to get fat. And it gets fat if it fires on average at the right
time. And the right time depends on the context that the neuron exists in, which is the electrical
and chemical environment that it has. So it basically has to learn a function over its environment
that tells it when to fire to get fat. Or if you see it as a reinforcement learning agent,
every neuron is, in some sense, making a hypothesis when it sends a signal and tries to pipe a signal
through the universe and tries to get positive feedback for it. And the entire thing is set up
in such a way that it's robustly self-organizing into a brain, which means you start out with
different neuron types that have different priors on which hypothesis to test and how to get its
reward. And you put them into different concentrations in a certain spatial alignment,
and then you will train it in a particular order. And as a result, you get a well-organized brain.
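The picture of each neuron as a reinforcement learning agent "trying to get fat" can be caricatured like this. The reward rule and all numbers are invented; the point is only that many independent agents, each adjusting its own firing hypothesis against a shared context, self-organize with no central controller.

```python
import random

random.seed(1)

class NeuronAgent:
    """Toy neuron: learns a firing threshold over its local context and is
    rewarded (gets 'fat') when it fires at the right time."""
    def __init__(self):
        self.threshold = random.random()  # its prior: which hypothesis to test
        self.fat = 0.0

    def step(self, context):
        fired = context > self.threshold
        # Invented environment rule: firing is "right" when context > 0.7.
        reward = 1.0 if fired == (context > 0.7) else -1.0
        self.fat += reward
        if reward < 0:  # nudge the hypothesis after negative feedback
            self.threshold += 0.05 if fired else -0.05
            self.threshold = min(max(self.threshold, 0.0), 1.0)

agents = [NeuronAgent() for _ in range(50)]
for _ in range(2000):
    context = random.random()  # shared electrical/chemical environment
    for agent in agents:
        agent.step(context)

# With no central control, the agents' thresholds cluster around the
# environment's hidden rule (0.7).
mean_threshold = sum(a.threshold for a in agents) / len(agents)
```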
Yeah. So the brain is a meta-learning system with a bunch of reinforcement learning agents.
And what I think you said, but just to clarify, there's no centralized
government that tells you, here's a loss function.
Like who says what's the objective? There are also governments which impose loss functions
on different parts of the brain. So we have differential attention. Some areas in your
brain get especially rewarded when you look at faces. If you don't have that, you will get
prosopagnosia, which basically means the inability to tell people apart by their faces.
And the reason that happens is because it had an evolutionary advantage. So evolution comes
into play here. But it's basically an extraordinary attention that we have for faces. I don't think
that people with prosopagnosia have a defective brain. The brain just has an average
attention for faces. So people with prosopagnosia don't look at faces more than they look at cups.
So the level at which they resolve the geometry of faces is not higher than the one for cups.
And people that don't have prosopagnosia look obsessively at faces. For you and me,
it's impossible to move through a crowd without scanning the faces. And as a result,
we make insanely detailed models of faces that allow us to discern mental states of people.
So obviously we don't know 99% of the details of this meta-learning system that's our mind.
Okay. But still we took a leap from something much dumber to that through the evolutionary
process. Can you first of all maybe say how big of a leap that is, to our brain
from our ape ancestors, from multi-cell organisms? And is there something we can think about
as we start to think about how to engineer intelligence? Is there something we
can learn from evolution? In some sense, life exists because of the market opportunity
of controlled chemical reactions. We compete with dumb chemical reactions and we win in some
areas against this dumb combustion because we can harness those entropy gradients where you
need to add a little bit of energy in a specific way to harvest more energy.
So we out-compete combustion? Yes, in many regions we do. We try very hard because
when we are in direct competition, we lose. Because the combustion is going to close the
entropy gradients much faster than we can run. Yeah. So basically we do this because every cell
has a Turing machine built into it. It's like literally a read-write head on a tape.
And so everything that's more complicated than a molecule that just is a vortex around attractors
needs a Turing machine for its regulation. And then you bind cells together and you get
the next level of organization, an organism, where the cells together implement some kind of software.
And for me, a very interesting discovery in the last year was the word spirit because I
realized that what spirit actually means is an operating system for an autonomous robot.
And when the word was invented, people needed this word. But they didn't have
robots that they built themselves. The only autonomous robots that were known were people,
animals, plants, ecosystems, cities, and so on. And they all had spirits. And it makes sense to
say that the plant is an operating system, right? If you pinch the plant in one area,
then it's going to have repercussions throughout the plant. Everything in the plant is in some
sense connected into some global aesthetics like in other organisms. An organism is not a collection
of cells, it's a function that tells cells how to behave. And this function is not implemented as some
kind of supernatural thing, like some morphogenetic field. It is an emergent result of the interactions
of each cell with each other cell, right? Oh my God. So what you're saying is the organism
is a function that tells the cells what to do. And the function emerges from the interaction of the cells.
Yes. So it's basically a description of what the plant is doing in terms of macro states.
And the micro states, the physical implementation, are too many to describe. So
the software that we use to describe what the plant is doing, the spirit of the plant is the
software, the operating system of the plant, right? This is a way in which we, the observers,
make sense of the plant. And the same is true for people. So people have spirits,
which is their operating system in a way, right? And there's aspects of that operating system that
relate to how your body functions and others, how you socially interact, how you interact with
yourself and so on. And we make models of that spirit. And we think it's a loaded term because
it's from a pre-scientific age. But it took the scientific age a long time to rediscover a term
that is pretty much the same thing. And I suspect that the differences that we still see between
the old word and the new word are translation errors that have crept in over the centuries.
Can you actually linger on that? Why do you say that spirit, just to clarify,
because I'm a little bit confused. So the word spirit is a powerful thing. But why did you say
in the last year or so that you discovered this? Do you mean the same old traditional idea of a
spirit? Or do you mean- I tried to find out what people mean by spirit. When people say
spirituality in the US, it usually refers to the phantom limb that they develop in the absence of
culture. And a culture is, in some sense, you could say, the spirit of a society that is playing a long game.
This thing that becomes self-aware at a level above the individuals where you say,
if you don't do the following things, then the grand-grand-grandchildren of our children will
have nothing to eat. So if you take this long scope where you try to maximize the length
of the game that you are playing as a species, you realize that you're part of a larger thing
that you cannot fully control, you probably need to submit to the ecosphere instead of trying to
completely control it. There needs to be a certain level at which we can exist as a species if we
want to endure. And our culture is not sustaining this anymore. We basically made this bet with
the industrial revolution that we can control everything. And the modernist societies with
basically unfettered growth led to a situation in which we depend on the ability to control the
entire planet. And since we are not able to do that, as it seems, this culture will die. And we
realize that it doesn't have a future. We call our children Generation Z. That's a very
optimistic thing to do. Yeah, so you have this kind of intuition that our civilization, you said
culture, but you really mean the spirit of the civilization, the entirety of the civilization
may not exist for long. Yeah. Can you untangle that? What's your intuition behind that? So
you kind of offline mentioned to me that the industrial revolution was kind of the moment
we agreed to accept the offer, sign on the paper, on the dotted line with the industrial
revolution, we doomed ourselves. Can you elaborate on that? This is a suspicion. I of course don't know
how it plays out, but it seems to me that in a society in which you leverage yourself very far
over an entropic abyss without land on the other side, it's relatively clear that your
cantilever is at some point going to break down into this entropic abyss. And you have to pay the
bill. Okay. Russian is my first language. And I'm also an idiot. This is just two apes that, instead of
playing with a banana, are trying to have fun by talking. Okay. Anthropic what? And what's
anthropic? Entropic. Entropic. So entropic in the sense of entropy. Entropic, got it. Yes.
And entropic, what was the other word you used? Abyss. What's that? It's a big gorge. Oh, abyss.
Abyss, yes. Entropic abyss. So many of the things you say are poetic. It's
amazing, right? And mispronounced, which makes it even more poetic. Wittgenstein would be proud.
So entropic abyss. Okay, let's rewind then the Industrial Revolution. So how does that get us
into the entropic abyss? So in some sense, we burned 100 million years worth of trees
to get everybody plumbing. Yes. And the society that we had before that had a very limited number
of people. So basically, since zero BC, we hovered between 300 and 400 million people.
Yes. And this only changed with the Enlightenment and the subsequent Industrial Revolution.
And in some sense, the Enlightenment freed our rationality and also freed our norms
from the preexisting order gradually. It was a process that basically happened in feedback
loops. So it was not that just one caused the other. It was a dynamic that started.
And the dynamic worked by basically increasing productivity to such a degree that we could
feed all our children. And I think the definition of poverty is that you have as many children as
you can feed before they die, which is in some sense the state that all animals on earth are in.
The definition of poverty is that you can have only as many children as you can
feed, and if you have more, they die. And in our societies, you can basically have as many children
as you want and they don't die. So the reason why we don't have as many children as we want is
because we also have to pay a price: we would have to insert ourselves into a lower social stratum
if we have too many. So basically, everybody in the upper middle and lower upper
class has only a limited number of children because having more of them would mean a big
economic hit to their individual families because children, especially in the US,
are super expensive to have. And you are only taken out of this if you are basically super rich
or if you are super poor. If you're super poor, it doesn't matter how many kids you have because
your status is not going to change. And these children are largely not going to die of hunger.
So how does this lead to self-destruction? So there's a lot of unpleasant properties about
this process. So basically, what we try to do is we try to let our children survive even if they
have diseases like I would have died before my mid-20s without modern medicine. And most of
my friends would have as well. And so many of us wouldn't live without the advantages of modern
medicine and modern industrialized society. We get our protein largely by subduing the entirety
of nature. Imagine there would be some very clever microbe that would live in our organisms and would
completely harvest them and change them into a thing that is necessary to sustain itself. And
it would discover that, for instance, brain cells are kind of edible, but they're not quite nice.
So you need to have more fat in them and you turn them into more fat cells. And basically,
this big organism would become a vegetable that is barely alive and it's going to be very brittle
and not resilient when the environment changes. Yeah, but some part of that organism, the one
that's actually doing all the subduing, there'll still be somebody thriving.
So it relates back to this original question. I suspect that we are not the smartest thing
on this planet. I suspect that basically every complex system has to have some complex regulation
if it depends on feedback loops. And so, for instance, it's likely that we should ascribe
a certain degree of intelligence to plants. The problem is that plants don't have a nervous system.
So they don't have a way to telegraph messages over large distances almost instantly in the plant.
And instead, they will rely on chemicals between adjacent cells, which means the signals
propagate at a rate of a few millimeters per second. And as a
result, if the plant is intelligent, it's not going to be intelligent at similar timescales.
Yeah, but the time scale is different. So you suspect we might not be the most
intelligent, but we're the most intelligent in this spatial scale and our time scale.
So basically, if you would zoom out very far, we might discover that there have been intelligent
ecosystems on the planet that existed for thousands of years in an almost undisturbed state.
And it could be that these ecosystems actively regulated the environment. So basically,
changed the course of the evolution within this ecosystem to make it more efficient and less
brittle. So it's possible something like plants is actually a set of living organisms,
an ecosystem of living organisms that are just operating a different time scale and are far
superior in intelligence to human beings. And then human beings will die out and plants will
still be there. Yeah, there's also an evolutionary adaptation playing
a role at all of these levels. For instance, if mice don't get enough food and get stressed,
the next generation of mice will be more sparse and more scrawny. And the reason for this is because
in a natural environment, the mice have probably hit a drought or something else.
And if they overgraze, then all the things that sustain them might go extinct. And there will be
no mice a few generations from now. So to make sure that there will be mice in five generations
from now, basically the mice scale back. And a similar thing happens with the predators of mice,
they should make sure that the mice don't completely go extinct. So in some sense,
if the predators are smart enough, they will be tasked with shepherding their food supply.
Maybe the reason why lions have much larger brains than antelopes is not so much because
it's so hard to catch an antelope, as opposed to running away from the lion. But the lions need to
make complex models of their environment, more complex than the antelopes. So first of all,
just describing that there's a bunch of complex systems and human beings may not even be the most
special or intelligent of those complex systems, even on earth, makes me feel a little better about
the extinction of the human species that we're talking about. Yes, maybe we're just Gaia's ploy to put
the carbon back into the atmosphere. Yeah, this is just a nice thing we tried out. The big stain on
evolution is not us, it was trees. Earth first evolved trees before they could be digested again, right?
There were no insects that could break all of them apart. Cellulose is so robust that you cannot get
all of it with microorganisms. So many of these trees fell into swamps. And all this carbon
became inert and could no longer be recycled into organisms. And we are this species that is destined
to take care of that, to get it out of the ground, put it back into the atmosphere,
and the earth is already greening. So within a million years or so, when the ecosystems have
recovered from the rapid changes that they're not compatible with right now, it's going to be
awesome again. And there won't be even a memory of us little apes. I think there will be memories
of us. I suspect we are the first generally intelligent species in this sense. We are the
first species with an industrial society because we will leave more phones than bones in the stratosphere.
Oh, I see. Phones than bones. I like it. But then let me push back. You've kind of suggested
that we have a very narrow definition of, I mean, why aren't trees a higher level of general
intelligence? If trees were intelligent, then they would be at different timescales, which
means within a hundred years, the tree is probably not going to make models that are as complex as
the ones that we make in 10 years. But maybe the trees are the ones that made the phones, right?
Right. You could say the entirety of life did it. The first cell never died. The first cell
only split and divided, right? And every cell in our body is still an instance of the first
cell that split off from that very first cell. There was only one cell on this planet as far as
we know. And so the cell is not just a building block of life. It's a hyperorganism, right? And
we are part of this hyperorganism. So nevertheless, this hyperorganism, or this little particular
branch of it, which is us humans, because of the industrial revolution, and maybe the exponential
growth of technology might somehow destroy ourselves. So what do you think is the most
likely way we might destroy ourselves? So some people worry about genetic manipulation. Some
people, as we've talked about, worry about either dumb artificial intelligence or super
intelligent artificial intelligence destroying us. Some people worry about nuclear weapons and
weapons of war in general. What do you think? If you had to, if you were a betting man,
what would you bet on in terms of self-destruction? And would it be higher than 50%?
So it's very likely that nothing that we bet on matters
after we win our bets. So I don't think that bets are literally the right way to go about this.
I mean, once you're dead, you won't be there to collect the winnings.
So it's also not clear if we as a species go extinct. But I think that our present civilization is
not sustainable. So the thing that will change is there will be probably fewer people on the
planet than there are today. And even if not, then still most of the people that are alive today
will not have offspring in 900 years from now because of the geographic changes and so on and
the changes in the food supply. It's quite likely that many areas of the planet will only be livable
with a closed cooling chain in 900 years from now. So many of the areas around the equator and in
subtropical climates that are now quite pleasant to live in will cease to be habitable without
air conditioning. So, honestly, wow. Cooling chain. Closed cooling chain communities.
So you think you have a strong worry about the effects of global warming that we're seeing?
By itself, it's not the big issue. If you live in Arizona right now, you have basically three months
in the summer in which you cannot be outside. And so you have a closed cooling chain. You have
air conditioning in your car and in your home and you're fine. And if the air conditioning would stop
for a few days, then in many areas, you would not be able to survive, right?
Can we just pause for a second? You say so many brilliant, poetic things like,
what is a, is that, do people use that term closed cooling chain?
I imagine that people use it when they describe how they get meat into a supermarket, right?
If you break the cooling chain and this thing starts to thaw, you're in trouble and you have
to throw it away. That's such a beautiful way to put it. It's like calling a city a closed
social chain or something like that. I mean, that's right. I mean, the locality of it is really
important. It basically means you wake up in a climatized room, you go to work in a climatized
car, you work in the office, you shop in a climatized supermarket. And in between, you
have very short distance which you run from your car to the supermarket, but you have to make sure
that your temperature does not approach the temperature of the environment. The crucial
thing is the wet bulb temperature. The what? The wet bulb temperature. It's what you get
when you take a wet cloth and put it around your thermometer and then move it very
quickly through the air. So you get the evaporation heat. And as soon as you can no longer cool your
body temperature via evaporation to a temperature below something like I think 35 degrees, you die.
And which means if the outside world is dry, you can still cool yourself down by sweating.
But if it has a certain degree of humidity or if it goes over a certain temperature,
then sweating will not save you. And this means even if you're a healthy fit individual within a
few hours, even if you try to be in the shade and so on, you'll die. Unless you have some
climatizing equipment. And this in itself is fine: as long as you maintain civilization and you have
energy supply and you have food trucks coming to your home that are climatized, everything is fine.
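The wet-bulb limit described above can be estimated with Stull's empirical approximation, a published regression fit (valid roughly for 5-99% relative humidity and -20 to 50 °C air temperature), not something derived in the conversation; the ~35 °C survivability threshold is the figure mentioned above.

```python
import math

def wet_bulb_stull(t_c, rh_pct):
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), using Stull's 2011 empirical fit."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# A dry 45 C day is survivable by sweating; a humid 38 C day may not be.
dry = wet_bulb_stull(45, 10)    # wet bulb stays far below the ~35 C limit
humid = wet_bulb_stull(38, 90)  # wet bulb approaches the lethal range
```

At 45 °C and 10% humidity the wet bulb comes out in the low 20s, while at 38 °C and 90% humidity it is above 36 °C, which is why humidity rather than raw temperature sets the survivability limit.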
But what if you lose a large scale open agriculture at the same time? So basically you run into food
insecurity because climate becomes very irregular or weather becomes very irregular. And you have a
lot of extreme weather events. So you need to grow most of your food maybe indoors, or you need to
import your food from certain regions. And maybe you're not able to maintain the civilization
throughout the planet to get the infrastructure to get the food to your home.
Right. But there could be so there could be significant impacts in the sense that people
begin to suffer. There could be wars over resources and so on. But ultimately, do you
not have a, not a faith, but what do you make of the capacity of technological innovation
to help us prevent some of the worst damages that this condition can create? So as an example,
as an almost out there example, is the work that SpaceX and Elon Musk are doing of trying to
also consider our propagation throughout the universe in deep space to colonize other planets.
That's one technological step. But of course, what Elon Musk is trying on Mars is not to
save us from global warming because Mars looks much worse than Earth will look like after the
worst outcomes of global warming imaginable, right? Mars is essentially not habitable.
That's an exceptionally harsh environment. Yes. But what he is doing, what a lot of people throughout
history since the industrial revolution have been doing, is just a lot of different technological
innovation with some kind of target. And what ends up happening is totally unexpected new
things come up. So trying to terraform or colonize Mars, an extremely harsh
environment might give us totally new ideas of how to expand or increase the power of this
closed cooling circuit that empowers the community. So like, it seems like there's
a little bit of a race between our open-ended technological innovation of this communal
operating system that we have and our general tendency to want to overuse resources and
thereby destroy ourselves. You don't think technology can win that race?
I think the probability is relatively low given that our technology, for instance in
the US, has been stagnating since roughly the 1970s. In terms of technology, most of the things that
we do are the result of incremental processes. What about Intel? What about Moore's law?
It's basically very incremental. The invention
of the microprocessor was a major thing, right? The miniaturization of transistors was really
major. But the things that we did afterwards largely were not that innovative. So we had
gradual changes of scaling things into GPUs and things like that. But I don't think that there
are many fundamentally new things. If you take a person that died in the 70s and was
at the top of their game, they would not need to read that many books to be current again.
But is it all about books? Who cares about books? There might be things that are beyond books.
Or say papers... No, forget papers. There might be things that are...
So papers and books and knowledge, that's a concept of a time when you were sitting there by candle
light as individual consumers of knowledge. What about the impact, that we're now in the
middle of and might not be understanding, of Twitter, of YouTube? The reason you and I are
sitting here today is because of Twitter and YouTube. So the ripple effect, and there's two
minds, sort of two dumb apes, are coming up with perhaps new clean insights. And there are
200,000 other apes listening right now. And that effect,
it's very difficult to understand what that effect will have. That might be bigger than any of the
advancement of the microprocessor or the industrial revolution, the ability to spread knowledge.
And that knowledge, it allows good ideas to reach millions much faster. And the effect of that,
that might be the new, that might be the 21st century, is the multiplying of ideas,
of good ideas. Because if you say one good thing today, that will multiply across
huge amounts of people. And then they will say something, and then they will have
another podcast, and they'll say something, and then they'll write a paper. That could be a huge,
and you don't think that... Yeah, we should have billions of von Neumanns and Turings
right now, and we don't for some reason. I suspect the reason is that we destroy our attention
span. Also the incentives are, of course, different. But the reason why we are sitting here and doing
this as a YouTube video is because you and me don't have the attention span to write a book
together right now. And you guys probably don't have the attention span to read it. So let me tell
you... But I guarantee you, they're still listening. It's burst, take care of your attention. It's very
short. But we're an hour and 40 minutes in, and I guarantee you that 80% of the people are still
listening. So there is an attention span. It's just the form. Who said that the book is the
optimal way to transfer information? That's still an open question. I mean, is there something that social media could be doing that other forms could not be doing? I think the end
game of social media is a global brain. And Twitter is, in some sense, a global brain that is
completely hooked on dopamine, doesn't have any kind of inhibition. And as a result, it's
caught in a permanent seizure. It's also, in some sense, a multiplayer role-playing game.
And people use it to play an avatar that is not like who they are in the real world. And
they look through the world through the lens of their phones and think it's the real world. But
it's the Twitter world that is distorted by the popularity incentives of Twitter.
Yeah, the incentives and just our natural biological, the dopamine rush of a like,
no matter how... I try to be very kind of zen-like and minimalist and not be influenced by likes
and so on. But it's probably very difficult to avoid that, to some degree. Speaking of a small
tangent of Twitter, how can Twitter be done better? I think it's an incredible mechanism that has a
huge impact on society by doing exactly what you're doing. Sorry, doing exactly what you described,
which is having this... We're like, this is some kind of game and we're kind of individual RL agents
in this game. And it's uncontrollable because there's not really a centralized control. Neither
Jack Dorsey nor the engineers at Twitter seem to be able to control this game. Or can they?
That's sort of a question. Is there any advice you would give on how to control this game?
I can't really give advice, because I am certainly not an expert, but I can give my thoughts on this. And our brain
has solved this problem to some degree, right? Our brain has lots of individual agents that
manage to play together in a way. And we have also many contexts in which other organisms
have found ways to solve the problems of cooperation that we don't solve on Twitter.
And maybe the solution is to go for an evolutionary approach. So imagine that you
have something like Reddit or something like Facebook and something like Twitter. And do
you think about what they have in common? What they have in common? They're companies
that in some sense own a protocol. And this protocol is imposed on a community. And the
protocol has different components for monetization, for user management, for user display, for rating,
for anonymity, for import of other content, and so on. And now imagine that you take these components of the protocol apart and, in some sense, let the communities that visit this social network mix and match their protocols and design new ones. So for instance, the UI and the UX can be defined by the community. The rules for sharing content across communities can be defined. The monetization can be redefined. The way you reward individual users for what they do can be redefined. The way users represent themselves to each other can be redefined. Who could be the redefiner? So can individual human beings build enough
intuition to redefine those things? It itself can become part of the protocol. So for instance,
it could be in some communities, it will be a single person that comes up with these things.
And others, it's a group of friends. Some might implement a voting scheme that has some interesting
weighted voting. Who knows? Who knows what will be the best self-organizing principle for this?
But the process can't be automated? I mean, it seems like it can be automated, so people can write software for this. And eventually, the idea is, let's not make an assumption about this
thing if you don't know what the right solution is. In most areas, we have no idea whether the
right solution will be people designing this ad hoc or machines doing this, whether you want to
enforce compliance by social norms like Wikipedia or with software solutions or with AI that goes
through the posts of people or with a legal principle and so on. This is something maybe you
need to find out. And so the idea would be if you let the communities evolve and you just control
it in such a way that you are incentivizing the most sentient communities, the ones that produce
the most interesting behaviors that allow you to interact in the most helpful ways to the
individuals. You have a network that gives you information that is relevant to you. It helps
you to maintain relationships to others in healthy ways. It allows you to build teams. It allows you
to basically bring the best of you into this thing and goes into a coupling into a relationship
with others in which you produce things that you would be unable to produce alone.
Yes, beautifully put. But the key part of that process, with incentives and evolution, is that things that don't adapt themselves to the incentives effectively have to die. And the thing about social media is that communities that are unhealthy, or whatever you define as failing the incentives, really don't like dying. One of the things that people protest really aggressively is being censored, especially in America. I don't know much about the rest of the world, but the idea of freedom of speech, the idea of censorship, is really painful in America. And so
what do you think about that, having grown up in East Germany? Do you think censorship is an important tool, in our brain, in intelligence, and in social networks? So basically, if you're not a good member of the entirety of the system, you should be blocked away, well, locked away, blocked. An important thing is who decides that you're a good member.
Who? Is it distributed? And what is the outcome of the process that decides it?
Both for the individual and for society at large. For instance, if you have a high trust
society, you don't need a lot of surveillance. And the surveillance is even in some sense undermining
trust because it's basically punishing people that look suspicious when surveilled but do the
right thing anyway. And the opposite, if you have a low trust society, then surveillance can be a
better trade-off. And the US is currently making a transition from a relatively high trust or mixed
trust society to a low trust society. So surveillance will increase. Another thing is that
beliefs are not just inert representations. They are implementations that run code on your brain and change your reality, and change the way you interact with each other at some level.
And some of the beliefs are just public opinions that we use to display our alignment. So for
instance, people might say all cultures are the same and equally good, but still they prefer to
live in some cultures over others, very, very strongly so. And it turns out that the cultures
are defined by certain rules of interaction. And these rules of interaction lead to different
results when you implement them. So if you adhere to certain rules, you get different outcomes in
different societies. And this all leads to very tricky situations when people do not have a commitment
to shared purpose. And our societies probably need to rediscover what it means to have a shared
purpose and how to make this compatible with a non-totalitarian view. So in some sense,
the US is caught in a conundrum between totalitarianism and diversity and how to resolve
this. And the solutions that the US has found so far are very crude because it's a very young
society that is also under a lot of tension. It seems to me that the US will have to reinvent
itself. What do you think? Just philosophizing, what kind of mechanisms of government do you think
we as a species should be evolving with US or broadly? What do you think will work well
as a system? Of course, we don't know. It all seems to work pretty crappily. Some things worse
than others. Some people argue that communism is the best. Others say, yeah, look at the Soviet
Union. Some people argue that anarchy is the best and then completely discarding the positive
effects of government. There's a lot of arguments. US seems to be doing pretty damn well in the span
of history. There's respect for human rights, which seems to be a nice feature, not a bug.
And economically, a lot of growth, a lot of technological development, people seem to be
relatively kind on the grand scheme of things. What lessons do you draw from that? What kind
of government system do you think is good? Ideally, government should not be perceivable.
It should be frictionless. The more you notice the influence of the government,
the more friction you experience, the less effective and efficient the government probably is.
Right? So a government, game-theoretically, is an agent that imposes an offset on your payoff matrix to make your Nash equilibrium compatible with the common good.
Right? So you have these situations where people act on the local incentives. And these local
incentives, everybody does the thing that's locally the best for them, but the global outcome is
not good. And this is even the case when people care about the global outcome, because no regulation mechanism exists that creates a causal relationship between what I want to have for the global good and what I do. So for instance, if I think that we should fly less and I stay at home, there's not a single plane that is going to stay on the ground because of me, right? It's not going to have an influence, but I don't get from A to B. So the way to implement this would basically be to have a government that shares this idea that we should fly less, and then imposes a regulation that, for instance, makes flying more expensive and gives incentives for inventing other forms of transportation that put less strain on the environment.
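The payoff-offset idea can be made concrete with a toy two-player game: defection (say, keeping flying) dominates until a government imposes an offset, here a flat tax on defecting, after which cooperation becomes the Nash equilibrium. The numbers are invented purely for illustration.

```python
# Prisoner's-dilemma-style payoffs: payoffs[(my_move, their_move)] = my payoff.
# Moves: 0 = cooperate (e.g. fly less), 1 = defect (e.g. keep flying).
# All numbers are illustrative, not from the conversation.
base = {
    (0, 0): 3,  # both cooperate: good global outcome
    (0, 1): 0,  # I cooperate alone: I bear the cost, no global benefit
    (1, 0): 5,  # I defect alone: best local payoff
    (1, 1): 1,  # both defect: bad global outcome
}

def best_response(payoffs, their_move):
    """The move that maximizes my payoff given the other player's move."""
    return max((0, 1), key=lambda my_move: payoffs[(my_move, their_move)])

def is_nash(payoffs, profile):
    """A profile is a Nash equilibrium if neither player wants to deviate.
    The game is symmetric, so best-response checks cover both players."""
    a, b = profile
    return best_response(payoffs, b) == a and best_response(payoffs, a) == b

# Without intervention, mutual defection is the equilibrium.
assert is_nash(base, (1, 1)) and not is_nash(base, (0, 0))

# The "government" imposes an offset on the payoff matrix:
# a tax of 3 on defecting, regardless of what the other player does.
tax = 3
adjusted = {(m, t): p - (tax if m == 1 else 0) for (m, t), p in base.items()}

# Now mutual cooperation is the equilibrium.
assert is_nash(adjusted, (0, 0)) and not is_nash(adjusted, (1, 1))
```

Without the offset, each player's best response is to defect no matter what the other does; the tax changes the local incentives without requiring anyone to act on the global outcome directly.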
So there's so much optimism and so many things you describe, and yet there's the pessimism of
you think our civilization is going to come to an end. So that's not 100% probability,
nothing in this world is. So what's the trajectory out of self-destruction, do you think?
I suspect that in some sense, we are both too smart and not smart enough,
which means we are very good at solving near-term problems. And at the same time, we are unwilling to submit to the imperatives that we would have to follow if we want to stick around. So that makes it difficult. If you were able to solve everything technologically, you could probably understand how high the child mortality needs to be to absorb the mutation rate, and how high the mutation rate needs to be to adapt to a slowly changing ecosystemic environment.
So you could, in principle, compute all these things game theoretically and adapt to it.
But if you cannot do this because you are like me and you have children, you don't want them to die,
you will use any kind of medical intervention to keep child mortality low, even if it means that, over future generations, we have enormous genetic drift and most of us have allergies as a result of not being adapted to the changes that we made to our food supply.
That's fair. I'd say, technologically speaking, we're just very young, 300 years into the industrial revolution. We're very new to this idea. So you're attached to your kids being alive and not being murdered for the good of society, but that might be a very temporary moment in time.
Yes. That we might evolve in our thinking. So like you said, we're both smart and not
smart enough. We are probably not the first human civilization that has discovered technology that allows us to efficiently overgraze our resources. And with this overgrazing, at some point we think we can compensate for it, because if we have eaten all the grass, we will find a way to grow mushrooms. But it could also be that the ecosystems tip. And so what really concerns
me is not so much the end of the civilization because we will invent a new one. But what concerns
me is the fact that, for instance, the oceans might tip. So for instance, maybe the plankton
dies because of ocean acidification and cyanobacteria take over. And as a result, we can no longer
breathe the atmosphere. This would be really concerning. So basically a major reboot of
most complex organisms on earth. And I think this is a possibility. I don't know about the
percentage for this possibility, but it doesn't seem to be outlandish to me if you look at the
scale of the changes that we've already triggered on this planet. And so Danny Hillis suggests that, for instance, we may be able to put chalk into the stratosphere to limit solar radiation.
Maybe it works. Maybe this is sufficient to counter the effects of what we've done.
Maybe it won't be. Maybe we won't be able to implement it by the time it's relevant.
I have no idea how the future is going to play out in this regard. It's just,
I think it's quite likely that we cannot continue like this. All our cousin species, the other hominids, are gone. So the right step would be to what? To rewind to before the industrial revolution, to try to contain the technological process that leads to the overconsumption of resources? Imagine you get to choose. You have one lifetime.
You get born into a sustainable agricultural civilization, 300, maybe 400 million people
on the planet tops. Or before this, some kind of nomadic species with like a million or two
million. And so you don't meet new people unless you give birth to them. You cannot travel to
other places in the world. There is no internet. There is no interesting intellectual tradition
that reaches considerably deep. So you would not discover Turing completeness, probably, and so on.
So we wouldn't exist. And the alternative is you get born into an insane world.
One that is doomed to die because it has just burned 100 million years worth of trees in a
single century. Which one do you like? I think I like this one. It's a very weird thing that you find yourself on a Titanic, and you see this iceberg, and it looks like we are not going to
miss it. And a lot of people are in denial and most of the counter arguments sound like denial
to me. There don't seem to be rational arguments. And the other thing is we are born on this
Titanic. Without this Titanic we wouldn't have been born. We wouldn't be here. We wouldn't be
talking. We wouldn't be on the internet. We wouldn't do all the things that we enjoy.
And we are not responsible for this happening. It's basically if we had the choice, we would
probably try to prevent it. But when we were born, we were never asked when we want to be born,
in which society we want to be born, what incentive structures we want to be exposed to.
We have relatively little agency in the entire thing. Humanity has relatively little agency
in the whole thing. It's basically a giant machine that's tumbling down a hill, and everybody is frantically trying to push some buttons. Nobody knows what these buttons mean or what they connect to. And most of them are not stopping it as it tumbles down the hill.
Is it possible the artificial intelligence will give us
an escape hatch somehow? So there's a lot of worry about existential threats of artificial intelligence. But AI, and forms of automation in general, also allow the potential of extreme productivity growth that could transform society in a positive way, one that may allow us to return to the kind of ideals of being closer to nature represented in hunter-gatherer societies, without destroying the planet, without overconsumption, and so on. I mean, generally speaking, do you have hope
that AI can help somehow? I think it is not fun to be very close to nature until you completely
subdue nature. So our idea of being close to nature means being close to agriculture,
basically forests that don't have anything in them that eats us.
See, I want to disagree with that. I think the niceness of being close to nature is being fully present, when survival becomes not just your primary goal but your whole existence. I mean, I'm not just romanticizing; I can only speak for myself, but I am self-aware enough to know that that would be a fulfilling existence. I prefer to be in nature and not fight for my survival. I think fighting for your survival while being in the cold and in the rain, being hunted by animals and having open wounds, is very unpleasant. There's a contradiction in there. Yes, and you, just as you said, would not choose it. But if I was forced into it, it would be a fulfilling existence.
Yes, if you are adapted to it, basically, if your brain is wired up in such a way that you'll get
rewards optimally in such an environment. And there's some evidence for this that for a certain
degree of complexity, basically, people are more happy in such an environment because it's
what we largely have evolved for. In between, we had a few thousand years in which I think we have evolved for a slightly more comfortable environment. So there is probably something like an intermediate stage in which people would be more happy than they would be if they had to fend for themselves in small groups in the forest and often die, versus something like this, where we now have
basically a big machine, a big Mordor, in which we run through concrete boxes and press buttons on machines, and largely don't feel well cared for as the monkeys that we are.
So returning briefly to, not briefly, but returning to AI, what, let me ask a romanticized
question. What is the most beautiful to you, silly ape? The most beautiful, surprising idea
in the development of artificial intelligence, whether in your own life or in the history of
artificial intelligence that you've come across? If you build an AI, it probably can make models
at an arbitrary degree of detail of the world. And then it would try to understand its own nature.
It's tempting to think that at some point when we have general intelligence, we have competitions
where we will let the AIs wake up in different kinds of physical universes and we measure how
many movements of the Rubik's Cube it takes until it's figured out what's going on in its universe
and what it is in its own nature and its own physics and so on, right? So what if we exist in
the memory of an AI that is trying to understand its own nature and remembers its own genesis and
remembers Lex and Joscha sitting in a hotel, sparking some of the ideas that led to the
development of general intelligence. So we're a kind of simulation that's running in an AI system
that's trying to understand itself. It's not that I believe that, but I think it's a beautiful idea. I mean, you kind of return to this idea with the Turing test, of intelligence being the process of asking and answering, what is intelligence?
I mean, why do you think there is an answer? Why is there such a search for an answer?
So does there have to be, like, an answer? You just have an AI system that's trying to understand, you know, the why of it, to understand itself.
Is that a fundamental process of greater and greater complexity, greater and greater
intelligence? Is the continuous trying of understanding itself?
No, I think you will find that most people don't care about that because they're well adjusted enough
to not care. And the reason why people like you and me care about it probably has to do with the
need to understand ourselves. It's because we are in fundamental disagreement with the universe
that we wake up in. I look down on myself and I see, oh my God, I'm caught in a monkey. What's that? That's the feeling, right? Some people are unhappy with the government; I'm unhappy with the entire universe that I find myself in. Oh, so you don't think that's a fundamental
aspect of human nature that some people are just suppressing that they wake up shocked
they're in the body of a monkey? No, there is a clear adaptive value to not be confused by that.
Well, no, that's not what I asked. So, yeah, if there's clear adaptive value, then there's clear adaptive value, while fundamentally your brain is confused by that, to creating an illusion, another layer of narrative that tries to suppress that and instead
say that, you know, what's going on with the government right now is the most important
thing. What's going on with my football team is the most important thing. But it seems to me
the, like, for me, it was a really interesting moment reading Ernest Becker's The Denial of Death,
that, you know, this kind of idea that we're all, you know, the fundamental thing from which most
of our human mind springs is this fear of mortality and being cognizant of your mortality and the
fear of that mortality. And then you construct illusions on top of that. I guess you being,
just to push on it, you really don't think it's possible that this worry of the big existential
questions is actually fundamental as the existentialist thought to our existence.
I think that the fear of death only plays a role as long as you don't see the big picture. The thing
is that minds are software states, right? Software doesn't have identity. Software in some sense is
a physical law. But it feels like there's an identity. I thought that was the point of this particular piece of software, that the narrative it tells is a fundamental property of it. The
maintenance of the identity is not terminal. It's instrumental to something else. You maintain
your identity so you can serve your meaning. So you can do the things that you're supposed to do
before you die. And I suspect that for most people, the fear of death is the fear of dying before
they are done with the things that they feel they have to do, even though they cannot quite put their
finger on it, what that is. Right. But in the software world, to return to the question, then what happens after we die? Why would you care? You will no longer be there. The point of dying is that you are gone. Well, maybe I'm not. It seems like, in the idea that the mind is just a simulation that's constructing a narrative around some particular aspects of the quantum mechanical wave function world that we can't quite get direct access to, the idea of mortality seems to be fuzzy as well. Maybe there's not a clear
end. The fuzzy idea is the one of continuous existence. We don't have continuous existence.
How do you know that? Because it's not computable. Because you're saying it's going to be discrete? There is no continuous process. The only thing
that binds you together with the Lex Fridman from yesterday is the illusion that you have
memories about him. So if you want to upload, it's very easy. You make a machine that thinks
it's you, because it's the same thing that you are. You are a machine that thinks it's you.
But that's immortality. Yeah, but it's just a belief. You can create this belief very easily
once you realize that the question of whether you are immortal or not depends entirely on your beliefs about your own continuity. But then you can be immortal by the continuity of the belief.
You cannot be immortal, but you can stop being afraid of your mortality, because you realize you were never continuously existing in the first place. Well, I don't know if I'd be more terrified or less terrified by that. It seems like the fact that I existed...
Oh, so you don't know this state in which you don't have a self. You can turn off yourself,
you know? I can't turn off myself. You can turn it off. I can? Yes. So you
can basically meditate yourself in a state where you are still conscious. There are still things
are happening where you know everything that you knew before, but you're no longer identified with
changing anything. And this means that yourself in a way dissolves. There is no longer this person.
You know that this person construct exists in other states and it runs on the brain of
Lex Fridman. But it's not a real thing. It's a construct. It's an idea. And you can change
that idea. And if you let go of this idea, if you don't think that you are special,
you realize it's just one of many people and it's not your favorite person even, right? It's
just one of many. And it's the one that you are doomed to control for the most part,
and that is basically informing the actions of this organism as a control model. And this is
all there is. And you are somehow afraid that this control model gets interrupted
or loses the identity of continuity. Yeah, so I'm attached. I mean, yeah, it's a very popular,
it's a somehow compelling notion that being attached, like there's no need to be attached
to this idea of an identity. But that in itself could be an illusion that you construct. So the
process of meditation, while popularly thought of as getting underneath the concept of identity, could be just putting a cloak over it, just telling it to be quiet for the moment.
I think that meditation is eventually just a bunch of techniques that let you control
attention. And when you can control attention, you can get access to your own source code,
hopefully not before you understand what you're doing. And then you can change the way it works
temporarily or permanently. So yeah, meditation is to get a glimpse at the source code, get under,
so basically control or turn off the attention. The entire thing is that you learn to control
attention. So everything else is downstream from controlling attention. And control the attention
that's looking at the attention. Normally, we only get attention in the parts of our mind that
create heat, where you have a mismatch between model and the results that are happening. And
so most people are not self-aware because their control is too good. If everything works out
roughly the way you want, and the only things that don't work out is whether your football team
wins, then you will mostly have models about these domains. And it's only when, for instance,
your fundamental relationships to the world around you don't work, because the ideology
of your country is insane, and the other kids are not nerds and don't understand why you understand physics, and why you want to understand physics, and you don't understand why somebody would not want to understand physics, that you start building models of yourself. So we brought up neurons in the brain as reinforcement
learning agents. And there have been some successes, as you brought up, with Go, with AlphaGo and AlphaZero, with ideas of self-play, which I think are incredibly interesting: systems improving in an automated way by playing other systems, in a particular construct of a game, that are a little bit better than themselves, and thereby improving continuously. All the competitors in the game are improving gradually, so each stays just challenging enough, and each learns from the process of the competition. Do you have hope for that reinforcement learning
process to achieve greater and greater level of intelligence? So we talked about different
ideas in AI that need to be solved. Is RL a part of that process of trying to create an
AGI system? So definitely forms of unsupervised learning, but there are many algorithms that
can achieve that. And I suspect that ultimately the algorithms that work, there will be a class
of them or many of them, and they might have small differences of magnitude and efficiency.
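The self-play loop described a moment ago, agents improving by playing copies of themselves, has a classic minimal instance in regret matching: two identical agents play rock-paper-scissors against each other, each shifting probability toward actions it regrets not having played, and their average strategies drift toward the game's equilibrium. This is a generic sketch of the idea, not anything specific to AlphaZero.

```python
import random

# Rock-paper-scissors payoffs for player A: U[a][b], moves 0/1/2 = rock/paper/scissors.
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

class RegretMatcher:
    """Minimal regret-matching agent: play actions in proportion to positive regret."""
    def __init__(self, n=3):
        self.regret = [0.0] * n
        self.strategy_sum = [0.0] * n

    def strategy(self):
        pos = [max(r, 0.0) for r in self.regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1.0 / len(pos)] * len(pos)

    def act(self, rng):
        s = self.strategy()
        for i, p in enumerate(s):
            self.strategy_sum[i] += p  # accumulate for the average strategy
        return rng.choices(range(len(s)), weights=s)[0]

    def observe(self, my_move, payoff_per_action):
        earned = payoff_per_action[my_move]
        for a, p in enumerate(payoff_per_action):
            self.regret[a] += p - earned  # regret for not having played a

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [s / total for s in self.strategy_sum]

rng = random.Random(0)
a, b = RegretMatcher(), RegretMatcher()
for _ in range(20000):  # self-play: two copies of the same algorithm
    ma, mb = a.act(rng), b.act(rng)
    a.observe(ma, [U[x][mb] for x in range(3)])
    b.observe(mb, [-U[ma][x] for x in range(3)])  # zero-sum: B's payoff is -A's

# The average strategies drift toward the uniform equilibrium (1/3, 1/3, 1/3).
print([round(p, 2) for p in a.average_strategy()])
```

The per-round strategies may keep cycling; it's the average over the whole history of self-play that converges, which is the same reason self-play systems evaluate averaged or checkpointed policies.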
But eventually what matters is the type of model that you form. And the types of models that we
form right now are not sparse enough. What does it mean to be sparse?
So it means that ideally every potential model state should correspond to a potential world
state. So basically if you vary states in your model, you always end up with valid world states.
And our mind is not quite there. So an indication is basically what we see in dreams. The older
we get, the more boring our dreams become because we incorporate more and more constraints that we
learned about how the world works. So many of the things that we imagine to be possible as children
turn out to be constrained by physical and social dynamics. And as a result, fewer and fewer things
remain possible. And it's not because our imagination scales back, but the constraints
under which it operates become tighter and tighter. And so the constraints under which
our neural networks operate are almost limitless, which means it's very difficult to get a neural
network to imagine things that look real. So I suspect part of what we need to do is we probably
need to build dreaming systems. I suspect that part of the purpose of dreams is similar to a
generative adversarial network, to learn certain constraints. And then it produces alternative
perspectives on the same set of constraints. So you can recognize it under different circumstances.
Maybe we have flying dreams as children, because we recreate the objects that we know and the maps
that we know from different perspectives, which also means from a bird's eye perspective.
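The analogy to generative adversarial networks can be caricatured in a tiny scalar toy: a one-parameter generator proposes samples, a logistic-regression discriminator learns what separates them from "real" data, and the generator is nudged toward satisfying the learned constraint. Everything here, the target distribution, the learning rates, is an invented illustration, not a model of dreaming.

```python
import math
import random

rng = random.Random(1)

# "World" data: samples obeying a constraint the dreamer must learn
# (here just a Gaussian around 4.0 -- an arbitrary stand-in).
def real_sample():
    return rng.gauss(4.0, 1.0)

# Generator: proposes samples from N(mu, 1); mu is its only parameter.
mu = 0.0

# Discriminator: logistic regression D(x) = sigmoid(w*x + b),
# trained to output 1 on real samples and 0 on generated ones.
w, b = 0.0, 0.0
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

lr_d, lr_g = 0.05, 0.05
for step in range(5000):
    x_real, x_fake = real_sample(), rng.gauss(mu, 1.0)
    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(w * x + b)
        w += lr_d * (label - p) * x
        b += lr_d * (label - p)
    # Generator ascent on log D(fake): for a reparameterized sample
    # x_fake = mu + noise, d/dmu log D(x_fake) = w * (1 - D(x_fake)).
    p = sigmoid(w * x_fake + b)
    mu += lr_g * (1.0 - p) * w

print(round(mu, 1))  # mu should have drifted toward the real mean of 4.0
```

The analogue of "boring adult dreams" is the endgame of this loop: the generator's proposals already satisfy the discriminator's learned constraints, so there is little gradient left to push against.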
So I mean, aren't we doing that anyway? I mean, not just with our eyes closed when we're sleeping, aren't we constantly running dreams and simulations in our mind as we try to interpret
the environment? I mean, sort of considering all the different possibilities, the way we interact
with the environment seems like essentially, like you said, sort of creating a bunch of
simulations that are consistent with our expectations, with our previous experiences,
with the things we just saw recently. And through that hallucination process, we are able to then
somehow stitch together what actually we see in the world with the simulations that match it well
and thereby interpret it. I suspect that your brain and my brain are slightly unusual in this regard,
which is probably what got you into MIT. So there's an obsession with constantly pondering possibilities and solutions to problems. Oh, stop it. But I'm not talking about intellectual stuff. I'm talking about just doing the kind of stuff it takes to walk and not fall. Yes, this is largely automatic.
Yes, but the process is, I mean... It's not complicated. It's relatively easy to build a neural network that in some sense learns the dynamics. The fact that we haven't done it right so far doesn't mean it's hard, because you can see that a biological organism does it with relatively few neurons. So basically, you build a bunch of neural oscillators that entrain themselves with the dynamics of your body, in such a way that the regulator becomes isomorphic, in its model, to the dynamics that it regulates. And then it's automatic. And it's only interesting in the sense that it captures attention when the system is off. See, but thinking of the kind of mechanism
that's required to do walking as a controller, as a neural network, I think it's a compelling notion,
but it discards quietly or at least makes implicit the fact that you need to have something like
common sense reasoning to walk. It's an open question whether you do or not. But my intuition
is to act in this world, there's a huge knowledge base that's underlying it somehow. There's so
much information of the kind we have never been able to construct in neural networks or
in artificial intelligence systems period, which is like it's humbling, at least in my imagination,
the amount of information required to act in this world humbles me. And I think saying that
neural networks can accomplish it is missing the fact that we don't have yet a mechanism for
constructing something like common sense reasoning. What's your sense about to linger on the idea of
what kind of mechanism would be effective at walking? You said just in neural network,
not maybe the kind we have, but something a little bit better, we'll be able to walk easily.
Don't you think it also needs to know a huge amount of knowledge that's represented under
the flag of common sense reasoning? How much common sense knowledge do we actually have? Imagine
that you are really hardworking throughout your whole life and you form two new concepts every half hour,
so you end up with something like a million concepts because you don't get that old.
So a million concepts, that's not a lot.
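The back-of-the-envelope arithmetic here checks out: assuming 16 waking hours a day and an 80-year run (both assumed figures, not from the conversation), two concepts every half hour lands on the order of a million.

```python
# Assumed figures (not from the conversation): 16 waking hours/day, 80 years.
concepts_per_hour = 2 * 2          # two new concepts every half hour
waking_hours_per_day = 16
years = 80

lifetime_concepts = concepts_per_hour * waking_hours_per_day * 365 * years
print(f"{lifetime_concepts:,}")    # 1,868,800 -- order of a million
```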
So it's not just a million concepts. I personally think it might be much more than
a million. But if you think just about the numbers, you don't live that long. If you
think about how many cycles do your neurons have in your life, it's quite limited. You don't get
that old. Yeah, but the powerful thing is the number of concepts, and they're probably deeply
hierarchical in nature. The relations between them, as you described, are the key thing. So it's
like even if it's a million concepts, the graph of relations that's formed and some kind of
perhaps some kind of probabilistic relationships, that's what common sense reasoning is, the
relationship between things. Yeah, so in some sense, I think of the concepts as the address space
for our behavior programs. And the behavior programs allow us to recognize objects and
interact with them, also mental objects. And a large part of that is the physical world that
we interact with, which is this res extensa thing, which is basically navigation of information in
space. And basically, it's similar to a game engine. It's a physics engine that you can use to
describe and predict how things that look a particular way, that feel a particular way when you
touch them, their proprioception, their auditory perception and so on,
how they work out. So basically, the geometry of all these things. And this is probably 80%
of what our brain is doing is dealing with that with this real time simulation. And by itself,
a game engine is fascinating. But it's not that hard to understand what it's doing, right? And
our game engines are already in some sense, approximating the fidelity of what we can perceive.
So if we put on an Oculus Quest, we get something that is still relatively crude with
respect to what we can perceive, but it's also in the same ballpark already, right? It's just a
couple of orders of magnitude away from saturating our perception in terms of the complexity that it
can produce. So in some sense, it's reasonable to say that the computer that you can buy and put
into your home is able to give a perceptual reality that has a detail that is already in the same
ballpark as what your brain can process. And everything else is ideas about the world. And
I suspect that they are relatively sparse. And also the intuitive models that we form
about social interaction. Social interaction is not so hard. It's just hard for us nerds,
because we all have our wires crossed, so we need to deduce them. But the wires are present in most
social animals. So it's an interesting thing to notice that many domestic social animals,
like cats and dogs, have better social cognition than children.
Right. I hope so. I hope it's not that many concepts, fundamentally, that it takes to exist in this
world, to do social interaction. For me, it's more that I'm afraid, because this idea that we only
appear so complex to each other because we are so stupid is a little bit depressing.
Yeah. To me, that's inspiring, if we're indeed as stupid as it seems.
The thing is, our brains don't scale, and the information processing systems that we build tend to
scale very well. Yeah. But one of the things that worries me is that the fact that the brain
doesn't scale means that that's actually a fundamental feature of the brain. All the flaws
of the brain, everything we see as limitations: perhaps the constraints on the system could be
a fundamental requirement of its power, which is different from our current
understanding of intelligent systems where scale, especially with deep learning, especially with
reinforcement learning, the hope behind OpenAI and DeepMind, all the major results really
have to do with huge compute. And yeah. It could also be that our brains are so small not just
because they take up so much glucose in our body, like 20% of the glucose, but because they don't
arbitrarily scale. There are some animals like elephants which have larger brains than us,
and they don't seem to be smarter. Elephants seem to be autistic. They have very, very good
motor control, and they're really good with details, but they really struggle to see the big
picture. So you can make them recreate drawings, stroke by stroke, they can do that, but they
cannot reproduce a still life. So they cannot make a drawing of a scene that they see, they will
always only be able to reproduce a line drawing, at least as far as I could see from the
experiments. Why is that? Maybe smarter elephants would meditate themselves out of existence,
because their brains are too large. So basically the elephants that were not autistic,
they didn't reproduce. Yeah. So we have to remember that the brain is fundamentally
interlinked with the body in our human and biological system. Do you think that AGI systems,
that we try to create a greater intelligence systems, would need to have a body?
I think they should be able to make use of a body if you give it to them.
But I don't think that they fundamentally need a body. So I suspect if you can interact with the
world by moving your eyes and your head, you can make controlled experiments. And this allows you
to have many magnitudes fewer observations in order to reduce the uncertainty in your models.
So you can pinpoint the areas in your models where you're not quite sure and you just move your head
and see what's going on over there and you get additional information. If you just have to use
YouTube as an input and you cannot do anything beyond this, you probably need just much more data.
But we have much more data. So if you can build a system that has enough time and attention to
browse through all of YouTube and extract all the information that there is to be found,
I don't think there's an obvious limit to what it can do.
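The gap between controlled experiments and passive observation can be illustrated with a toy example of my own (not something from the conversation): locating a boundary among a million positions takes roughly a million passive observations, but only about twenty if each observation is chosen to halve the remaining uncertainty.

```python
def passive_scan(boundary, n):
    """Observe positions 0, 1, 2, ... in order until the boundary is reached."""
    observations = 0
    for x in range(n):
        observations += 1
        if x >= boundary:
            break
    return observations

def active_probe(boundary, n):
    """Choose each observation so it halves the remaining uncertainty."""
    lo, hi = 0, n
    observations = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        observations += 1
        if mid >= boundary:   # a controlled experiment: look exactly here
            hi = mid
        else:
            lo = mid
    return observations

n = 1_000_000
boundary = 747_000
print(passive_scan(boundary, n), active_probe(boundary, n))  # ~747001 vs ~20
```

The point is only about the scaling: being able to decide where to look next buys you an exponential reduction in the number of observations needed.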
Yeah. But it seems that the interactivity is a fundamental thing that the physical body allows
you to do. But let me ask, on that topic: that's what a body is, allowing the brain to touch
things and move things and interact, whether the physical world exists or not, with some
interface to the physical world. What about a virtual world?
Do you think we can do the same kind of reasoning, consciousness, intelligence
if we put on a VR headset and move over to that world? Do you think there's any fundamental
difference between the interface to the physical world that is here in this hotel, and if we
were sitting in the same hotel in a virtual world? The question is, does this non-physical world or
this other environment entice you to solve problems that require general intelligence?
If it doesn't, then you probably will not develop general intelligence. And arguably,
most people are not generally intelligent because they don't have to solve problems that make them
generally intelligent. And even for us, it's not yet clear if we are smart enough to build AI and
understand our own nature to this degree. So it could be a matter of capacity. And for most people,
it's in the first place a matter of interest. They don't see the point, because the benefits of
attempting this project are marginal, because you're probably not going to succeed, and the cost
of trying to do it requires complete dedication of your entire life. But it seems like the
possibility is what you can do in a virtual world. So imagine that is much greater than you can in
the real world. So imagine a situation, maybe an interesting option for me: if somebody came to
me and offered, from now on you can only exist in the virtual world. So you put on this headset,
and we'll make sure to connect your body up in a way that when you eat in the virtual world,
your body will be nourished in the same way. So you're aligning incentives between our common
sort of real world and the virtual
world. But then the possibilities become much bigger. Like I could be other kinds of creatures
that could do, I can break the laws of physics, we know them, I could do a lot. I mean, the
possibilities are endless, right? As far as we think, it's an interesting thought whether,
like what existence would be like, what kind of intelligence would emerge there, what kind of
consciousness, what kind of maybe greater intelligence, even in me, Lex, even at this stage
in my life, if I spend the next 20 years in that world to see how that intelligence emerges.
And if I was, if that happened at the very beginning before I was even cognizant of my
existence in this physical world, it's interesting to think how that child would develop. And the way
virtual reality and digitization of everything is moving, it's not completely out of the realm
of possibility that some part of our lives, if not the entirety, will be lived in a virtual
world, to a greater degree than we currently live on Twitter and social media
and so on. Does something draw you, intellectually or naturally, in terms of thinking about AI,
to this virtual world where there are more possibilities? I think that currently it's a
waste of time to deal with the physical world before we have mechanisms that can automatically
learn how to deal with it. The body gives you a second order agency. What constitutes the body
is the things that you can directly control. Third order are tools. And the second order
is the things that are basically always present, but you operate on them with first order things
which are mental operators. And the zero order is in some sense the direct sense of what you're
deciding. So you observe yourself initiating an action. There are features that you interpret
as the initiation of an action. Then you perform the operations that you perform to make that happen.
And then you see the movement of your limbs and you learn to associate those and thereby model
your own agency over this feedback. But the first feedback that you get is from this first order
thing already. Basically, you decide to think a thought and the thought is being thought. You
decide to change the thought and you observe how the thought is being changed. And in some sense,
this is, you could say, an embodiment already. And I suspect it's sufficient as an embodiment
or intelligence. And so it's not that important, at least at this time, to consider variations
in the second order. But the thing that you also mentioned just now is physics that you
could change in any way you want. So you need an environment that puts up resistance against you.
If there's nothing to control, you cannot make models. There needs to be a particular way that
resists you. And by the way, your motivation is usually outside of your mind, and it resists you.
Your motivation is what gets you up in the morning, even though it would be much less work to stay
in bed. So it's basically forcing you to resist the environment, and it forces your mind to serve it,
to serve this resistance to the environment. So in some sense, it is also putting up resistance
against the natural tendency of the mind to not do anything. Yeah. So some of that resistance,
just like you described with motivation, is like in the first order, it's in the mind.
Some resistance is in the second order, like the actual physical objects pushing against you,
so on. It seems that the second order stuff in virtual reality could be recreated.
Of course. But it might be sufficient that you just do mathematics and mathematics is already
putting up enough resistance against you. So basically, just with an aesthetic motive,
this could maybe be sufficient to form a type of intelligence. It would probably not be a very
human intelligence, but it might be one that is already general. So to mess with this zero
order, maybe first order, what do you think about ideas of brain-computer interfaces? So
again, returning to our friend Elon Musk and Neuralink, a company that's trying to, of course,
there's a lot of trying to cure diseases and so on with a near term. But the long term vision
is to add an extra layer to basically expand the capacity of the brain connected to the
computational world. Do you think one that's possible, two, how does that change the fundamentals
of the zeroth order and the first order? It's technically possible, but I don't see that the
FDA would ever allow me to drill holes in my skull to interface my neocortex the way Musk envisions.
So at the moment, I can do horrible things to mice, but I'm not able to do useful things to
people except maybe at some point down the line in medical applications. So this thing that we
are envisioning, which means recreational brain-computer interfaces, is probably not going to
happen in the present legal system. I love it how I'm asking you out
there philosophical and sort of engineering questions. And for the first time ever, you jumped
to the legal, to the FDA. There would be enough people crazy enough to have holes drilled
in their skull to try a new type of brain-computer interface. And also, if it works, the FDA
will approve it. I work a lot with autonomous vehicles. Yes, you can say that it's going to be
a very difficult regulatory process of approving autonomous vehicles, but it doesn't mean
autonomous vehicles are
never going to happen. No, they will totally happen as soon as we create jobs for at least two
lawyers and one regulator per car. Lawyers, it's like lawyers are the fundamental substrate of
reality. In the US, it's a very weird system. It's not universal in the world.
The law is a very interesting piece of software once you realize it, right? These circuits are
in some sense streams of software, and it largely works by exception handling. So you make decisions
on the ground and they get synchronized with the next level structure as soon as an exception is
being thrown. So it escalates the exception handling. The process is very expensive, especially
since it incentivizes the lawyers to produce work for lawyers. Yes, so the exceptions are
actually incentivized to fire often. But to return, outside of lawyers, is there anything
interesting, insightful about the possibility of this extra layer of intelligence added to the
brain? I do think so, but I don't think that you need technically invasive procedures to do so.
We can already interface with other people by observing them very, very closely and getting
in some kind of empathetic resonance. I'm not very good at this, but I noticed that people
are able to do this to some degree. And it basically means that we model an interface
layer of the other person in real time. And it works despite our neurons being slow because
most of the things that we do are built on periodic processes. So you just need to
entrain yourself with the oscillation that happens. And if the oscillation itself changes
slowly enough, you can basically follow along. Right.
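What "entraining yourself with an oscillation" amounts to can be sketched with a single driven phase oscillator, Kuramoto-style; the frequencies, coupling strength, and step sizes here are illustrative assumptions of mine, not numbers from the conversation.

```python
import math

def entrain(drive_freq, natural_freq, coupling=1.5, dt=0.01, steps=20000):
    """One phase oscillator (Kuramoto-style) driven by an external rhythm.

    The coupling term pulls the oscillator's phase toward the drive's
    phase; once locked, the phase difference stays constant, so the
    oscillator carries a running model of the rhythm it follows.
    """
    theta = 0.0  # oscillator phase
    phi = 0.0    # phase of the external (body, or other person's) rhythm
    for _ in range(steps):
        theta += dt * (2 * math.pi * natural_freq
                       + coupling * math.sin(phi - theta))
        phi += dt * 2 * math.pi * drive_freq
    return (phi - theta) % (2 * math.pi)  # constant once entrained

# an oscillator with a mismatched natural frequency still locks to the drive
lag = entrain(drive_freq=1.0, natural_freq=1.1)
```

Once the phase difference settles to a constant, the oscillator has in effect become a model of the rhythm it follows, which is the same point made earlier about the walking regulator.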
But the bandwidth of the interaction, it seems like you can do a lot more computation
when there's... Yes, of course. But the other thing is that the bandwidth that our brain,
our own mind is running on, is actually quite slow. So the number of thoughts that I can
productively think in any given day is quite limited. But it's much...
If I had the discipline to write them down and the speed to write them down, maybe it would be a
book every day or so. But if you think about the computers that we can build, the magnitudes at
which they operate, this would be nothing. It's something that it can put out in a second.
Well, I don't know. So it's possible the number of thoughts you have in your brain is...
It could be several orders of magnitude higher than what you're possibly able to express through
your fingers or through your voice. Most of them are going to be repetitive because they...
How do you know that? Because they have to solve the same problems every day.
When I walk, they are going to be processes in my brain that model my walking pattern and
regulate them and so on. But it's going to be pretty much the same every day.
But that could be because... Every step.
But I'm talking about intellectual reasoning. Thinking, so the question, what is the best
system of government? So you sit down and start thinking about that. One of the constraints is
that you don't have access to a lot of... You don't have access to a lot of facts, a lot of studies.
You have to do... You always have to interface with something else to learn more to aid in
your reasoning process. If you can directly access all of Wikipedia in trying to understand what is
the best form of government, then every thought won't be stuck in a loop. Every thought that
requires some extra piece of information will be able to grab it really quickly. That's the
possibility of... If the bottleneck is literally the information, if the bottleneck of breakthrough
ideas is just being able to quickly access huge amounts of information, then the possibility of
connecting your brain to the computer could lead to totally new breakthroughs. You can think of
mathematicians being able to just up the orders of magnitude of power in their reasoning about
mathematical truths. What if humanity has already discovered the optimal form of government through
a evolutionary process? There is an evolution going on. What we discover is that maybe the
problem of government doesn't have stable solutions for us as a species because we are not designed
in such a way that we can make everybody conform to them. But there could be solutions that work
under given circumstances or that are the best for certain environment and depends on, for instance,
the primary forms of ownership and the means of production. If the main means of production is land,
then the forms of government will be regulated by the landowners and you get a monarchy. If you
also want to have a form of government in which you depend on some form of slavery, for instance,
where the peasants have to work very long hours for very little gain, so very few people can have
plumbing, then maybe you need to promise them to get paid in the afterlife over time. You need
a theocracy. For much of human history in the West, we had a combination of monarchy and theocracy
that was our form of governance. At the same time, the Catholic Church implemented game
theoretic principles. I recently reread Thomas Aquinas. It's very interesting to see this because
he was not a dualist. He was translating Aristotle in a particular way, designing an operating
system for the Catholic society. He says that basically people are animals in very much the
same way as Aristotle envisions them, which is basically organisms with cybernetic control. Then he says
that there are additional rational principles that humans can discover and everybody can discover
them so they are universal. If you are sane, you should understand, you should submit to them
because you can rationally deduce them. These principles are roughly, you should be willing
to self-regulate correctly. You should be willing to do correct social regulation. It's
interorganismic. You should be willing to act on your models. You have skin in the game.
You should have goal rationality. You should be choosing the right goals to work on. Basically,
these three rational principles, goal rationality, he calls prudence or wisdom. The social regulation
is justice, the correct social one; the internal regulation is temperance; and the willingness to
act on your models, I think, is courage. Then he says that there are additionally
to these four cardinal virtues, three divine virtues. These three divine virtues cannot be
rationally deduced, but they reveal themselves by the harmony, which means if you assume them
and you extrapolate what's going to happen, you will see that they make sense. It's often been
misunderstood as, God has to tell you that these are the things, basically that there's something
nefarious going on, a Christian conspiracy that forces you to believe some guy with a long beard
who discovered this. But these principles are relatively simple. Again, it's for the high-level
organization, for the resulting civilization that you form. Commitment to unity. Basically,
you serve this higher-larger thing, this structural principle on the next level,
and he calls that faith. Then there needs to be commitment to shared purpose. Basically,
this global reward that you try to figure out what that should be and how you can facilitate this,
and this is love. The commitment to shared purpose is the core of love. You see this sacred thing
that is more important than your own organismic interests in the other. You serve this together
and this is how you see the sacred in the other. The last one is hope, which means you need to
be willing to act on that principle without getting rewards in the here and now, because it
doesn't exist yet. Then you start out building the civilization. You need to be able to do this
in the absence of its actual existence yet, so it can come into being.
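As a summary, the taxonomy just laid out can be tabulated as a plain mapping; the wording is a paraphrase of this conversation, not Aquinas's own.

```python
# Four cardinal virtues: rationally deducible principles of self-organization.
cardinal_virtues = {
    "goal rationality (choosing the right goals)": "prudence",
    "correct social regulation": "justice",
    "correct self-regulation": "temperance",
    "willingness to act on your models": "courage",
}

# Three divine virtues: not deducible, but revealed by the harmony of
# their consequences when you assume them and extrapolate.
divine_virtues = {
    "commitment to unity (serving the larger structure)": "faith",
    "commitment to shared purpose": "love",
    "acting on the principle before it exists in the here and now": "hope",
}
```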
So the way it comes into being is by you accepting those notions and then you see
these three divine concepts and you see them realized.
Another problem is that divine is a loaded concept in our world, because we are outside of this
cult and we are still scarred from breaking free of it. But the idea is basically we need to have
a civilization that acts as an intentional agent, like an insect state. We are not actually a tribal
species. We are a state building species. What enabled state building is basically the formation
of religious states and other forms of rule-based administration in which the individual doesn't
matter as much as the rule or the higher goal. We got there by the question, what's the optimal
form of governance? So I don't think that Catholicism is the optimal form of governance,
because it's obviously on the way out, at least for the present type of society that we are in.
Religious institutions don't seem to be optimal to organize us. So what we discovered right now
that we live in in the West is democracy. And democracy is the rule of oligarchs that are
the people that currently own the means of production, that is administered not by the
oligarchs themselves, because there's too much disruption. We have so much innovation
that we have in every generation new means of production that we invent and corporations die
usually after 30 years or so, and something else takes the leading role in our societies.
So it's administered by institutions. And these institutions themselves are not elected,
but they provide continuity. And they are led by electable politicians. And this makes it possible
that you can adapt to change without having to kill people. For instance, if people think that
the current government is too corrupt or not up to date, you can just elect new people. Or if a
journalist finds out something inconvenient
about the institution and the institution has no plan B like in Russia, the journalist has to
die. This is when you run society by the deep state. So ideally, you have an administration layer
that you can change if something bad happens. So you will have a continuity in the whole thing.
And this is the system that we came up in the West. And the way it's set up in the US is largely
a result of low level models. So it's mostly just second, third order consequences that people
are modeling in the design of these institutions. It's a relatively young society that doesn't
really take care of the downstream effects of many of the decisions that are being made.
And I suspect that AI can help us with this in a way, if you can fix the incentives.
The society of the US is a society of cheaters. Basically, cheating is often
indistinguishable from innovation, and we want to encourage innovation.
Can you elaborate on what you mean by cheating? It's basically people do things that they know
are wrong. It's acceptable to do things that you know are wrong in the society to a certain degree.
You can, for instance, suggest some non-sustainable business models and implement them.
Right. But you're always pushing the boundaries. I mean, you're-
And yes, this is seen as a good thing largely. Yes.
And this is different from other societies. So for instance, social mobility is an aspect
of this. Social mobility is the result of individual innovation that would not be
sustainable at scale for everybody else. Right.
Normally, you should not go up. You should go deep, right? We need bakers. And indeed,
we are very good bakers. But in a society that innovates, maybe you can replace all the bakers
with a really good machine. Right. And that's not a bad thing. And it's a thing that made the US
so successful, right? But it also means that the US is not optimizing for sustainability,
but for innovation. And so, as the evolutionary process is unrolling, it's not obvious that that
will be better long-term. It has side effects. So basically, if you cheat, you will have a certain
layer of toxic sludge that covers everything, and that is a result of
cheating. And we have to unroll this evolutionary process to figure out if these side effects
are so damaging that the system is horrible, or if the benefits actually outweigh the negative
effects. How do we get to the question of which system of government is best? I'm trying to
trace back the last five minutes. I suspect that we can find a way back to AI by thinking about
the way in which our brain has to organize itself. In some sense, our brain is a society of neurons.
Our mind is a society of behaviors. And they need to be organizing themselves into a structure
that implements regulation. And government is social regulation. We often see government
as the manifestation of power or local interest, but it's actually a platform for negotiating the
conditions of human survival. And this platform emerges over the current needs and possibilities
in the trajectory that we have. So given the present state, there are only so many options on
how we can move into the next state without completely disrupting everything. And we mostly
agree that it's a bad idea to disrupt everything because it will endanger our food supply for a
while and the entire infrastructure and fabric of society. So we do try to find natural transitions.
And there are not that many natural transitions available at any given point.
What do you mean by natural transitions? So we try to not have revolutions if we can have it.
Right. So speaking of revolutions and the connection between government systems in the mind,
you've also said that in some sense becoming an adult means you take charge
of your emotions. Maybe you never said that. Maybe I just made that up. But in the context
of the mind, what's the role of emotion? And what is it? First of all, what is emotion? What's its
role? It's several things. So psychologists often distinguish between emotion and feeling, and in
common parlance we don't. I think that emotion is a configuration of the cognitive system.
And that's especially true for the lowest level for the affective state. So when you have an affect,
it's the configuration of certain modulation parameters like arousal, valence, your attentional
focus, whether it's wide or narrow, interoception or exteroception, and so on. And all these
parameters together put you in a certain way in which you relate to the environment and to yourself.
And this is in some sense an emotional configuration. In the more narrow sense, an emotion is an
affective state that has an object. And the relevance of that object is
given by motivation. And motivation is a bunch of needs that are associated with rewards,
things that give you pleasure and pain. And you don't actually act on your needs, you act on
models of your needs, because when the pleasure and pain manifest, it's too late; you've already
done everything. So you act on expectations of what will give you pleasure and pain. And these are
your purposes. The needs don't form a hierarchy, they just coexist and compete. And your organism
has to- your brain has to find a dynamic homeostasis between them. But the purposes need to be
consistent. So you basically can create a story for your life and make plans. And so we organize
them all into hierarchies. And there is not a unique solution for this. Some people eat to
make art, and other people make art to eat. And they might end up doing the same things,
but they cooperate in very different ways. Because their ultimate goals are different,
and we cooperate based on shared purpose. Everything else that is not cooperation on shared
purpose is transactional. I don't think I understood the last piece of
achieving the homeostasis. Are you distinguishing between the experience of emotion and the
expression of emotion? Of course. So the experience of emotion is a feeling. And in this sense, what
you feel is an appraisal that your perceptual system has made of the situation at hand. And
it makes this based on your motivation and on estimates, not yours, but those of the subconscious
geometric parts of your mind that assess the situation in the world with something like a
neural network. And this neural network is making itself known to the symbolic parts of your mind,
to your conscious attention, by mapping them as features into a space. So what you will feel
about your emotion is a projection usually into your body map. So you might feel anxiety in your
solar plexus, and you might feel it as a contraction, which is all geometry. Your body map is the
space that is always instantiated, always available. So it's a very obvious cheat if the
non-symbolic parts of your brain try to talk to your symbolic parts of your brain to map
the feelings into the body map. And then you perceive them as pleasant and unpleasant,
depending on whether the appraisal has a negative or positive valence. And then you have different
features of them that give you more knowledge about the nature of what you're feeling. So,
for instance, when you feel connected to other people, you typically feel this in your chest
region around your heart. And you feel this as an expansive feeling in which you're
reaching out, right? And it's very intuitive to encode it like this. That's why it's encoded
like this for most people. It's a code in which the non-symbolic
parts of your mind talk to the symbolic ones. And then the expression of emotion is then the
final step that could be sort of gestural or visual and so on. That's part of the communication.
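The configuration view of affect sketched earlier in this answer can be written down as a record; the parameter names follow the conversation, while the numeric ranges and the example values are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class AffectiveState:
    """Affect as a configuration of global modulation parameters."""
    arousal: float          # 0.0 (calm) to 1.0 (activated); range assumed
    valence: float          # -1.0 (unpleasant) to 1.0 (pleasant)
    attention_width: float  # 0.0 (narrow focus) to 1.0 (wide focus)
    exteroception: float    # 0.0 (inward-directed) to 1.0 (outward-directed)

@dataclass
class Emotion:
    """An emotion in the narrow sense: an affective state with an object."""
    affect: AffectiveState
    obj: str                # what the emotion is about
    body_projection: str    # where the feeling maps into the body map

# a hypothetical example, not a claim about real calibration
anxiety = Emotion(
    affect=AffectiveState(arousal=0.8, valence=-0.6,
                          attention_width=0.2, exteroception=0.3),
    obj="an upcoming talk",
    body_projection="a contraction in the solar plexus",
)
```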
This probably evolved as part of an adversarial communication. So as soon as you started to
observe the facial expression and posture of others to understand what emotional state they're in,
others started to use this as signaling and also to subvert your model of their emotional state.
So we now look at the inflections, at the difference from the standard face that they're going to
make in this situation. When you are at a funeral, everybody expects you to make a solemn face.
But the solemn face doesn't express whether you're sad or not. It just expresses that you
understand what face you have to make at the funeral, that nobody should know whether you are
triumphant. So when you try to read the emotion of another person, you try to look at the delta
between a truly sad expression and the thing that is animating this face behind the curtain.
So the interesting thing is,
having done this podcast and the video component, one of the things I've learned is that I'm
Russian and I just don't know how to express emotion on my face, and I see that as weakness.
But people look to me: after you say something, they look to my face to help them see how they
should feel about what you said. Which is fascinating, because then they'll often comment on,
why did you look bored? Or why did you particularly enjoy that part? Or why did you whatever?
It makes me cognizant that you're basically saying a bunch of brilliant things, but I'm part of
the play where you're the key actor, and by making my facial expressions I'm telling the
narrative of what the big point is. Which is fascinating. It makes me cognizant that I'm
supposed to be making facial expressions.
Even this conversation is hard, because my preference would be to wear a mask with sunglasses, where I could just listen. I understand this, because it's intrusive to interact with others this way, and basically Eastern European societies have a taboo against that, especially Russia, the further you go to the East. And in the US it's the opposite. You're expected to be hyperanimated in your face, and you're also expected to show positive affect. And if you show positive affect without a good reason in Russia, people will think you are a stupid, unsophisticated person. Exactly. And here positive affect without reason either goes appreciated or goes unnoticed.
No, it's the default. It's being expected. Everything is amazing. Have you seen the Lego movie? No? There was a diagram where somebody compared the appraisals that exist in the US and Russia. So you have your bell curve, and in the US, the lower 10% is "it's a good start." Everything above the lowest 10% is "amazing." And for Russians, everything below the top 10% is "terrible." And then everything except the top percent is "I don't like it." And the top percent is "so-so." It's funny, but it's kind of true. There's a deeper aspect to this. It's also how
we construct meaning in the US. Usually you focus on the positive aspects and you just
suppress the negative aspects. And in our Eastern European traditions, we emphasize
the fact that if you hold something above the waterline, you also need to put something below
the waterline, because existence by itself is at best neutral. Right. That's the basic intuition. It's at best neutral, or it could be just suffering, the default. There are moments of beauty, but
these moments of beauty are inextricably linked to the reality of suffering. And to not acknowledge
the reality of suffering means that you are really stupid and unaware of the fact that basically
every conscious being spends most of the time suffering. Yeah. You just summarized the ethos
of the Eastern Europe. Yeah. Most of life is suffering with occasional moments of beauty.
And if your facial expressions don't acknowledge the abundance of suffering in the world and in
existence itself, then you must be an idiot. It's an interesting thing when you raise children
in the US and you in some sense preserve the identity of the intellectual and cultural
traditions that are embedded in your own families. And your daughter asks you about Ariel, the mermaid.
And she asks you, why is Ariel not allowed to play with the humans? And you tell her the truth.
She's a siren. Sirens eat people. You don't play with your food. It does not end well.
And then you tell her the original story, which is not the one by Andersen, which is the romantic one. And there's a much darker one, the Undine story. What happened? So Undine is a mermaid
or a water woman. She lives on the ground of a river and she meets this prince and they fall
in love. And the prince really, really wants to be with her. And she says, okay, but the deal is
you cannot have any other woman. If you marry somebody else, you will die, even though you cannot be with me, because obviously you cannot breathe underwater, and you have other things to do than manage your kingdom up here. And eventually, after a few years, he falls in love
with some princess and marries her and she shows up and quietly goes into his chamber and nobody
is able to stop her or willing to do so, because she is fierce. And she comes quietly back out of his chamber, and they ask her, what has happened? What did you do? And she said, I kissed him to
death. All done. And you know the Andersen story, right? In the Andersen story, the mermaid is playing with this prince that she saves, and she falls in love with him, and she cannot live out there. So she gives up her voice and her tail for a human-like appearance so she can walk among the humans. But this guy does not recognize that she is the one that he should marry. Instead, he marries somebody who has a kingdom and economic and political relationships to his own kingdom and so on, as he should. And she dies. Instead, the Disney Little Mermaid story has a
little bit of a happy ending. That's the Western, that's the American way. My own problem is, of course, that I read Oscar Wilde before I read the other things. So I'm indoctrinated, inoculated with this romanticism, and I think that the mermaid is right. You sacrifice your life for romantic love. That's what you do, because if you are confronted with either serving the machine and doing the obviously right thing under the economic and social and other human incentives, or following your heart, then serving the machine is wrong. You should follow your heart. So do you think suffering is fundamental to happiness
along these lines? No. Suffering is the result of caring about things that you cannot change.
And if you are able to change what you care about to those things that you can change,
you will not suffer. But would you then be able to experience happiness? Yes. But happiness itself
is not important. Happiness is like a cookie. When you are a child, you think cookies are very
important and you want to have all the cookies in the world. You look forward to being an adult
because then you have as many cookies as you want, right? Yes. But as an adult, you realize
a cookie is a tool. It's a tool to make you eat vegetables. And once you eat your vegetables,
anyway, you stop eating cookies for the most part because otherwise you will get diabetes
and will not be around for your kids. Yes. But then the scarcity of the cookie, if scarcity is enforced nevertheless, so the pleasure comes from the scarcity.
Yes. But the happiness is a cookie that your brain bakes for itself. It's not made by the
environment. The environment cannot make you happy. It's your appraisal of the environment that makes
you happy. And if you can change the appraisal of the environment, which you can learn to do, then you
can create arbitrary states of happiness. And some meditators fall into this trap. So they
discover the room, the basement room in their brain where the cookies are made,
and they indulge and stuff themselves. And after a few months, it gets really old, and the big
crisis of meaning comes. Because they thought before that their unhappiness was the result of
not being happy enough. So they fixed this, right? They can release the neurotransmitters at will if they train. And then the crisis of meaning pops up at a deeper layer. And the question
is, why do I live? How can I make a sustainable civilization that is meaningful to me? How can
I insert myself into this? And this was the problem that you couldn't solve in the first place.
But at the end of all this, let me then ask that same question. What is
the answer to that? What could the possible answer be of the meaning of life? What could
an answer be? What is it to you? I think that if you look at the meaning of life, you look at what
the cell is. Life is the cell, right? Yes, or this principle, the cell. It's this self-organizing thing that can participate in evolution. In order to make it work, it's a molecular machine. It needs a self-replicator, a negentropy extractor, and a Turing machine.
If any of these parts is missing, you don't have a cell and it is not living, right? And life is
basically the emergent complexity over that principle. Once you have this intelligent super
molecule, the cell, there is very little that you cannot make it do. It's probably the optimal
computronium and especially in terms of resilience. It's very hard to sterilize the planet once it's
infected with life. So the active function of these three components of this super molecule, the cell, is present in the cell, is present in us, and it's just... We are just an expression of the cell. It's
a certain layer of complexity in the organization of cells. So in a way, it's tempting to think of
the cell as a von Neumann probe. If you want to build intelligence on other planets, the best way
to do this is to infect them with cells and wait for long enough and with a reasonable chance,
the stuff is going to evolve into an information processing principle that is general enough to
become sentient. Well, that idea is very akin to sort of the same dream and beautiful ideas that are expressed in cellular automata in their most simple mathematical form. If you just inject the system with some basic mechanisms of replication and so on, basic rules, amazing things will emerge.
And the cell is able to do something that James Trady calls existential design. He points out that
in technical design, we go from the outside in. We work in a highly controlled environment in which
everything is deterministic, like our computers, our labs, or our engineering workshops. And then
we use this determinism to implement a particular kind of function that we dream up and that seamlessly
interfaces with all the other deterministic functions that we already have in our world.
So it's basically from the outside in. And biological systems design from the inside out: a seed will become a seedling by taking some of the relatively unorganized matter around it and turning it into its own structure, thereby subduing the environment. And cells can cooperate
if they can rely on other cells having a similar organization that is already compatible. But
unless that's there, the cell needs to divide to create that structure by itself. So it's a
self-organizing principle that works on a somewhat chaotic environment. And the purpose of life,
in this sense, is to produce complexity. And the complexity allows you to harvest
negentropy gradients that you couldn't harvest without the complexity. And in this sense,
intelligence and life are very strongly connected because the purpose of intelligence is to allow
control under the conditions of complexity. So basically, you shift the boundary of the ordered systems into the realm of chaos. You build bridgeheads into chaos with complexity.
And this is what we are doing. This is not necessarily a deeper meaning. I think the
meaning we have priors for, that we are evolved for; outside of these priors, there is no meaning. Meaning only exists if a mind projects it. That is probably the civilization. I think that
what feels most meaningful to me is to try to build and maintain a sustainable civilization.
And taking a slight step outside of that, we talked about a man with a beard and God. But
something, some mechanism, perhaps, must have planted the seed, the initial seed of the cell,
do you think there is a God? What is a God? And what would that look like?
So maybe there was no spontaneous abiogenesis, in the sense that the first cell formed by some happy random accident, where the molecules just happened to be in the right constellation to each other. But there could also be a mechanism that allows for the random. I mean, there's like
turtles all the way down. There seems to be, there has to be a head turtle at the bottom.
And let's consider something really wild. Imagine, is it possible that a gas giant could
become intelligent? What would that involve? So imagine you have vortices that spontaneously
emerge on the gas giants like big storm systems that endure for thousands of years. And some
of these storm systems produce electromagnetic fields because some of the clouds are ferromagnetic
or something. And as a result, they can change how certain clouds react rather than other clouds
and thereby produce some self-stabilizing patterns that eventually lead to regulation, feedback loops,
nested feedback loops and control. So imagine you have such a thing that basically has emergent,
self-sustaining, self-organizing complexity. And at some point, this wakes up and realizes it's basically Lem's Solaris: I am a thinking planet, but I will not replicate, because I cannot recreate the conditions of my own existence somewhere else. I'm just basically an intelligence that has spontaneously formed because it could. And now it builds a von Neumann probe. And the best von Neumann probe for such a thing might be the cell. So maybe it, because it's very,
very clever and very enduring, creates cells and sends them out. And one of them has infected our
planet. And I'm not suggesting that this is the case, but it would be compatible with the
panspermia hypothesis, and with my intuition that abiogenesis is very unlikely.
It's possible, but you probably need to roll the cosmic dice very often, maybe more often
than there are planetary surfaces. I don't know. So God is just a system that's large enough
that allows randomness. Now, I don't think that God has anything to do with creation.
I think it's a mistranslation of the Talmud into the Catholic mythology. I think that Genesis is
actually the childhood memories of a God. Sorry, the Genesis is the childhood memories of a God.
It's basically a mind that is remembering how it came into being. And we typically interpret
Genesis as the creation of a physical universe by a supernatural being. And I think when you read it, there is light and darkness being created. And then you discover sky and ground and create them. You construct the plants and the animals, and you give everything their
names and so on. That's basically cognitive development. It's a sequence of steps that every
mind has to go through when it makes sense of the world. And when you have children, you can see
how initially they distinguish light and darkness. And then they make out directions in it, and they
discover sky and ground, and they discover the plants and the animals, and they give everything
their name. And it's a creative process that happens in every mind. Because it's not given,
right? Your mind has to invent these structures to make sense of the patterns on your retina.
Also, if there was some big nerd who set up a server and runs this world on it, this would not
create a special relationship between us and the nerd. This nerd would not have the magical power
to give meaning to our existence, right? So this equation of a creator God with the God of meaning
is a sleight of hand. You shouldn't do it. The other one that is done in Catholicism is the
equation of the first mover, the prime mover of Aristotle, which is basically the automaton
that runs the universe. Aristotle says, if things are moving and things seem to be moving here,
something must move them, right? If something moves them, something must move the thing that
is moving it. So there must be a prime mover. This idea to say that this prime mover is a
supernatural being is complete nonsense, right? It's an automaton in the simplest case. So we
have to explain the enormity that this automaton exists at all. But again, we don't have any
possibility to infer anything about its properties except that it's able to produce change in
information, right? So there needs to be some kind of computational principle. This is all there is.
But to say this automaton is identical again with the creator, the first cause, or with the
thing that gives meaning to our life is confusion. Now, I think that what we perceive is the higher
being that we are part of. And the higher being that we are part of is the civilization. It's the
thing in which we have a similar relationship as the cell has to our body. And we have this prior
because we have evolved to organize in these structures. So basically, the Christian God
in its natural form without the mythology, if you undress it, is basically the Platonic form of
the civilization. Is the ideal? Yes, it's this ideal that you try to approximate when you interact
with others, not based on your incentives, but on what you think is right. Wow, we covered a lot
of ground. And we're left with one of my favorite lines, and there are many: happiness is a cookie that the brain bakes for itself. It's been a huge honor and a pleasure to talk to you.
I'm sure our paths will cross many times again. Joscha, thank you so much for talking today.
Really appreciate it. Thank you, Lex. It was so much fun. I enjoyed it. Awesome.
Thanks for listening to this conversation with Joscha Bach. And thank you to our sponsors,
ExpressVPN and Cash App. Please consider supporting this podcast by getting ExpressVPN
at expressvpn.com slash lexpod and downloading Cash App and using code LEX podcast. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And yes, try to figure out how to spell it without the E. And now let me leave you with some words of wisdom from Joscha Bach.
If you take this as a computer game metaphor, this is the best level for humanity to play.
And this best level happens to be the last level as it happens against the backdrop of a dying world.
But it's still the best level. Thank you for listening and hope to see you next time.