Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.



The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand
the structure, function, and the origin of intelligence in the human brain.
He previously wrote a seminal book on the subject titled On Intelligence, and recently
a new book called A Thousand Brains, which presents a new theory of intelligence
that Richard Dawkins, for example, has been raving about, calling the book, quote,
brilliant and exhilarating. I can't read those two words and not think of him saying it in his
British accent. Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast.
As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions in his
new book is that if human civilization were to destroy itself, all of knowledge, all our creations
will go with us. He proposes that we should think about how to save that knowledge in a way that
long outlives us, whether that's on Earth, in orbit around Earth, or in deep space. And then
to send messages that advertise this backup of human knowledge to other intelligent alien
civilizations. The main message of this advertisement is not that we are here, but that we were once
here. This little difference somehow was deeply humbling to me, that we may with some nonzero
likelihood destroy ourselves, and that an alien civilization, thousands or millions of years
from now may come across this knowledge store, and they would only with some low probability even
notice it, not to mention be able to interpret it. And the deeper question here for me is what
information in all of human knowledge is even essential? Does Wikipedia capture it or not at
all? This thought experiment forces me to wonder what are the things we've accomplished and are
hoping to still accomplish that will outlive us? Is it things like complex buildings, bridges,
cars, rockets? Is it ideas like science, physics, and mathematics? Is it music and art? Is it
computers, computational systems, or even artificial intelligence systems? I personally
can't imagine that aliens wouldn't already have all of these things. In fact, much more and much
better. To me, the only unique thing we may have is consciousness itself, and the actual
subjective experience of suffering, of happiness, of hatred, of love. If we can record these experiences
in the highest resolution directly from the human brain, such that aliens will be able to replay
them, that is what we should store and send as a message. Not Wikipedia, but the extremes of
conscious experiences, the most important of which, of course, is love. This is the Lex
Fridman Podcast, and here is my conversation with Jeff Hawkins. We previously talked over two
years ago. Do you think there's still neurons in your brain that remember that conversation,
that remember me and got excited? There's a Lex neuron in your brain that just finally has a purpose?
I do remember our conversation, or I have some memories of it, and I formed additional memories
of you in the meantime. I wouldn't say there's a neuron or neurons in my brain that know you,
but there are synapses in my brain that have formed that reflect my knowledge of you and
the model I have of you in the world. Whether the exact same synapses were formed two years
ago, it's hard to say because these things come and go all the time. One thing to note about
brains is that when you think of things, you often erase the memory and rewrite it again.
Yes, but I have a memory of you, and that's instantiated in synapses. There's a simpler way
to think about it, Lex. We have a model of the world in your head, and that model is continually
being updated. I updated it this morning. You offered me this water, you said it was from the
refrigerator. I remember these things. The model includes where we live, the places we know, the
words, the objects in the world, but it's just a monstrous model, and it's constantly being updated,
and people are just part of that model. We're animals, so are other physical objects,
so are events we've done. There's no special place in my mind for the memories of humans.
I mean, obviously, I know a lot about my wife and friends and so on, but it's not like a special
place for humans or over here, but we model everything, and we model other people's behaviors
too. If I said, there's a copy of your mind in my mind, it's just because I know how humans,
I've learned how humans behave, and I've learned some things about you, and that's part of my
world model. I just also mean the collective intelligence of the human species. I wonder
if there's something fundamental to the brain that enables that, so modeling other humans with
their ideas. You're actually jumping into a lot of big problems. Collective intelligence is a
separate topic that a lot of people like to talk about. We can talk about that,
but so that's interesting. We're not just individuals, we live in society and so on,
but from our research point of view, and so again, let's just talk, we studied the neocortex,
it's a sheet of neural tissue, it's about 75% of your brain. It runs on this very repetitive
algorithm. It's a very repetitive circuit. You can apply that algorithm to lots of different
problems, but it's all underneath, it's the same thing. We're just building this model.
From our point of view, we wouldn't look for these special circuits someplace buried in your brain
that might be related to understanding other humans. It's more like, how do we build a model
of anything? How do we understand anything in the world? Humans are just another part of the
things we understand. There's nothing to the brain that knows the emergent phenomenon of
collective intelligence. Well, I certainly know about that. I've heard the terms, I've read.
No, but that's an idea. Well, I think we have language, which is built into our brains and
that's a key part of collective intelligence. There are some prior assumptions about the world
we're going to live in. When we're born, we're not just a blank slate. Did we evolve to take
advantage of those situations? Yes, but again, we study only part of the brain, the neocortex.
There are other parts of the brain that are very much involved in societal interactions and human
emotions and how we interact and even societal issues about how we interact with other people
when we support them, when we're greedy and things like that. I mean, certainly the brain
is a great place to study intelligence. I wonder if it's the fundamental atom of intelligence.
Well, I would say it's absolutely an essential component, even if you believe in collective
intelligence as, hey, that's where it's all happening. That's what we need to study,
which I don't believe that, by the way. I think it's really important, but I don't think that is
the thing. But even if you do believe that, then you have to understand how the brain works in
doing that. It's more like we are intelligent individuals, and together our intelligence is magnified much more. We can do things that we couldn't do individually, but even as
individuals, we're pretty damn smart and we can model things and understand the world and interact
with it. So to me, if you're going to start someplace, you need to start with the brain,
then you could say, well, how do brains interact with each other? And what is the nature of language?
And how do we share models? I've learned something about the world; how do I share it
with you? Which is really what sort of communal intelligence is. I know something, you know
something. We've had different experiences in the world. I've learned something about brains,
maybe I can impart that to you. You've learned something about physics, and you can impart that
to me. But it all comes down to even just the epistemological question of, well, what is knowledge?
And how do you represent it in the brain? That's where it's going to reside in our writings.
It's obvious that human collaboration, human interaction is how we build societies. But
some of the things you talk about and work on, some of those elements of what makes up an intelligent
entity is there with a single person. Absolutely. I mean, we can't deny that the brain is the core
element here in, at least I think it's obvious, the brain is the core element in all theories of
intelligence. It's where knowledge is represented. It's where knowledge is created. We interact,
we share, we build upon each other's work. But without a brain, you'd have nothing.
There would be no intelligence without brains. And so that's where we start. I got into this field
because I just was curious as to who I am. How do I think? What's going on in my head when I'm
thinking? What does it mean to know something? I can ask what it means for me to know something
independent of how I learned it from you or from someone else or from society. What does it mean for
me to know that I have a model of you in my head? What does it mean to know I know what this
microphone does and how it works physically, even when I can't see it right now? How do I know that?
What does it mean? How do the neurons do that at the fundamental level of neurons and synapses and
so on? Those are really fascinating questions. And I'm just happy to understand those if I could.
So in your new book, you talk about our brain, our mind as being made up of many brains.
So the book is called A Thousand Brains: A New Theory of Intelligence. What is the key idea of this book?
The book has three sections and it has sort of maybe three big ideas. So the first section is
all about what we've learned about the neocortex and that's the Thousand Brain Theory. Just to
complete the picture, the second section is all about AI and the third section is about the future
of humanity. So the Thousand Brain Theory, the big idea there, if I had to summarize into one
big idea, is that we think of the brain, the neocortex, as learning this model of the world.
But what we learned is actually there's tens of thousands of independent modeling systems going
on. And so each, what we call a column in the cortex with about 150,000 of them, is a complete
modeling system. So it's a collective intelligence in your head in some sense. So the Thousand Brain
Theory says, if you ask where do I have knowledge about, you know, this coffee cup or where is the model
of this cell phone? It's not in one place. It's in thousands of separate models that are complementary
and they communicate with each other through voting. So this idea that we have, we feel like
we're one person, you know, that's our experience, and we can explain that. But in reality, there's lots of these, almost like little brains, but they're sophisticated modeling systems, about 150,000 of them in each human brain. And that's a totally different way of thinking about
how the neocortex is structured than we or anyone else thought of even just five years ago.
So you mentioned you started this journey on just looking in the mirror and trying to understand
who you are. So if you have many brains, who are you then? So it's interesting, we have a singular
perception, right? You know, we think, oh, I'm just here, I'm looking at you. But it's, it's composed
of all these things. There's sounds and there's, and there's vision and there's touch and all kinds
of inputs. Yeah, we have the singular perception. And what the Thousand Brain Theory says, we have
these models that are visual models, auditory models, touch models, and so on. But they vote. And so in the cortex, you can think about these columns as being like little grains of rice, 150,000 stacked next to each other. And each one is its
own little modeling system. But they have these long range connections that go between them.
And we call those voting connections or voting neurons. And so the different columns try to
reach the consensus. Like, what am I looking at? Okay, you know, each one has some ambiguity,
but they come to a consensus. Oh, there's a water bottle, I'm looking at. We are only consciously
able to perceive the voting. We're not able to perceive anything that goes under the hood.
So the voting is what we're aware of. The results of the vote. Yeah, the result. Well,
it's, you can imagine it this way. We were just talking about eye movement a moment ago. So as
I'm looking at something, my eyes are moving about three times a second. And with each movement,
a completely new input is coming into the brain. It's not repetitive. It's not shifting around.
It's completely new. I'm totally unaware of it. I can't perceive it. But yet if I looked at the
neurons in your brain, they're going on and off, on and off, on and off, on and off. But the voting
neurons are not. The voting neurons are saying, you know, we all agree, even though I'm looking at
different parts of it, this is a water bottle right now. And that's not changing. And it's in some
position and pose relative to me. So I have this perception of the water bottle about two feet away
from me at a certain pose to me. That is not changing. That's the only part I'm aware of. I
can't be aware of the fact that the inputs from the eyes are moving and changing and all this other stuff is happening. So these long range connections are the part we can be conscious of. The individual activity in each column doesn't go anywhere else. It doesn't get shared anywhere else. It
doesn't, there's no way to extract it and talk about it or extract it and even remember it to say,
oh, yes, I can recall that. So, but these long range connections are the things that are accessible
to language and to our, you know, it's like the hippocampus or our memories, you know, our
short-term memory systems and so on. So we're not aware of 95% or maybe it's even 98% of what's
going on in your brain. We're only aware of this sort of stable, somewhat stable voting outcome
of all these things that are going on underneath the hood.
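To make that voting idea concrete, here is a minimal Python sketch. It is purely illustrative, with invented column guesses and none of the neuroscience: each column holds its own ambiguous set of hypotheses about the object it is sensing, and the long-range votes simply keep whichever hypothesis every column can agree on.

```python
# Toy illustration of voting across cortical columns: each column has its own,
# possibly ambiguous, set of guesses about the object being sensed; the
# long-range "voting" connections settle on the interpretation consistent
# with all of them. Column contents are invented for this example.

def vote(column_hypotheses):
    """Return the set of objects consistent with every column's guesses."""
    consensus = set(column_hypotheses[0])
    for guesses in column_hypotheses[1:]:
        consensus &= set(guesses)
    return consensus

# Three columns, each sensing a different part of the same object.
columns = [
    {"water bottle", "coffee cup", "soda can"},  # feels a curved surface
    {"water bottle", "coffee cup"},              # feels a rim
    {"water bottle"},                            # sees the blue cap
]

print(vote(columns))  # {'water bottle'} -- the stable percept we're aware of
```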
So what would you say is the basic element in the 1000 brains theory of intelligence,
of intelligence? Like, what's the atom of intelligence when you think about it? Is it the
individual brains and then what is a brain? Well, let's, let's, can we just talk about what
intelligence is first, and then we can talk about what the elements are. So in my book,
intelligence is the ability to learn a model of the world, to build internal to your head, a model
that represents the structure of everything, you know, to know what this is a table and that's a
coffee cup and this is a goose neck lamp and all this. To know these things, I have to have a model
in my head. I just don't look at them and go, what is that? I already have internal representations
of these things in my head and I had to learn them. I wasn't born with any of that knowledge.
You were, you know, we have some lights in the room here. I, you know, that's not part of my
evolutionary heritage, right? It's not in my genes. So we have this incredible model and the model
includes not only what things look like and feel like, but where they are relative to each other
and how they behave. I've never picked up this water bottle before, but I know that if I took
my hand on that blue thing and I turn it, it'll probably make a funny little sound as a little
plastic things detach and then it'll rotate and it'll rotate a certain way and it'll come off.
How do I know that? Because I have this model in my head. So the essence of intelligence is our
ability to learn a model and the more sophisticated our model is, the smarter we are. Not that there
is a single intelligence because you can know about, you know a lot about things that I don't
know and I know about things you don't know and we can both be very smart, but we both learned
a model of the world through interacting with it. So that is the essence of intelligence.
Then we can ask ourselves, what are the mechanisms in the brain that allow us to do that? And what
are the mechanisms of learning? Not just the neural mechanisms, what are the general process
for how we learn a model? So that was a big insight for us. It's like, what are the actual things, how do you learn this stuff? It turns out you have to learn it through movement. You can't learn it just by sitting there; that's how we learn. We learn through movement.
So you build up this model by observing things and touching them and moving them and walking
around the world and so on. So either you move or the thing moves. Somehow. Yeah. Obviously,
you can learn things just by reading a book, something like that. But think about if I were
to say, oh, here's a new house. I want you to learn, you know, what do you do? You
have to walk, you have to walk from room to room. You have to open the doors,
look around, see what's on the left, what's on the right. As you do this, you're building a model
in your head. It's just, that's what you're doing. You can't just sit there and say,
I'm going to grok the house. No, you know, or you don't even want to just sit down and read
some description of it, right? Yeah. You literally physically interact with them. The same with
like a smartphone. If I'm going to learn a new app, I touch it and I move things around. I see
what happens when I, when I do things with it. So that's the basic way we learn in the world.
And by the way, when you say model, you mean something that can be used for prediction in
the future. It's used for prediction and for behavior and planning. Right. And does a pretty
good job at doing so. Yeah. Here's the way to think about the model. A lot of people get hung up on
this. So you can imagine an architect making a model of a house, right? So there's a physical
model that's small. And why do they do that? Well, we do that because you can imagine what it
would look like from different angles. Okay. Look from here, look from there. And you can also say,
well, how far to get from the garage to the swimming pool or something like that, right?
You can imagine looking at this. And so what would be the view from this location? So we
build these physical models to let you imagine the future and imagine behaviors.
Now we can take that same model and put it in a computer. So we now, today, they all build
models of houses in a computer and they, and they do that using a set of,
we'll come back to this term in a moment, reference frames, but eventually you assign a
reference frame for the house and you assign different things for the house in different
locations. And then the computer can generate an image and say, okay, this is what it looks
like in this direction. The brain is doing something remarkably similar to this. Surprisingly, it's using reference frames. It's building something similar to a model in a computer,
which has the same benefits of building a physical model. It allows me to say, what would
this thing look like if it was in this orientation? What would likely happen if I pushed this button?
I've never pushed this button before. Or how would I accomplish something? I want to convey
a new idea I've learned. How would I do that? I can imagine in my head, well, I could talk about it.
I could write a book. I could do some podcasts. I could, you know, maybe tell my neighbor,
you know, and I can imagine the outcomes of all these things before I do any of them.
That's what the model lets you do. It lets us plan the future and imagine the consequences
of our actions. Prediction, you asked about prediction. Prediction is not the goal of the
model. Prediction is an inherent property of it. And it's how the model corrects itself.
So prediction is fundamental to intelligence. It's fundamental to building a model and the
model is the intelligence. And let me go back and be very precise about this. Prediction,
you can think of prediction two ways. One is like, hey, what would happen if I did this?
That's one type of prediction. That's a key part of intelligence. But another type of prediction is like,
oh, what's this water bottle going to feel like when I pick it up? You know, and that
doesn't seem very intelligent. But the way to think, one way to think about prediction is
it's a way for us to learn where our model is wrong. So if I picked up this water bottle
and it felt hot, I'd be very surprised. Or if I picked it up and it was very light, I'd be surprised. Or if I turned this top and it didn't open, if I had to turn it the other way, I'd be surprised. And so for all of those I have a prediction, like, okay, I'm going to do it, I'll drink some water. Okay, I do this, there it is, I feel it opening, right?
What if I had to turn it the other way? Or what if it's split in two? Then I say, oh my gosh,
I misunderstood this. I didn't have the right model. This thing, my attention would be drawn
to, I'll be looking at it going, well, how the hell did that happen? Why did it open up that way?
And I would update my model by doing it, just by looking at it and playing around that update
and say, this is a new type of water bottle. So you're talking about sort of complicated things
like a water bottle, but this also applies for just basic vision, just like seeing things.
That's almost like a precondition of just perceiving the world is predicting.
Everything that you see is first passed through your prediction.
Everything you see and feel, in fact, this is the insight I had back in the late 80s,
excuse me, early 80s. And other people have reached the same idea,
is that every sensory input you get, not just vision, but touch and hearing,
you have an expectation about it and a prediction. Sometimes you can predict very
accurately, sometimes you can't. I can't predict the next word that's going to come out of your mouth,
but as you start talking, I'll have better and better predictions. And if you talk about some
topics, I'd be very surprised. So I have this sort of background prediction that's going on all the
time for all of my senses. Again, the way I think about that is this is how we learn. It's more
about how we learn. It's a test of our understanding; our predictions are a test. Is this really a
water bottle? If it is, I shouldn't see a little finger sticking out the side. And if I saw a
little finger sticking out, I was like, what the hell's going on? That's not normal.
That's fascinating. Let me linger on this for a second. It really honestly feels that prediction
is fundamental to everything, to the way our mind operates, to intelligence. So it's just
a different way to see intelligence, which is like everything starts at prediction.
And prediction requires a model. You can't predict something unless you have a model of it.
Right. But the action is prediction. The thing the model does is prediction.
But you can then extend it to things like, what would happen if I took this today? I went and
did this. What would be likely? You can extend prediction to like, oh, I want to get a promotion
at work. What action should I take? And you can say, if I did this, I could predict what might
happen. If I spoke to someone, I could predict what would happen. So it's not just low-level
predictions. Yeah, it's all predictions. It's like this black box. You can ask basically any
question, low-level or high-level. So we started off with that observation. It's all, it's this
nonstop prediction. And I write about this in the book about, and then we asked, how do neurons
actually make predictions? And physically, like, what does the neuron do when it makes a prediction?
And, or the neural tissue does when it makes prediction. And then we asked, what are the
mechanisms by how we build a model that allows you to make prediction? So we started with prediction
as sort of the fundamental research agenda, if in some sense, like, and say, well, we understand
how the brain makes predictions. We will understand how it builds these models and how it learns,
and that's the core of intelligence. So it was like, it was the key that got us in the door
to say, that is our research agenda. Understand predictions. So in this whole process,
where does intelligence originate, would you say? So if we look at things that are
much less intelligent to humans, and you start to build up a human, the process of evolution,
where is this magic thing that has a prediction model or a model that's able to predict
that starts to look a lot more like intelligence? Is there a place where
Richard Dawkins wrote an introduction to your, to your book, an excellent introduction? I mean,
it puts a lot of things into context. And it's funny, just looking at parallels for your book
and Darwin's origin of species. So Darwin wrote about the origin of species. So what is the
origin of intelligence? Yeah, well, we have a theory about it. And it's just that is the theory.
The theory goes as follows. As soon as living things started to move, they're not just floating
in sea, they're not just a plant, you know, grounded someplace. As soon as they started
to move, there was an advantage to moving intelligently, to moving in certain ways.
And there's some very simple things you can do, you know, bacteria or single cell organisms can
move towards a source of gradient of food or something like that. But an animal that might
know where it is and know where it's been and how to get back to that place or an animal that might
say, oh, there was a source of food someplace, how do I get to it? Or there was a danger,
how do I get to it? There was a mate, how do I get to them? There was a big evolution advantage
to that. So early on, there was a pressure to start understanding your environment like,
where am I? And where have I been? And what happened in those different places?
So we still have this neural mechanism in our brains. It's in the mammals, it's in the
hippocampus and entorhinal cortex, these are older parts of the brain. And these are very well studied.
We build a map of our environment. So these neurons in these parts of the brain know where
I am in this room and where the door was and things like that.
So a lot of other mammals have this? All mammals have this. And almost any animal that
knows where it is and can get around must have some mapping system, must have some way of saying,
I've learned a map of my environment, I have hummingbirds in my backyard. And they go the
same places all the time. They must know where they are. They just know where they are. They're
not just randomly flying around. They know particular flowers they come back to. So we all
have this. And it turns out it's very tricky to get neurons to do this, to build a map of an
environment. And so we now know there's these famous studies that's still very active about
place cells and grid cells and these other types of cells in the older parts of the brain
and how they build these maps of the world. It's really clever. It's obviously been under a lot
of evolutionary pressure over a long period of time to get good at this. So animals know where
they are. What we think has happened, and there's a lot of evidence that suggests this,
is that the mechanism we use to learn a map of a space was repackaged, the same type of neurons was repackaged into a more compact form. And that became the cortical column. And it was, in some sense, genericized, if that's a word. It was turned from a very specific thing about learning maps of environments into learning maps of anything, learning a model of anything,
not just your space, but coffee cups and so on. And it got sort of repackaged into a more compact
version, a more universal version, and then replicated. So the reason we're so flexible is
we have a very generic version of this mapping algorithm. And we have 150,000 copies of it.
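As a toy illustration of why a grid-cell-style code is such a useful building block (the numbers here are invented, and this is not a model of real cells): several modules that each track position only modulo their own spatial period can, in combination, identify a unique location over a much larger range.

```python
# Toy grid-cell-like location code: each "module" knows position only modulo
# its own period, but the combination of phases across modules uniquely
# identifies a location over a much larger range (up to the least common
# multiple of the periods). Periods here are arbitrary small numbers.

from functools import reduce
from math import gcd

periods = [3, 4, 5]  # hypothetical module periods

def encode(position):
    """Phase of each module for a given position."""
    return tuple(position % p for p in periods)

combined_range = reduce(lambda a, b: a * b // gcd(a, b), periods)  # lcm = 60
codes = {encode(x) for x in range(combined_range)}
print(len(codes), "distinct codes for", combined_range, "positions")  # 60 for 60
```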
Sounds a lot like the progress of deep learning.
How so?
So take neural networks that seem to work well for a specific task,
compress them, and multiply it by a lot. And then you just stack them on top of it. It's like the
story of transformers in natural language. But in deep learning networks, they end up,
you're replicating an element, but you still need the entire network to do anything.
Right. Here, what's going on, each individual element is a complete learning system.
This is why I can take a human brain, cut it in half, and it still works.
It's pretty amazing.
It's fundamentally distributed.
It's fundamentally distributed, complete modeling systems. But that's our story we like to tell.
I would guess it's likely largely right. But there's a lot of evidence supporting that story,
this evolutionary story. The thing which brought me to this idea is that the human brain
got big very quickly. So that led to the proposal a long time ago that, well,
there's this common element just instead of creating new things, it just replicated something.
We also are extremely flexible. We can learn things that we had no history about.
And so that tells us that the learning algorithm is very generic. It's very kind of universal.
Because it doesn't assume any prior knowledge about what it's learning.
And so you combine those things together and you say, okay, well, how did that come about?
Where did that universal algorithm come from? It had to come from something that wasn't universal.
It came from something that was more specific.
And so anyway, this led to our hypothesis that you would find grid cell and place cell equivalents in the neocortex. And when we first published our first papers on this theory,
we didn't know of evidence for that. It turns out there was some, but we didn't know about it.
And since then, we became aware of evidence for grid cells in parts of the neocortex. And now there's been new evidence coming out. There's some interesting papers
that came out just January of this year. So one of our predictions was if this
evolutionary hypothesis is correct, we would see grid cell and place cell equivalents, cells that work like them, throughout every column in the neocortex. And that's starting to be seen.
What does it mean that, why is it important that they're present?
Because it tells us, well, we're asking about the evolutionary origin of intelligence, right? So our theory is that these columns in the cortex are working on the same principles; they're modeling systems. And it's hard to imagine how neurons
do this. And so we said, Hey, it's really hard to imagine how neurons could learn these models
of things. We can talk about the details of that if you want. But there's another part of the brain we know learns models of environments. So could that mechanism, the mechanism that learned the model of this room, be used to learn a model of the water bottle? Is it the same mechanism?
So we said it's much more likely that the brain is using the same mechanism,
in which case they would have these equivalent cell types. So basically the whole theory is built
on the idea that these columns have reference frames and they're learning these models and these
grid cells create these reference frames. So, in some sense, the major predictive part of this theory is that we will find these equivalent mechanisms in each column in the neocortex, which tells us that that's what they're doing. They're learning these
sensory motor models of the world. So we were pretty confident that would happen. But now we're seeing the evidence. So the evolutionary process, nature, does a lot of copy and paste and sees what happens. Yeah. Yeah, there's no direction to it. But it just found out, like, hey, if I took these elements and made more of them, what happens? And let's hook them up to the eyes and let's hook them up to the ears, and that seems to work pretty well. Yeah, for us. Again, just to take a quick
step back to our conversation of collective intelligence. Do you sometimes see that as just
another copy and paste aspect, copying and pasting these brains in humans, making a lot of them, and then creating social structures that then almost operate as a single brain?
I wouldn't have said it, but as you said it, it sounded pretty good.
So to you, the brain is fundamental, is it something... I mean, our goal is to understand how the neocortex works. We can argue how essential that is to understanding the human brain, because it's not the entire human brain. You can argue how essential that is to understanding human intelligence. You can argue how essential it is to, you know, sort of communal intelligence. Our goal was to understand the neocortex. Yeah. So what is the neocortex and where does it fit in the various aspects of what the brain does? Like how important is it to you? Well, obviously, as I mentioned in the beginning, it's about 70 to 75% of the volume of a human brain. So it's, you know, it dominates our brain in terms of size.
Not in terms of number of neurons, but in terms of size. Size isn't everything, Jeff. I know.
But it's not nothing. We know that all high level vision, hearing and touch happens in the neocortex. We know that all language occurs and is understood in the neocortex, whether that's spoken language, written language, sign language, the language of mathematics, the language of physics, music, math, you know. We know that all high level planning and thinking occurs in the neocortex. If I were to say, you know, what part of your brain designed a computer and understands programming and creates music, it's all the neocortex. So that's just kind of an undeniable fact.
But then there are other parts of our brain that are important too, right? Our emotional states, regulating our body. So the way I like to look at it is, you know, can you understand the neocortex without the rest of the brain? And some people say you can't.
I think absolutely you can. It's not that they're not interacting, but you can understand it.
Can you understand the neocortex without understanding the emotions of fear? Yes,
you can. You can understand how the system works. It's just a modeling system. I make the analogy
in the book that it's, it's like a map of the world and how that map is used depends on who's
using it. So how our map of the world in our neocortex, how we manifest as a human
depends on the rest of our brain. What are our motivations? You know, what are my desires? Am I
a nice guy or not a nice guy? Am I a cheater or am I, you know, or not a cheater? You know,
how important different things are in my life? So, but the neocortex can be understood on its own. And I say that as a neuroscientist; I know there's all these interactions, and I don't want to say we don't know them or we don't think about them. But from a layperson's point of view,
you can say it's a modeling system. I don't generally think too much about the communal
aspect of intelligence, which you brought up a number of times already. So that's not really
been my concern. I just wonder if there's a continuum from the origin of the universe, like
these pockets of complexity that form living organisms. I wonder if we're just... if you look at humans, we feel like we're at the top. I wonder if every living pocket of complexity probably thinks they're, pardon the French, the shit. Yeah. They're at the top of the pyramid.
Well, if they're thinking. Well, then what is thinking? In a sense, the whole point is,
in their sense of the world, they, their sense is that they're at the top of it.
I think. What is a turtle? But you're bringing up, you know, the problems of
complexity and complexity theory are, you know, it's a huge, interesting problem in science.
And, you know, I think we've made surprisingly little progress in understanding complex systems
in general. And so, you know, the Santa Fe Institute was founded to, to study this and,
and even the scientists there will say it's really hard. We haven't really been able to figure out
exactly, you know, that science hasn't really congealed yet. We're still trying to figure out
the basic elements of that science. What, you know, where does complexity come from and what is
it and how you define it, whether it's DNA creating bodies or phenotypes, or individuals creating societies, or ants and, you know, markets and so on. It's a very complex thing. I'm
not a complexity theorist person, right? And I think you need to ask, well, the brain itself is
a complex system. So can we understand that? I think we've made a lot of progress understanding
how the brain works. So, but I haven't brought it out to like, oh, well, where are we on the
complexity spectrum? You know, it's like, it's a great question. I prefer for that answer to be,
we're not special. It seems like if we're honest, most likely we're not special. So if there is a
spectrum, we're probably not in some kind of significant place. I think there's one thing
we could say that we are special. And again, only here on Earth, I'm not saying in the universe: if we think about knowledge, what we know, human brains are clearly the only brains to have certain types of knowledge. We're the only brains on this Earth to understand what the Earth is, how old it is, what the universe is as a whole. We're the only organisms that understand DNA and the origins of, you know, of species. No other species on this
planet has that knowledge. So if we think about, I like to think about, you know, one of the endeavors
of humanity is to understand the universe as much as we can. I think our species is further along
on that, undeniably, whether our theories are right or wrong, we can debate, but at least we
have theories, you know. We know what the sun is and how its fusion works and what black holes are. And, you know, we know the general theory of relativity, and no other animal has any of this
knowledge. So from that sense, we're special. Are we special in terms of the hierarchy of
complexity in the universe? Probably not. Can we look at a neuron? You say that prediction
happens in the neuron. What does that mean? So the neuron traditionally is seen as the basic element
of the brain. So I mentioned this earlier that prediction was our research agenda.
Yeah. We said, okay, how does the brain make a prediction? Like, I'm about to grab this water
bottle and my brain is predicting what I'm going to feel on all my parts of my fingers. If I felt
something really odd on any part here, I'd notice it. So my brain is predicting what it's going to
feel as I grab this thing. So what is that? How does that manifest itself in neural tissue?
Right? We got brains made of neurons and there's chemicals and there's neurons and there's spikes
and the connect, you know, where is the prediction going on? And one argument could be that, well,
when I'm predicting something, a neuron must be firing in advance. It's like, okay, this neuron
represents what you're going to feel and it's firing. It's sending a spike. And certainly that
happens to some extent. But our predictions are so ubiquitous that we're making so many of them, most of which we're totally unaware of. The vast majority of them, you have no idea that you're making. So we were trying to figure out, how could this be? Where are these happening? Right? And I won't walk you through the whole story unless you insist upon it. But
we came to the realization that most of your predictions are occurring inside individual
neurons, especially these, the most common neuron, the pyramidal cells. And there are,
there's a property of neurons. We, everyone knows or most people know that a neuron is a cell and it
has this spike called an action potential and it sends information. But we now know that there's
these spikes internal to the neuron. They're called dendritic spikes. They travel along the
branches of the neuron and they don't leave the neuron. They're just internal only. There's far
more dendritic spikes than there are action potentials, far more. They're happening all the
time. And what we came to understand is that those dendritic spikes are actually a form of prediction. They're telling the neuron, the neuron is saying, I expect that I might become active shortly. So the internal spike is a way of saying, you might be generating external spikes soon. I predict you're going to become active. And we wrote a paper in 2016, which explained how this
manifests itself in neural tissue and how it is that this all works together. We think there's a lot of evidence supporting it. So that's where we think most of these predictions are occurring, internal to the neuron, and that's why you can't perceive them. Well, from understanding the prediction mechanism of a single neuron, do you think there's
deep insights to be gained about the prediction capabilities of the mini brains within the bigger
brain and the brain? Oh yeah. So having a prediction inside an individual neuron is not that useful by itself. You know, so what? The way it manifests itself in neural tissue is that when a neuron emits these spikes, a very singular type of event, if a neuron is predicting that it's going to be active, it emits its spike a little bit sooner, just a few milliseconds sooner than it would have otherwise. I give the analogy in the book of a sprinter on the starting blocks in a race. If someone says, get ready, set, you get up and you're ready to go. And then when the race starts, you get a little bit earlier start. So that ready, set is like the prediction, and the neuron is like, ready to go
quicker. And what happens is when you have a whole bunch of neurons together and they're all getting
these inputs, the ones that are in the predictive state, the ones that are anticipating to become
active, if they do become active, they fire sooner and they disable everything else. And it leads to different representations in the brain. So it's not isolated just to the neuron; the prediction occurs within the neuron, but the network behavior changes. So under different predictions, inputs have different representations. What I predict is going to be different under different contexts, you know; what my input will be is different under different contexts. So this is a key to how the whole theory works.
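Here is a loose sketch of that dynamic in Python, with made-up cell names and none of the biological detail of the actual theory: cells that were put into a predictive state by their dendritic input fire first and suppress the rest of their minicolumn, so the same input is represented differently in different contexts, while an unpredicted input makes every cell fire.

```python
# Loose sketch of "predicted neurons fire first and inhibit the rest."
# Each cell in a minicolumn represents the same input in a different context.
# If some cells were predicted (depolarized by dendritic spikes), only they
# become active when the input arrives; if nothing was predicted, the whole
# minicolumn bursts, signaling surprise. Purely illustrative.

def activate(minicolumn, predicted):
    """Return which cells fire when the minicolumn's input arrives."""
    winners = [cell for cell in minicolumn if cell in predicted]
    return winners if winners else list(minicolumn)  # burst when surprised

minicolumn = ["cell_A", "cell_B", "cell_C"]

print(activate(minicolumn, predicted={"cell_B"}))  # ['cell_B']  expected input
print(activate(minicolumn, predicted=set()))       # all cells   unexpected input
```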
So the theory of the 1000 brains, if you were to count the number of brains, how would you do it?
The 1000 brain theory says that basically every cortical column in your neocortex is a complete
modeling system. And that when I ask, where do I have a model of something like a coffee cup,
it's not in one of those models, it's in thousands of those models. There's thousands of models
of coffee cups. That's what the 1000 brains. Then there's a voting mechanism. Then there's a voting
mechanism, which is the thing you're conscious of, which leads to your
singular perception. That's why you perceive something. So that's the 1000 brain theory.
The details, how we got to that theory are complicated. It wasn't, we just thought of it
one day. And one of those details, we had to ask, how does a model make predictions? And we talked
about just these predictive neurons. That's part of this theory. That's like saying, oh, it's a
detail, but it was like a crack in the door. It's like, how are we going to figure out how these
neurons do this? What is going on here? So we just looked at prediction as like, well,
we know that's ubiquitous. We know that every part of the cortex is making predictions.
Therefore, whatever the predictive system is, it's going to be everywhere. We know there's a
gazillion predictions happening at once. So this, too, we could start teasing apart, asking questions about how neurons could be making these predictions. And that sort of built up to what we now have, this 1000 brain theory, which is complex. I can state it simply, but we just
didn't think of it. We had to get there step by step. It took years to get there.
And where do reference frames fit in? So yeah.
Okay. So again, a reference frame, I mentioned earlier about the model of a house. And I said,
if you're going to build a model of a house in a computer, they have a reference frame. And you
can think of a reference frame like Cartesian coordinates, like the X, Y, and Z axes. So I could say, oh,
I'm going to design a house. I can say, well, the front door is at this location, X, Y, Z,
and the roof is at this location, X, Y, Z, and so on. That's the type of reference frame.
So it turns out, for you to make a prediction... I walk through a thought experiment in the book where I was predicting what my finger was going to feel when I touched a coffee cup. It was a ceramic coffee cup, but this one will do. And what I realized is that to make a prediction of what my finger is going to feel, well, it's going to feel different here than if I touch the hole or the thing on the bottom. To make that prediction, the cortex needs to know where the finger is, the tip of the finger, relative to the coffee cup.
And exactly relative to the coffee cup. And to do that, I have to have a reference frame
for the coffee cup. There has to be a way of representing the location of my finger relative to the coffee cup. And then we realized, of course, every part of your skin has to have a reference frame relative to the things it touches. And then we did the same thing with vision.
But so the idea that a reference frame is necessary to make a prediction when you're
touching something or when you're seeing something and you're moving your eyes or
moving your fingers, it's just a requirement to know what to predict. If I have a structure and I'm going to make a prediction, I have to know where it is that I'm looking or touching.
So then we said, well, how do neurons make reference frames? It's not obvious.
XYZ coordinates don't exist in the brain. It's just not the way it works. So that's when we
looked at the older part of the brain, the hippocampus and the entorhinal cortex,
where we knew that in that part of the brain, there's a reference frame for a room or a reference
frame for an environment. Remember I talked earlier about how you could make a map of this room.
So we said, oh, they are implementing reference frames there. So we knew that a reference
frame needed to exist in every cortical column. And so that was a deductive
thing. We just deduced it. So you take the old mammalian ability to know where you are in a
particular space and you start applying that to higher and higher levels. Yeah. You first,
you apply it to like where your finger is. So here's how I think about it. The old part of the brain says, where's my body in this room? Yeah. The new part of the brain says,
where's my finger relative to this object? Yeah. Where is a section of my retina relative
to this object? I'm looking at one little coin here. Where is that relative to this patch of
my retina? Yeah. And then we take the same thing and apply it to concepts, mathematics, physics,
you know, humanity, whatever you want to think about. And eventually you're pondering your own
mortality. Well, whatever. But the point is, when we think about the world, when we have
knowledge about the world, how is that knowledge organized, Lex? Where is it in your head? The
answer is it's in reference frames. So the way I learned the structure of this water bottle,
where the features are relative to each other, when I think about history or democracy or
mathematics, there's the same basic underlying structure happening. There are reference frames that you're assigning the knowledge to. So in the book, I go through examples
like mathematics and language and politics. But the evidence is very clear in the neuroscience.
The same mechanism that we use to model this coffee cup, we're going to use to model
high level thoughts, you know, the demise of humanity, whatever you want to think about.
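A minimal sketch of that idea, with invented locations and features: a model is a mapping from locations in the object's own reference frame to expected features, so knowing where the fingertip is relative to the object is enough to predict what it should feel, and a mismatch is the signal that the model needs updating.

```python
# Toy model of an object as a reference frame: locations in the object's own
# coordinates map to expected features. Prediction is a lookup given the
# sensor's location relative to the object; a mismatch flags a model error.
# Locations and features are invented for illustration.

coffee_cup = {
    (0, 10): "smooth ceramic",  # side of the cup
    (0, 12): "rounded rim",     # top edge
    (5, 6):  "curved handle",   # handle
}

def predict(model, finger_location):
    return model.get(finger_location, "unknown -- move and learn")

finger_location = (0, 12)
sensed = "rounded rim"
expected = predict(coffee_cup, finger_location)
print("surprise, update the model" if sensed != expected
      else f"prediction confirmed: {expected}")
```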
It's interesting to think about how different are the representations of those higher
dimensional concepts, higher level concepts, how different the representation there is
in terms of reference frames versus spatial ones. But the interesting thing is, it's a different application,
but it's the exact same mechanism. But isn't there some aspect to higher level concepts that
they seem to be hierarchical? They just seem to integrate a lot of information into them.
So are physical objects. So take this water bottle. I'm not partial to this brand,
but this is a Fiji water bottle and it has a logo on it. I use this example in my book,
our company's coffee cup has a logo on it. But this object is hierarchical. It's got a cylinder
and a cap, but then it has this logo on it and the logo has a word. The word has letters,
the letters have different features. So I don't have to remember, I don't have to think about
this. So I say, oh, there's a Fiji logo on this water bottle. I don't have to go through and say,
oh, what is the Fiji logo? It's the F and the I and the J and the I, and there's a hibiscus flower.
Oh, it has the stain on it. I don't have to do that. I just incorporate all of that in some
sort of hierarchical representation. I say, put this logo on this water bottle. And then the
logo has a word and the word has letters, all hierarchical.
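As a toy illustration of that compositional structure (every name and location here is invented): each model just points to child models at locations, so the bottle refers to the logo, the logo to its letters and the flower, without re-describing their internals.

```python
# Toy hierarchical object model: each model lists child models at locations
# rather than re-describing their internals. All structure is invented.

letters = {"children": {(0, 0): "F", (1, 0): "I", (2, 0): "J", (3, 0): "I"}}
logo = {"children": {(0, 0): letters, (0, 1): "hibiscus flower"}}
bottle = {"children": {(0, 5): "cap", (0, 0): "cylinder", (0, 3): logo}}

def depth(model):
    """Count how many levels of nested structure a model contains."""
    if not isinstance(model, dict):
        return 1
    return 1 + max(depth(child) for child in model["children"].values())

print(depth(bottle))  # 4 levels: bottle -> logo -> letters -> individual letters
```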
It's all that stuff is big. It's amazing that the brain instantly just does all that.
The idea that there's water, it's liquid and the idea that you can
drink it when you're thirsty, the idea that there's brands. And then there's like,
all of that information is instantly built into the whole thing once you perceive it.
So I wanted to get back to your point about hierarchical representation. The world itself
is hierarchical. And I can take this microphone in front of me. I know inside there's going to be
some electronics. I know there's going to be some wires, and I know there's going to be a little diaphragm that moves back and forth. I don't see that, but I know it. So everything in the world
is hierarchical. Just go into a room. It's composed of other components. The kitchen has a refrigerator. The refrigerator has a door. The door has a hinge. The hinge has screws and a pin.
So anyway, the modeling system that exists in every cortical column
learns the hierarchical structure of objects. So it's a very sophisticated modeling system
in this grain of rice. It's hard to imagine, but this grain of rice can do really sophisticated
things. It's got a hundred thousand neurons in it. It's very sophisticated. So that same
mechanism that can model a water bottle or a coffee cup can model conceptual objects as well.
That's the beauty of the discovery that this guy, Vernon Mountcastle, made many, many
years ago, which is that there's a single cortical algorithm underlying everything we're doing.
So common sense concepts and higher level concepts are all represented in the same way?
They use the same mechanisms. It's a little bit like computers. All computers are universal Turing machines, even the little teeny one in my toaster and the big one that's running some cloud server someplace. They're all running on the same principle. They can be applied to different
things. So the brain is all built on the same principle. It's all about learning these models,
structured models using movement and reference frames. And it can be applied to something as
simple as a water bottle and a coffee cup. And it can be applied to thinking like, what's the future
of humanity? And why do you have a hedgehog on your desk? I don't know. Nobody knows.
I think it's a hedgehog. That's right. It's a hedgehog in the fog. It's a Russian reference.
Does it give you any inclination or hope about how difficult that is to engineer common sense
reasoning? So how complicated is this whole process? So looking at the brain, is this a
marvel of engineering or is it pretty dumb stuff stacked on top of each other through a pretty extensive copying process? Can it be both? Can it be both, right? I don't know if it can be both because
if it's an incredible engineering job, that means evolution did a lot of work.
Yeah, but then it just copied that, right? So as I said earlier, the figuring out how to model
something like a space is really hard. And evolution had to go through a lot of tricks,
and these cells I was talking about, these grid cells and place cells, they're really
complicated. This is not simple stuff. This neural tissue works on these really unexpected,
weird mechanisms. But it did it. It figured it out. But now you could just make lots of copies
of it. But then finding, yeah, so it's a very interesting idea that there's a lot of copies of a
basic mini brain. But the question is how difficult it is to find that mini brain that you can copy
and paste effectively. Well, today, we know enough to build this. I'm sitting here saying, you know, I know the steps we have to go through. There's still some engineering problems to solve,
but we know enough. And this is not like, Oh, this is an interesting idea, we have to go
think about it for another few decades. No, we actually understand it in pretty good detail.
So not all the details, but most of them. So it's complicated, but it is an engineering problem.
So in my company, we are working on that. We basically have a roadmap for how we do this. It's not going to take decades; it's more like a few years,
optimistically, but I think that's possible. It's, you know, complex things. If you understand
them, you can build them. So in which domain do you think it's best to build them? Are we talking
about robotics, like entities that operate in the physical world that are able to interact with
that world? Are we talking about entities that operate in the digital world? Are we talking
about something more like, more specific, like it's done in the machine learning community,
where you look at natural language or computer vision. Where do you think is easiest to?
It's the first, it's the first two more than the third one, I would say.
Again, let's just use computers as an analogy. The pioneers of computing, people like John von Neumann and Turing, they created this thing, you know, we now call the universal
Turing machine, which is a computer, right? Did they know how it was going to be applied,
where it was going to be used, you know, could they envision any of the future? No,
they just said, this is like a really interesting computational idea about algorithms and how you
can implement them in a machine. And we're doing something similar to that today, like we are,
we are building this sort of universal learning principle that can be applied to many, many
different things. But the robotics piece of that, the interactive.
Okay, all right. Let us be specific. You can think of this cortical column is what we call a
sensory motor learning system. It has the idea that there's a sensor, and then it's moving.
That sensor can be physical. It could be like my finger, and it's moving in the world. It could
be like my eye, and it's physically moving. It can also be virtual. So it could be, an example
would be I could have a system that lives in the internet that actually samples information on the
internet and moves by following links. That's a sensory motor system. So something that echoes
the process of a finger moving along a... But in a very, very loose sense. It's like, again, learning is inherently about discovering the structure of the world. You have to move through the world, even if it's a virtual world, even if it's a conceptual world, you have to move through it. It doesn't exist in one place. It has some structure to it.
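A toy sketch of such a virtual sensory-motor system, with an invented link structure standing in for the internet: "sensing" is sampling a node, "moving" is following a link, and the learned model is just a record of what was observed at each location.

```python
# Toy virtual sensory-motor learner: the "sensor" samples a node, "movement"
# is following a link, and the learned model records which observation was
# found at which location. The graph and its contents are invented.

linked_pages = {
    "home":    {"observation": "index of topics", "links": ["physics", "brains"]},
    "physics": {"observation": "notes on relativity", "links": ["home"]},
    "brains":  {"observation": "notes on cortical columns", "links": ["home"]},
}

def explore(pages, start):
    """Move through the structure by following links, building a model."""
    model, to_visit, seen = {}, [start], set()
    while to_visit:
        location = to_visit.pop()
        if location in seen:
            continue
        seen.add(location)
        model[location] = pages[location]["observation"]  # "sense" this location
        to_visit.extend(pages[location]["links"])          # "move" along links
    return model

print(explore(linked_pages, "home"))
```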
So here's a couple of predictions getting at what you're talking about. In humans, the same
algorithm does robotics. It moves my arms, my eyes, my body. In the future, to me, robotics and AI
will merge. They're not going to be separate fields because the algorithms that are really
controlling robots are going to be the same algorithms we have in our brain, these sensory
motor algorithms. Today, we're not there, but I think that's going to happen. But not all AI
systems will have to be robotics. You can have systems that have very different types of embodiments.
Some will have physical movements. Some will not have physical movements. It's a very generic
learning system. Again, it's like computers. The Turing machine doesn't say how it's supposed
to be implemented. It doesn't say how big it is. It doesn't say what you can apply it to, but it's
a computational principle. Cortical column equivalent is a computational principle about
learning. It's about how you learn and it can be applied to a gazillion things. I think this
impact of AI is going to be as large if not larger than computing has been in the last century,
by far, because it's getting at a fundamental thing. It's not a vision
system or a hearing system. It is a learning system. It's a fundamental
principle how you learn the structure in the world, how you gain knowledge and be intelligent.
That's what the Thousand Brains Theory says is going on. We have a particular implementation in our head,
but it doesn't have to be like that at all. Do you think there's going to be some kind of impact?
Let me ask it another way. What do increasingly intelligent AI systems do with us humans, in
the following sense: how hard is the human-in-the-loop problem? How hard is it to interact?
The finger on the coffee cup equivalent of having a conversation with a human being.
How hard is it to fit into our little human world?
I think it's a lot of engineering problems. I don't think it's a fundamental problem.
I could ask you the same question. How hard is it for computers to fit into a human world?
Right. That's essentially what I'm asking. How elitist are we as humans?
Do we try to keep out other systems? I don't know. I'm not sure that's the right question.
Let's look at computers as an analogy. Computers are million times faster than us.
They do things we can't understand. Most people have no idea what's going on when
they use computers. How do we integrate them in our society? We don't think of them as their own
entity. They're not living things. We don't afford them rights. We rely on them. Our survival
as seven billion people or something like that is relying on computers now.
Don't you think that's a fundamental problem that we see them as something we don't give rights to?
Computers? Yeah, computers. Robots, computers,
intelligent systems, it feels like for them to operate successfully, they would need to have
a lot of the elements that we would start having to think about should this entity have rights.
I don't think so. I think it's tempting to think that way. First of all,
hardly anyone thinks that about computers today. No one says, oh, this thing needs rights. I
shouldn't be able to turn it off, or if I throw it in the trash can and hit it with a sledgehammer,
I've committed a criminal act. No, no one thinks that. Now we think about intelligent machines,
which is where you're going. All of a sudden, well, now we can't do that. I think the basic
problem we have here is that people think intelligent machines will be like us. They're
going to have the same emotions as we do, the same feelings as we do. What if I can build an
intelligent machine that absolutely couldn't care less about whether it was on or off or destroyed
or not? It just doesn't care. It's just like a map. It's just a modeling system. It has no desires
to live, nothing. Is it possible to create a system that can model the world deeply and not care
about whether it lives or dies? Absolutely. No question about it. To me, that's not 100% obvious.
It's obvious to me. We can debate if we want. Where does your desire to live come from?
It's an old evolutionary design. I mean, we can argue, does it really matter if we live or not?
Objectively, no. We're all going to die eventually. But evolution makes us want to live.
Evolution makes us want to fight to live. Evolution makes us want to care for and love one another
and to care for our children and our relatives and our family and so on. Those are all good things,
but they come about not because we're smart, but because we're animals that evolved that way. The hummingbird
in my backyard cares about its offspring. Every living thing in some sense cares about
surviving. But when we talk about creating intelligent machines, we're not creating life.
We're not creating evolving creatures. We're not creating living things. We're just creating
a machine that can learn really sophisticated stuff. And that machine, it may even be able to
talk to us. But it's not going to have a desire to live unless somehow we put it into that system.
Well, there's learning, right? The thing is... But you don't learn to want to live.
It's built into you. Well, it's hard to say. People like Ernest Becker argue,
so, okay, there's the fact of the finiteness of life. The way we think about it is something we learned,
perhaps. So, okay, some people decide they don't want to live, and some people decide...
You can... The desire to live is built into DNA, right? But I think what I'm trying to get to is,
in order to accomplish goals, it's useful to have the urgency of mortality. So what the Stoics
talked about is meditating on your mortality. It might be a very useful thing to do, to meditate on death
and have the urgency of death. And to conceive yourself as an entity that operates in this
world that eventually will no longer be a part of this world. And actually conceive of yourself
as a conscious entity might be very useful for you to be a system that makes sense of the world.
Otherwise, you might get lazy. Well, okay. We're going to build these machines, right?
So we're talking about building AI. But we're building the equivalent of the
cortical columns. The neocortex. The neocortex. And the question is, where do they arrive at?
Because we're not hard coding everything in. Well, in terms of if you build the neocortex
equivalent, it will not have any of these desires or emotional states. Now, you could argue that
neocortex won't be useful unless I give it some agency, unless I give it some desire,
unless I give it some motivation. Otherwise, it'll be lazy and do nothing, right? You could argue that.
But on its own, it's not going to do those things. It's just not going to sit there and say,
I understand the world. Therefore, I care to live. No, it's not going to do that. It's just
going to say, I understand the world. Why is that obvious to you? Do you think it's... Okay, let me
ask it this way. Do you think it's possible it will at least assign to itself agency and perceive
itself in this world as being a conscious entity as a useful way to operate in the world and to
make sense of the world? I think intelligent machine can be conscious, but that does not,
again, imply any of these desires and goals that you're worried about.
We can talk about what it means for a machine to be conscious.
By the way, not worry about, but get excited about. It's not necessarily that we should worry
about it. I think there's a legitimate problem, or not a problem, but a question to ask: if you build
this modeling system, what's it going to model? What's its desire? What's its goal? What are we
applying it to? That's an interesting question. One thing, and it depends on the application.
It's not something that's inherent to the modeling system. It's something we apply to the modeling
system in a particular way. If I wanted to make a really smart car, it would have to know about
driving in cars and what's important in driving in cars. It's not going to figure that out on its
own. It's not going to sit there and say, you know, I've understood the world and I've decided,
you know, no, no, no, no. We're going to have to tell it. We're going to have to say like,
so I imagine I make this car really smart. It learns about your driving habits. It learns
about the world. It's just, you know, is it one day going to wake up and say, you know what,
I'm tired of driving and doing what you want. I think I have better ideas about how to spend my
time. Okay. No, it's not going to do that. Well, part of me is playing a little bit of
devil's advocate, but part of me is also trying to think through this because I've studied cars
quite a bit and I've studied pedestrians and cyclists quite a bit. And there's part of me that
thinks that there needs to be more intelligence than we realize in order to drive successfully.
That game theory of human interaction seems to require some deep understanding of human nature.
Okay. When a pedestrian crosses the street, there's some sense they look at a car usually
and then they look away. There's some sense in which they say, I believe that you're not going
to murder me. You don't have the guts to murder me. This is the little dance of pedestrian car
interaction is saying, I'm going to look away and I'm going to put my life in your hands
because I think you're human. You're not going to kill me. And then the car, in order to successfully
operate in like Manhattan streets, has to say, no, no, no, no, I am going to kill you like a
little bit. There's a little bit of this weird inkling of mutual murder and that's a dance
and then somehow successfully operate through that. Are you born with that? Or did you
learn that social interaction? I think it might have a lot of the same elements that you're
talking about, which is we're leveraging things we were born with and applying them in the context
that-
All right. I would have said that that kind of interaction is learned because people in
different cultures have different interactions like that. If you cross the street in different
cities and different parts of the world, they have different ways of interacting. I would say
that's learned, and I would say an intelligent system can learn that too, but it doesn't
follow that it becomes like us. An intelligent system can understand humans. It could understand that, just like I can
study an animal and learn something about that animal. I could study apes and learn something
about their culture and so on. I don't have to be an ape to know that. I may not understand it completely,
but I can understand something. So an intelligent machine can model that. That's just part of the
world. It's just part of the interactions. The question we're trying to get at, will the intelligent
machine have its own personal agency that's beyond what we assigned to it or its own personal
goals or will evolve and create these things? My confidence comes from understanding the
mechanisms I'm talking about creating. This is not hand-wavy stuff. It's down in the details.
I'm going to build it, and I know what it's going to look like and I know how it's going to behave.
I know the kind of things it can do and the kind of things it can't do. Just like when
I build a computer, I know it's not going to, on its own, decide to put another register inside of
it. It can't do that. There's no way. No matter what your software does, it can't add a register to
the computer. So in this way, when we build AI systems, we have to make choices about how we
embed them. So I talk about this in the book. I said, intelligent system is not just the
neocortex equivalent. You have to have that, but it has to have some kind of embodiment,
physical or virtual. It has to have some sort of goals. It has to have some sort of
ideas about dangers, about things it shouldn't do. We build in safeguards in the systems.
We have them in our bodies. We put them in our cars. My car follows my directions until the
day it sees I'm about to hit something and it ignores my directions and puts the brakes on.
So we can build those things in. So that's a very interesting problem, how to build those in.
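A minimal sketch of the kind of built-in safeguard described here: the system follows the operator's commands until a safety check overrides them. The command format, the time-to-collision check, and the thresholds are illustrative assumptions, not a real vehicle controller.

```python
# Hypothetical sketch of a built-in safeguard: obey the operator unless a
# safety condition fires, in which case ignore the command and brake.
from dataclasses import dataclass

@dataclass
class Command:
    throttle: float   # 0.0 .. 1.0
    brake: float      # 0.0 .. 1.0

def safeguarded(command: Command, distance_to_obstacle_m: float,
                speed_mps: float, min_gap_s: float = 2.0) -> Command:
    """Pass the human's command through unless the estimated time-to-collision
    drops below a safety margin; then override it and apply full braking."""
    time_to_collision = distance_to_obstacle_m / max(speed_mps, 0.1)
    if time_to_collision < min_gap_s:
        return Command(throttle=0.0, brake=1.0)   # override: emergency brake
    return command                                # otherwise, do as told

if __name__ == "__main__":
    # Obstacle 5 m ahead at 20 m/s -> override; 200 m ahead -> follow the command.
    print(safeguarded(Command(0.8, 0.0), distance_to_obstacle_m=5.0, speed_mps=20.0))
    print(safeguarded(Command(0.8, 0.0), distance_to_obstacle_m=200.0, speed_mps=20.0))
```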
I think where my opinion differs from most people about the risks of AI is that people assume that
somehow those things will just appear automatically, that it will evolve, that intelligence itself
begets that stuff or requires it. But it doesn't. The intelligence of the neocortex
equivalent doesn't require this. The neocortex equivalent just says,
I'm a learning system. Tell me what you want me to learn, ask me questions, and I'll
tell you the answers. But in that, again, it's, again, like a map. A map has no intent about
things, but you can use it to solve problems. Okay. So the building, engineering, the neocortex
in itself is just creating an intelligent prediction system.
Modeling system. Sorry, modeling system. You can use it to then make predictions.
But you can also put it inside a thing that's actually acting in this world.
You have to put it inside something. Again, think of the map analogy, right?
A map on its own doesn't do anything. It's just inert. It can learn, but it's inert.
So we have to embed it somehow in something to do something.
So what's your intuition here? You had a conversation
with Sam Harris recently where, sort of, you had a bit of a disagreement, and you're
stuck on this point. Elon Musk and Stuart Russell kind of worry about existential
threats of AI. What's your intuition? Why, if we engineer an increasingly intelligent
neocortex type of system in the computer, why that shouldn't be a thing that we...
It's interesting that you use the word intuition, and Sam Harris used the word intuition too.
And when he used that word, intuition, I immediately stopped and said,
that's the crux of the problem. He's using intuition. I'm not speaking about my intuition.
I'm speaking about something I understand, something I'm going to build, something I am
building, something I understand completely, or at least well enough to know what it's capable of.
I'm not guessing. I know what this thing's going to do. And I think most people who are worried,
they have trouble separating things out. They don't have the knowledge or the understanding about
like, what is intelligence? How is it manifest in the brain? How is it separate from these
other functions in the brain? And so they imagine it's going to be human-like or animal-like.
It's going to have the same sort of drives and emotions we have, but there's no reason for that.
That's just because there's an unknown. If the unknown is like, oh my God,
I don't know what this is going to do. We have to be careful. It could be like us,
but really smarter. I'm saying, no, it won't be like us. It'll be really smarter,
but it won't be like us at all. But I'm coming from that not because I'm just guessing,
I'm not using intuition. I'm basically like, okay, I understand how this thing works. This is
what it does. Does that make sense to you? Okay. But to push back, so I also disagree with the
intuitions that Sam has, but I also disagree with what you just said, which, you know,
what's a good analogy. So if you look at the Twitter algorithm in the early days,
just recommender systems, you can understand how recommender systems work. What you can't
understand in the early days is when you apply that recommender system at scale to thousands
and millions of people, how that can change societies. So the question is, yes, you're just
saying this is how an engineered neocortex works, but when you have a very useful
TikTok type of service that goes viral, when your neocortex-based system goes viral, and then millions of
people start using it, can that destroy the world? No. Well, first of all, stepping back,
one thing I want to say is that AI is a dangerous technology. I'm not denying that.
All technology is dangerous. Well, and AI, maybe particularly so. Okay. So
am I worried about it? Yeah, I'm totally worried about it. But the narrow component
we're talking about now is the existential risk of AI. So I want to make that distinction, because
I think AI can be applied poorly. It can be applied in ways that people aren't going to understand
the consequences of. These are all potentially very bad things, but they're not the AI system
creating this existential risk on its own. And that's the only place that I disagree with other
people. Right. So I think the existential risk thing is humans are really damn good at surviving.
So to kill off the human race would be very, very difficult. Yes, but I'll go further. I don't think
AI systems are ever going to try to. I don't think AI systems are ever going to like say,
I'm going to ignore you. I'm going to do what I think is best. I don't think that's going to
happen, at least not in the way I'm talking about it. So the Twitter recommendation algorithm
is an interesting example. Let's use computers as an analogy again. I build a computer. It's a
universal computing machine. I can't predict what people are going to use it for. They can build
all kinds of things. They can even create computer viruses. It's all kinds of stuff.
So there's some unknown about its utility and about where it's going to go. But on the other
hand, I pointed out that once I build a computer, it's not going to fundamentally change how it
computes. Like I used the example of a register, which is an internal part of a computer.
You know, I say it can't just add one on its own, because computers don't evolve. They don't replicate.
They don't evolve. They don't, you know, the physical manifestation of the computer itself
is not going to change. There are certain things it can't do. Right. So we can break this into
things that might happen that we can't predict, and things that are just impossible to
happen. Unless we go out of our way to make them happen, they're not going to happen, unless somebody
makes them happen. Yeah. So there's a bunch of things to say. One is the physical aspect,
which you're absolutely right. We have to build a thing for it to operate in the physical world
and you can just stop building them. You know, the moment they're not doing the thing you want
them to do, or just change the design. Or change the design. The question is, I mean,
it's possible in the physical world, and this is probably the longer term, that you automate the
building. It makes a lot of sense to automate the building. There's a lot of factories
that are doing more and more automation to go from raw resources to the final product.
It's obviously much more efficient to create a factory
that's creating robots that do something, you know, something extremely useful for society.
It could be personal assistants. It could be your toaster, but a toaster that
has a much deeper knowledge of your culinary preferences. Yeah. And that could... Well, I think now you've
hit on the right thing. The real thing we need to be worried about next is self-replication.
Right. That is the thing that we're in the physical world or even the virtual world.
Self-replication, because self-replication is dangerous. You're probably more likely to be
killed by a virus, you know, or a human-engineered virus. The technology is getting to the point
where almost anybody, well, not anybody, but a lot of people could create
a human-engineered virus that could wipe out humanity. That is really dangerous. No intelligence
required. Just self-replication. So we need to be careful about that. So when I think about,
you know, AI, I'm not thinking about robots building robots. Don't do that. Don't build a,
you know, just. Well, that's because you're interested in creating intelligence.
It seems like self-replication is a good way to make a lot of money.
Well, all right. But so is, you know, maybe editing viruses is a good way to, I don't know.
The point is, as a society, when we want to look at existential risks, the existential risks we face
that we can control almost all revolve around self-replication.
Yes. The question is, I don't see a good way to make a lot of money by engineering viruses
and deploying them on the world. There could be, there could be applications that are useful.
But let's separate out. Let's separate out. I mean, you don't need to. You only need some,
you know, terrorists who want to do it because it doesn't take a lot of money to make viruses.
Let's just separate out what's risky and what's not risky. I'm arguing that the intelligence
side of this equation is not the risk. It's not risky at all. It's the self-replication side of the
equation that's risky. And I'm not dismissing that. I'm scared as hell.
It's like the paperclip maximizer thing. Those are often like talked about in the same conversation.
I think you're right. Creating ultra-intelligent, super-intelligent systems is not necessarily
coupled with a self-replicating, arbitrarily self-replicating systems.
Yeah. And you don't get evolution unless you're self-replicating.
Yeah. And so I think that's just this argument,
that people have trouble separating those two out. They just think, oh, yeah, intelligence looks
like us. And look how, look at the damage we've done to this planet. Like how we've, you know,
destroyed all these other species. Yeah, well, we replicate. We have eight billion of us or seven
billion of us now. I think the idea is that the more intelligent we're able to build systems,
the more tempting it becomes from a capitalist perspective of creating products. The more
tempting it becomes to create self-reproducing systems.
All right. So let's say that's true. So does that mean we don't build intelligent systems?
No. That means we regulate, we understand the risks, we regulate them.
You know, look, there's a lot of things we could do as society which have some sort of
financial benefit to someone which could do a lot of harm. And we have to learn how to regulate
those things. We have to learn how to deal with those things. I will argue this. I would say the
opposite. Like I would say having intelligent machines at our disposal will actually help us
in the end more because it'll help us understand these risks better. It'll help us mitigate these
risks better. There might be ways of saying, oh, well, how do we solve climate change problems?
You know, how do we do this or how do we do that? It's just like computers are dangerous in the hands
of the wrong people, but they've been so great for so many other things that we live with those dangers.
And I think we have to do the same with intelligent machines. We just, but we have
to be constantly vigilant about this idea of A, bad actors doing bad things with them and B,
don't ever, ever create a self-replicating system. And by the way, I don't even know if you could
create a self-replicating system that uses a factory. That's really dangerous. You know,
nature's way of self-replicating is so amazing. You know, it doesn't require anything. It just
needs the thing itself and resources, and it goes, right? If I said to you, you know what,
our goal is to build a factory that builds new factories, and it has an end-to-end
supply chain, it has to mine the resources, get the energy. I mean, that's really hard.
No one's doing that in the next 100 years. I've been extremely impressed by the efforts
of Elon Musk and Tesla to try to do exactly that, not from raw resource. Well, he actually,
I think, states the goal is to go from raw resource to the final car in one factory.
That's the main goal. Of course, it's not currently possible, but they're taking huge
leaps. Well, he's not the only one to do that. This has been a goal for many industries for a
long, long time. It's difficult to do. Well, what a lot of people do instead is they have like
a million suppliers, and they manage them all, they co-locate them,
and they tie the systems together. It's fundamentally a distributed system. I think
that also is not getting at the issue I was just talking about, which is self-replication.
I mean, self-replication means there's no entity involved other than the entity that's
replicating. Right? And so if there are humans in the loop, that's not really self-replicating.
Right? Unless somehow we're duped into that. But I also don't necessarily
agree with you, because you've kind of mentioned that AI will not say no to us.
They will. Yeah. So, like, I think it's a useful feature to build in, for it to sometimes say no.
I'm just trying to put myself in the mind of the engineers here.
Yeah. Well, I gave the example earlier, right? I gave the example of my car, right? My car
turns the wheel and applies the accelerator and the brake as I say, until it decides there's
something dangerous. Yes. And then it doesn't do that. Yeah. Now, that was something it didn't
decide to do. It's something we programmed into the car. And so good. It was a good idea, right?
Right? The question again, isn't like, if we create an intelligent system, will it ever
ignore our commands? Of course it will sometimes. Is it going to do it because it came up with its
own goals that serve its purposes and it doesn't care about our purposes? No, I don't think that's
going to happen. Okay. So let me ask you about these super intelligent cortical systems that we
engineer and us humans. Do you think with these entities operating out there in the world,
what does the future, most promising future look like? Is it us merging with them?
Or is it us? Like, how do we keep us humans around when we have increasingly intelligent
beings? Is it one of the dreams is to upload our minds in the digital space? So can we just
give our minds to these systems so they can operate on them? Is there some kind of more
interesting merger or is there more? In the third part of my book, I talked about all these scenarios
and let me just walk through them. Sure. The uploading the mind one. Yes. Extremely,
really difficult to do. Like, we have no idea how to do this even remotely right now.
So it would be a very long way away, but I make the argument you wouldn't like the result.
And you wouldn't be pleased with the result. It's really not what you think it's going to be.
Imagine I could upload your brain into a computer right now and now the computer
is sitting there going, hey, I'm over here. Great. Get rid of that old bio person. I don't
need him. You're still sitting here. Yeah. What are you going to do? No. No, that's not me. I'm
here, right? Yeah. Are you going to feel satisfied with that? But people imagine, look, I'm on my
death bed and I'm about to expire and I push the button and now I'm uploaded. But think about it
a little differently. And so I don't think it's going to be a thing, because by the time
we're able to do this, if ever, because you have to replicate the entire body, not just the brain,
it's really... I walk through the issues. It's really substantial. Do you have a sense of what
makes us us? Is there a shortcut, where you only save a certain part, the part that makes us truly us?
No, but I think that machine would feel like it's you too. Right. Right. You have two people,
just like I have a child, right? I have two daughters. They're independent people. I created
them. Well, partly. And just because they're somewhat like me, I don't feel I'm them
and they don't feel like they're me. So if you split apart, you have two people. We can
come back to what makes us us, and what consciousness is, if you want, we can talk about that. But we don't
have a remote consciousness. I'm not sitting there going, oh, I'm conscious of that system
over there, I'm inhabiting it. So let's stay on our topic. So one was uploading a brain.
Yeah. Ain't gonna happen in a hundred years, maybe a thousand, but I don't think people are
going to want to do it. Then the merging your mind with, you know, the Neuralink thing, right? Like,
again, really, really difficult. It's one thing to make progress to control a prosthetic arm. It's
another to have like a billion or several billion, you know, things and understanding what those
signals mean. Like it's the one thing to like, okay, I can learn to think some patterns to make
something happen. It's quite another thing to have a system, a computer, which actually knows
exactly what cells it's talking to and how it's talking to them and interacting in a way like
that. Very, very difficult. We're not getting anywhere closer to that.
Interesting. Can I ask a question here? So for me, what makes that merger very difficult
practically in the next 10, 20, 50 years is like literally the biology side of it, which is like,
it's just hard to do that kind of surgery in a safe way. But your intuition is even the machine
learning part of it, where the machine has to learn what the heck it's talking to. That's even
hard. I think it's even harder. It's easy to do when you're talking about hundreds of
signals. It's a totally different thing when you're talking about billions of signals.
You don't think it's a raw machine learning problem? You don't think it could be
learned? Well, I'm just saying, no, I think you'd have to have detailed knowledge. You'd have to
know exactly what the types of neurons you're connecting to. I mean, in the brain, there's
these, there are neurons that do all different types of things. It's not like a neural network.
It's a very complex organic system up here. We talked about the grid cells or the place cells.
You know, you have to know what kind of cells you're talking to and what they're doing and how
their timing works and all this stuff, which you can't do today. There's no way of doing that, right?
But I think, you're right that the biological aspect,
like who wants to have surgery and have this stuff inserted in their brain, that's a problem.
But say we solve that problem. I think the information coding aspect is much worse.
I think that's much worse. It's not like what they're doing today. Today, it's simple machine
learning stuff because you're doing simple things. But if you want to merge your brain,
like I'm thinking on the internet, I'm merged my brain with the machine and we're both doing,
that's a totally different issue. That's interesting. I tend to think, okay,
if you have a super clean signal from a bunch of neurons, even if at the start you don't know what those
neurons are, I think that problem is much easier than getting the clean signal.
Yeah. I think if you think about today's machine learning, that's what you would conclude.
I'm thinking about what's going on in the brain and I don't reach that conclusion.
So we'll have to see. But even then, I think it's kind of a sad future.
Like, you know, do I have to plug my brain into a computer? I'm still a biological
organism. I assume I'm still going to die. So what have I achieved? Right? You know,
what have I achieved? Oh, I disagree that we don't know what those are, but it seems like
there could be a lot of different applications. It's like virtual reality, it's to expand your
brain's capability, to, like, read Wikipedia. Yeah, but fine. But you're still a biological
organism. Yes. Yes. You know, you're still mortal. All right, so
what are you accomplishing? You're making your life in this short period of time better,
right? Just like having the internet made our life better. Yeah. Yeah. Okay. So
if I think about all the possible gains we can have here, that's a marginal one. It's an
individual, hey, I'm better, you know, I'm smarter. Mind you, I'm not against it. I just don't
think it's earth-changing. But the same is true of the internet. When each of us
individuals is smarter, we get a chance to then share our smartness. We get smarter and smarter
together, like, as a collective. This is kind of like the ant colony idea. Why don't I just create
an intelligent machine that doesn't have any of this biological nonsense? It has all the same stuff,
everything, except don't burden it with my brain. Yeah. Right. It has a brain. It is smart.
It's like my child, but it's much, much smarter than me. So I have a choice between doing some
implant, doing some hybrid weird, you know, biological thing that's bleeding and all these
problems and limited by my brain or creating a system which is super smart that I can talk to
that helps me understand the world that can read the internet, you know, read Wikipedia and talk
to me. I guess my, the open questions there are, what does the manifestation of super intelligence
look like? So like, what are we going to, you talked about, why do I want to merge with AI?
What's the actual marginal benefit here? If we have a super intelligent system,
how will it make our life better? So that's a great question, but let's break it into little
pieces. All right. On the one hand, it can make our life better in lots of simple ways. You mentioned,
like, a care robot or something that helps me do things, cooks, I don't know what it does,
right? Little things like that. We can have super, better, smarter cars. We can have, you know,
better agents, aides helping us in our work environment and things like that. To me, that's
like the easy stuff, the simple stuff in the beginning. And so in the same way that computers
made our lives better in many, many ways, we'll have those kinds of things.
To me, the really exciting thing about AI is sort of its transcendent quality in terms of
humanity. We're still biological organisms. We're still stuck here on earth. It's going to
be hard for us to live anywhere else. I don't think you and I are going to want to live on Mars
anytime soon. And we're flawed. You know, we may end up destroying ourselves. It's totally possible.
If not completely, we could destroy our civilization. You know, we do face the
fact that we have issues here, but we can create intelligent machines that can help us in various
ways. For example, one example I gave, and that sounds a little sci-fi, but I believe this.
If we really want to live on Mars, we'd have to have intelligent systems that go there
and build the habitat for us. Not humans. Humans are never going to do this. It's just too hard.
But could we have 1,000 or 10,000, you know, engineered workers up there doing this stuff,
building things, terraforming Mars? Sure. Maybe we can move to Mars. But then if we want to go
around the universe, should I send my children around the universe? Or should I send some
intelligent machine, which is like a child, that represents me and understands our needs
here on Earth that could travel through space? So it sort of, in some sense, intelligence allows
us to transcend our limitations of our biology. And don't think of it as a negative thing. It's
in some sense, my children transcend my biology, too, because they live beyond me.
And we impart, they represent me, and they also have their own knowledge, and I can impart
knowledge to them. So intelligent machines would be like that, too, but not limited like us.
But the question is, there's so many ways that transcendence can happen. And the merger with
AI and humans is one of those ways. So you said intelligent, basically beings or systems
propagating throughout the universe representing us humans.
They represent us humans in the sense they represent our knowledge and our history,
not us individually. Right, right. But I mean, the question is, is it just a database
with a really damn good model of the world? No, no, they're conscious, just like us.
Okay. But just different. They're different. Just like my children are different. They're like me,
but they're different. These are more different. I guess maybe I've already, I kind of, I take
a very broad view of our life here on Earth. I say, you know, why are we living here? Are we
just living because we live? Are we surviving because we can survive? Are we fighting just
because we want to just keep going? What's the point of it? Right. So to me, the point,
if I ask myself, what's the point of life is, what transcends that ephemeral sort of biological
experience is to me, this is my answer, is the acquisition of knowledge to understand more
about the universe and to explore. And that's partly to learn more, right? I don't view it as
a terrible thing if the ultimate outcome of humanity is we create systems that are intelligent,
that are our offspring, but they're not like us at all. And we stay here and live on Earth as long
as we can, which won't be forever, but as long as we can. And, but that would be a great thing
to do. It's not, it's not like a negative thing. Well, would you be okay then if the human
species vanishes, but our knowledge is preserved and keeps being expanded by intelligent systems?
I want our knowledge to be preserved and expanded. Yeah. Am I okay with humans dying? No, I don't
want that to happen. But if it does happen, what if we were sitting here and this is the
last two people on Earth who are saying, Lex, we blew it, it's all over, right? Wouldn't I feel
better if I knew that our knowledge was preserved, and that we had agents that knew about it,
that were, you know, out there, that left Earth? I would want that. It's better than not having that.
You know, I make the analogy of like, you know, the dinosaurs, the poor dinosaurs, they live for,
you know, tens of millions of years. They raised their kids. They, you know, they, they fought
to survive. They were hungry. They, they did everything we do. And then they're all gone.
Yeah. Like, you know, and, and if we didn't discover their bones, nobody would ever know
that they ever existed, right? Do we want to be like that? I don't want to be like that.
There's a sad aspect to it. And it kind of is jarring to think about that it's possible that
a human-like intelligent civilization has previously existed on Earth. The reason I say
this is, it is jarring to think that if they went extinct, we wouldn't be able
to find evidence of them. After a sufficient amount of time. After a sufficient amount of time,
of course. Basically, if we humans, human civilization, destroyed ourselves now,
then after a sufficient amount of time,
we'd find evidence of the dinosaurs, but we would not find evidence of us humans.
Yeah. That's, that's kind of an odd thing to think about. Although I'm not sure if we have enough
knowledge about species going back for billions of years, but we could, we could, we might be able
to eliminate that possibility. But it's an interesting question. Of course, this is a similar
question to, you know, there were lots of intelligent species throughout our galaxy
that have all disappeared. That's super sad. Exactly, that there may have been much
more intelligent alien civilizations in our galaxy that are no longer there. Yeah. You actually
talked about this, that humans might destroy ourselves. Yeah. And how we might preserve our
knowledge and advertise that knowledge to others. Advertise is a funny word to use.
From a PR perspective? There's no financial gain in this.
You know, like, from a tourism perspective, make it interesting. Can you
describe how you think about this problem? Well, there's a couple things. I broke it down into
two parts, actually three parts. One is, you know, there's a lot of things we know.
What if we ended, what if our civilization collapsed? Yeah.
I'm not talking tomorrow. It could be a thousand years from now. Like, you know, we
don't really know, but historically it would be likely at some point. Time flies when you're
having fun. Yeah. And then intelligent life evolved again on this planet.
Wouldn't they want to know a lot about us and what we knew? But they wouldn't be able to
ask us questions. So one very simple thing I said is, how would we archive what we know?
That was a very simple idea. I said, you know, it wouldn't be that hard to put a few satellites,
you know, going around the sun and we upload Wikipedia every day and that kind of thing.
So, you know, if we end up killing ourselves, well, it's up there and the next intelligent
species will find it and learn something. They would like that. They would appreciate that.
So that's one thing. The next thing I said is, well, what about
outside of our solar system? We have the SETI program, we're looking for these intelligent
signals from everybody. And if you do a little bit of math, which I did in the book, and you say,
well, what if technologically intelligent species, ones that are really able to do what we're
just starting to be able to do, only live for 10,000 years?
Well, the chances are we wouldn't be able to see any of them, because they would have all
disappeared by now. They lived their 10,000 years and now they're gone, and so we're not
going to find these signals being sent from these species.
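A rough back-of-envelope version of the argument above; the specific numbers are assumptions for illustration, not the figures from the book.

```python
# Hypothetical temporal-overlap estimate: if each technological civilization
# only transmits for 10,000 years at some random point in galactic history,
# very few are "on the air" at the moment we happen to be listening.
GALAXY_AGE_YEARS = 10e9            # rough window over which civilizations could arise
BROADCAST_LIFETIME_YEARS = 10_000  # how long a technological species transmits
N_CIVILIZATIONS = 100_000          # suppose this many ever arise in the galaxy

fraction_active_now = BROADCAST_LIFETIME_YEARS / GALAXY_AGE_YEARS
expected_active_now = N_CIVILIZATIONS * fraction_active_now

print(f"fraction of galactic history each one is transmitting: {fraction_active_now:.0e}")
print(f"expected number transmitting right now: {expected_active_now:.1f}")
# -> about 1e-06 and 0.1: even with 100,000 civilizations ever, the odds of
#    overlapping in time are poor, which is why a long-lived, passive signal
#    would matter.
```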
But I said, what kind of signal could you create that would last a million years or a billion years, that someone would say, damn it,
someone smart lived there? We know that would be a life-changing event for us, to figure that
out. Well, what we're looking for today in the SETI program isn't that. We're looking for very
coded signals in some sense. And so I asked myself, what would be a different type of signal one could
create? I've always thought about this throughout my life. And in the book, I gave one possible
suggestion, which was: we now detect planets going around other suns, other stars, excuse me.
And we do that by seeing the slight dimming of the light as the planets move in front of them.
That's how we detect planets elsewhere in our galaxy. What if we created something like that,
that just rotated around the sun, and it blocked out a little bit of light in a
particular pattern, so that someone would say, hey, that's not a planet, that is a sign that someone was once
there? You could say, what if it's beating out pi, you know, 3.14, whatever.
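A toy sketch of that idea: an orbiting occulter whose dimming pattern encodes the digits of pi in the spacing of its transits. The encoding scheme is purely an illustrative assumption, not a design from the book.

```python
# Hypothetical "artificial transit" signal: the gaps between dips in a star's
# light curve spell out digits of pi, something a natural planet (with a fixed
# orbital period) would not produce.
PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6]

def dimming_schedule(digits, base_period_days: float = 10.0):
    """Return the times (in days) at which the occulter blocks the star.
    Each gap is base_period * (digit + 1), so the sequence of gaps encodes
    the digits."""
    times, t = [], 0.0
    for d in digits:
        t += base_period_days * (d + 1)
        times.append(t)
    return times

if __name__ == "__main__":
    print(dimming_schedule(PI_DIGITS))
```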
From a distance, you can see it. From a distance, it's broadly broadcast, it takes no
continued activation on our part. This is the key, right? No one has to be
maintaining a running computer and supplying it with power. It just goes on. When we're gone, it continues.
And I argued that part of the SETI program should be looking for signals like that. And to
look for signals like that, you ought to figure out, how would we create a signal?
Like, what would we create that would be like that, that would persist for millions of years,
that would be broadcast broadly, that you could see from a distance, that was unequivocally from
an intelligent species? And so I gave that one example, because I don't know of any others,
actually. And then finally, ultimately our solar system will
die at some point in time, you know, how do we go beyond that? And I think it's possible,
if at all possible, we'll have to create intelligent machines that travel throughout the
solar system or throughout the galaxy. And I don't think that's going to be
humans. I don't think it's going to be biological organisms. So these are just things to think about,
you know, like, I don't want to be like the dinosaurs. I don't want it to just
be, okay, that was it, we're done, you know. Well, there is a kind of presumption
that we're going to live forever, and I think it is a bit sad to imagine that the message we
send, as you talked about, is that we were once here instead of we are here. Well, it could be
we are still here. But it's more of a, it's more of an insurance policy in case we're not here,
you know. Well, I don't know, but there's something I think about that we as humans don't
often think about. It's like, whenever I record a video, I've done this a
couple of times in my life, I've recorded a video for my future self, just for personal reasons,
just for fun. And it's always fascinating to think about that, preserving yourself for future
civilizations. For me, it was preserving myself for a future me, but
that's a little fun example of archival. Well, these podcasts are preserving you and I in a way,
yeah, for the future, hopefully, well after we're gone. But you don't often... We're sitting here
talking about this, and you're not thinking about the fact that you and I are going to die, and that there
will be somebody watching this, like, 10 years after we're no longer alive. You know, in some sense,
I do. I'm here because I want to talk about ideas. And these ideas transcend me and they
transcend this time on our planet. We're talking here about ideas that could be around
a thousand years from now or a million years from now. When I wrote my book, I had an audience
in mind. And one of the clearest audiences was people reading this 100 years from now.
Yes. I said to myself, how do I make this book relevant to someone reading this 100 years from
now? What would they want to know that we were thinking back then? What would make it
still an interesting book? I'm not sure I can achieve that, but that
was how I thought about it because these ideas, especially in the third part of the book, the
ones we were just talking about, you know, these crazy, it sounds like crazy ideas about, you know,
storing our knowledge and, and, you know, merging our brains or computers and sending,
you know, our machines out into space is not going to happen in my lifetime.
And they may not even happen in the next hundred years. They may not happen for
a thousand years. Who knows? But we have the unique opportunity right now, we, you, me,
and other people like this, to sort of at least propose the agenda that might impact the future
like that. It's a fascinating way to think about both writing and creating,
trying to create ideas, trying to create things that hold up in time. Yeah. You know,
understanding how the brain works, we're going to figure that out once. That's it. It's going to be
figured out once. And after that, that's the answer. And people will study that
thousands of years from now. We still, you know, venerate Newton and Einstein,
because ideas are exciting even well into the future, you know.
Well, the interesting thing is like big ideas, even if they're wrong, are still useful. Like,
yeah, especially if they're not completely wrong. Right, right, right.
Right. Newton's laws are not wrong. It's just that Einstein's are better.
So yeah, I mean, but Newton and Einstein were talking about physics. I wonder
if we'll ever achieve that kind of clarity in understanding complex systems, and
this particular manifestation of complex systems, which is the human brain.
I'm totally optimistic we can do that. I mean, we're making progress at it. I don't see any
reasons why we can't completely understand it. I mean, completely understand in the sense that, you know,
we don't really completely understand what all the molecules in this water bottle are doing,
but we have laws that sort of capture it pretty well. And so we'll have that kind of
understanding. I mean, it's not like you're going to have to know what every neuron in your brain
is doing. But enough to, first of all, to build it. And second of all, to do, you know, do what
physics does, which is like have concrete experiments where we can validate. This is
happening right now. This is not some future thing. You know, I'm very optimistic,
because I know about our work and what we're doing. We'll have to prove it to people. But
I consider myself a rational person. And, you know, until fairly recently, I wouldn't have said that.
But right now, where I'm sitting right now, I'm saying, you know, this is going to happen.
There's no big obstacles to it. We finally have a framework for understanding what's going on in
the cortex. And that's liberating. It's like, oh, it's happening. So I can't see why we wouldn't
be able to understand it. I just can't. Okay. So, I mean, on that topic, let me ask you to play
devil's advocate. Is it possible for you to imagine, looking a hundred years from now
and looking at your book, in which ways might your ideas be wrong? Oh, I worry about this all
the time. Yeah. It's still useful. Yeah. Yeah. I think there's, you know, I can best relate it
to like things I'm worried about right now. So we talked about this voting idea, right? It's
happening. There's no question it's happening. But there's enough things
I don't know about it that it might be working differently than I'm thinking about it, in terms of
what's voting, who's voting, you know, where the representations are. I talked about,
like, you have a thousand models of a coffee cup, like that. That could turn out to be wrong,
because maybe there are a thousand models that are sub-models, but not really a
single model of the coffee cup. These are all sort of on the edges,
things that I present as, oh, it's so simple and clean. Well, it's not. It's always going
to be more complex. And there's parts of the theory where I don't understand the complexity well.
So I think the idea that the brain is a distributed modeling system is not controversial at all,
right? That's not, that's well understood by many people. The question then is,
is each cortical column an independent modeling system? Right.
I could be wrong about that. I don't think so, but I worry about it.
But my intuition, not even thinking about why you could be wrong, is the same intuition I have
about any sort of physics, like string theory: that we as humans desire a clean
explanation. And a hundred years from now, intelligent systems might look back at us
and laugh at how we tried to get rid of the whole mess by having a simple explanation, when the reality
is it's way messier. And in fact, it's impossible to understand, you can only build it. It's like
this idea with complex systems and cellular automata: you can only launch the thing, you cannot
understand it. Yeah, I think that, you know, the history of science suggests that's not likely to
occur. The history of science suggests that as a theorist, and we're theorists, you look for simple
explanations, right? Fully knowing that whatever simple explanation you come up with
is not going to be completely correct. I mean, it can't be. There's just more complexity.
But that's the role of theorists play. They sort of, they give you a framework on which you now
can talk about a problem and figure out, okay, now we can start digging more details. The best
frameworks stick around while the details change. You know, again, the classic example is Newton
and Einstein, right? You know, Newton's theories are still used. They're still valuable. They're
still practical. They're not, like, wrong, they've just been refined. Yeah, but that's in physics.
It's not obvious, by the way, it's not obvious for physics either, that the universe should be
amenable to these simple theories. But so far it appears to be, as far as we can tell.
Yeah, as far as we can tell. But it's also an open question whether the brain
is amenable to such clean theories. Not just the brain, but intelligence.
Well, I don't know. I would take intelligence out of it. Just say, you know,
well, okay. The evidence we have suggests that the human brain is, A, at one and the same time
extremely messy and complex, but with some parts that are very regular and structured.
That's why we started with the neocortex. It's extremely regular in its structure.
Yeah. And unbelievably so. And then I mentioned earlier, the other thing is its universal
abilities. It is so flexible, it learns so many things. We haven't figured out what it can't
learn yet. We don't know, but it learns things that it was never
evolved to learn. So those give us hope. That's why I went into this field, because I said, you
know, this regular structure, it's doing this amazing number of things. There's got to be
some underlying principles that are common, and other scientists have come up with the same conclusions.
And so it's promising. And whether the theories play out exactly this way or not,
that is the role that theorists play. And so far, it's worked out well, even though maybe,
we don't understand all the laws of physics, but so far, it's been pretty damn useful. The
ones we have, our theories are pretty useful. You mentioned that we should not necessarily be,
at least to the degree that we are, worried about the existential risks of artificial intelligence
relative to human risks from human nature being existential risk.
What aspect of human nature worries you the most in terms of the survival of the human species?
I mean, I'm disappointed in humanity, in us humans. I mean, all of us. I'm one of them,
I'm disappointed in myself too. It's kind of a sad state. There's two things that disappoint me.
One is how it's difficult for us to separate our rational component of ourselves from our
evolutionary heritage, which is not always pretty. Rape is an evolutionarily good strategy
for reproduction. Murder can be at times too. Making other people miserable at times is a
good strategy for reproduction. And so now that we know that, and yet we have this sort of,
you and I can have this very rational discussion talking about intelligence and brains and life
and so on. It seems like it's so hard. It's just a big transition to get humans, all humans,
to make the transition from, like, let's pay no attention to all that ugly stuff over here,
let's just focus on the intellect. What's unique about humanity is our knowledge and our intellect.
But the fact that we're striving is in itself amazing, right? The fact that we're able to
overcome that part, and it seems like we are more and more becoming successful at overcoming that
part. That is the optimistic view, and I agree with you. But I worry about it. I'm not saying,
I'm worrying about it. I think that was your question. I still worry about it.
We could end tomorrow because some terrorists could get nuclear bombs and blow us all up.
The other thing that disappoints me, and I understand it, I guess you can't really be
disappointed, it's just a fact, is that we're so prone to false beliefs. We have a model in our head.
The things we can interact with directly, physical objects, people, that model is pretty good,
and we can test it all the time, right? I touch something, I look at it, I talk to you,
see if my model is correct. But so much of what we know is stuff I can't directly interact with.
I only know it because someone told me about it. And so we're inherently prone to having
false beliefs because if I'm told something, how am I going to know it's right or wrong?
Right? And so then we have the scientific process, which says we are inherently flawed.
So the only way we can get closer to the truth is by looking for
contrary evidence. Yeah. Like this conspiracy theory, this theory that scientists keep telling
me about, that the earth is round. As far as I can tell, when I look out, it looks pretty flat.
Yeah. So yeah, there is a tension there. But also,
I tend to believe that we haven't figured out most of this stuff, right? Most of nature around us
is a mystery. And so it... But does that worry you? I mean, it's like, oh, that's like a pleasure,
more to figure out, right? Yeah, that's exciting. But I'm saying like there's going to be a lot of
quote unquote, wrong ideas. I mean, I've been thinking a lot about engineering systems like
social networks and so on. And I've been worried about censorship and thinking through all that
kind of stuff because there's a lot of wrong ideas. There's a lot of dangerous ideas. But
then I also read history and see what happens when you censor ideas that are wrong. Now,
this could be small-scale censorship, like a young grad student who raises
their hand and says some crazy idea. A form of censorship could be, I shouldn't use the word
censorship, but, like, disincentivizing them: no, no, no, this is the way it's been done.
Yeah, you're a foolish kid, don't think that. Yeah, you're foolish. So in some sense,
those crazy ideas most of the time end up being wrong, but sometimes they end up being right.
I agree with you. So I don't like the word censorship. At the very end of the book,
I ended up with sort of a plea or a recommended force of action. And the best way I could,
I know how to deal with this issue that you bring up, is if everybody understood,
it's part of your upbringing life, something about how your brain works, that it builds a model of
the world, how it worked, how it basically builds that model of the world, and that the model is
not the real world, it's just a model. And it's never going to reflect the entire world, and it
can be wrong, and it's easy to be wrong. And here's all the ways you can get the wrong model in your
head, right? It's not to prescribe what's right or wrong, just understand that process. If we all
understood the process, and then we got together and you say, I disagree with you, Jeff, and I
say, Lex, I disagree with you, then at least we understand that we're both trying to model something.
We both have different information, which leads to our different models. And therefore,
I shouldn't hold it against you, and you shouldn't hold it against me. And we can at least agree,
well, what common ground can we look for to test our beliefs? As opposed to so much of how
we raise our kids on dogma, which is this is a fact, and this is a fact, and these people are
bad. And if everyone knew just to be skeptical of every belief, and why, and how their brains do
that, I think we might have a better world. Do you think the human mind is able to comprehend
reality? So you talk about this, creating models that are better and better. How close do you think
we get to reality? So the wildest idea is, it's like Donald Hoffman saying, we're very far away
from reality. Do you think we're getting close to reality? Well, I guess it depends on what you
define reality. We have a model of the world that's very useful for basic goals of survival,
and the things we want, pleasure and so on, right? So that's useful. I mean, it's really useful.
Oh, we can build planes, we can build computers, we can do these things, right?
I don't think, I don't know the answer to that question. I think that's part of the question
we're trying to figure out, right? Like, you know, obviously, if you end up with a theory of
everything, that really is a theory of everything, and all of a sudden, everything comes into play,
and there's no room for something else, then you might feel like we have a good model of the world.
Yeah, but if we have a theory of everything, and somehow, first of all, you'll never be able to
really conclusively say it's a theory of everything, but say somehow, we are very damn sure it's a
theory of everything, we understand what happened at the Big Bang, and how just the entirety of
the physical process, I'm still not sure that gives us an understanding of the next many layers
of the hierarchy of abstractions that form. Well, also, what if string theory turns out to be true?
And then you say, well, we have no model of what's going on in those other
dimensions that are wrapped in on each other, right? Or the multiverse, you know?
I honestly don't know how, for us, for human interaction, for ideas of intelligence,
how it helps us to understand that we're made up of vibrating strings that are
like 10 to the whatever times smaller than us. I don't know, you could probably build better
weapons or better rockets, but you're not going to be able to understand intelligence.
I guess maybe better computers. No, you won't be able to. I think it's just more pure knowledge.
You might lead to a better understanding of the beginning of the universe,
right? It might lead to a better understanding of, I don't know, I guess, I think the acquisition
of knowledge has always been one where you pursue it for its own pleasure, and you don't always know
what is going to make a difference. Yeah. You're pleasantly surprised by the weird things you find.
Do you think, for the neocortex in general, do you think there's a lot of innovation
to be done on the machine side? You know, you use the computer as a metaphor quite a bit.
Is there different types of computer that would help us build intelligence?
I mean, what are the physical manifestations of intelligent machines?
Yeah. Or is it? Oh, no, it's going to be totally crazy.
We have no idea how this is going to turn out yet. You can already see this.
Today, of course, we model these things on traditional computers, and now GPUs are really
popular with neural networks and so on. But there are companies coming up with fundamentally new
physical substrates that are just really cool. I don't know if they're going to work or not,
but I think there'll be decades of innovation here. Yeah. Totally.
Do you think the final thing will be messy, like our biology is messy? Or do you think
it's the old bird versus airplane question? Or do you think we could just
build airplanes that fly way better than birds, in the same way we could build
an electrical neocortex? Yeah. Yeah. Can I riff on the bird thing a bit?
Because I think it's interesting. People really misunderstand this. The Wright brothers,
the problem they were trying to solve was controlled flight, how to turn an airplane,
not how to propel an airplane. They weren't worried about that.
Interesting. Yeah. At that time, there were already wing shapes,
which they had from studying birds. There were already gliders that could carry people.
The problem was if you put a rudder on the back of a glider and you turn it,
the plane falls out of the sky. So the problem was how do you control flight?
And they studied birds. They actually had birds in captivity, they watched birds in wind
tunnels and in the wild, and they discovered the secret was that birds twist their wings
when they turn. And so that's what they did on the Wright brothers' flyer. They had these sticks
that you would use to twist the wing, and that was their innovation, not the propeller.
And today, airplanes still twist their wings. We don't twist the entire wing. We just twist the tail
end of it, the flaps, which is the same thing. So today's airplanes fly on the same principles
as birds, which we observed. So everyone gets that analogy wrong. But let's step back from that.
Once you understand the principles of flight, you can choose how to implement them. No one's
going to use bones and feathers and muscles, but planes do have wings, and we don't flap them.
We have propellers. So when we understand the principles of computation that go into modeling the world
in a brain, when we understand those principles clearly, we have choices of how to implement them,
and some of them will be biological-like and some won't. But I do think there's going to be a huge
amount of innovation here. Just think about the innovation with the computer: they had to
invent the transistor, the silicon chip, software, memory systems. I mean,
all the things they had to do. It's going to be similar.
Well, it's interesting that the effectiveness of deep learning for specific
tasks is driving a lot of innovation in the hardware, which may have the effect of actually
allowing us to discover intelligent systems that operate very differently from, or are much
bigger than, deep learning. Yeah, interesting. So ultimately, it's good to have an application
that's making our life better now because the capitalist process, if you can make money,
that works. I mean, the other way, Neil deGrasse Tyson writes about this, the other way we
fund science, of course, is through military conquest. It's interesting what we're doing
in this regard. So we have a series of these biological principles,
and we can see how to build these intelligent machines, but we've decided to apply some of
these principles to today's machine learning techniques. One principle we didn't talk about
is sparsity: in the brain, only a small fraction of neurons are active at any point in time.
The activity is sparse and the connectivity is sparse, and that's different than deep learning networks.
So we've already shown that we can speed up existing deep learning networks
anywhere from a factor of 10 to a factor of 100, I mean, literally 100, and make them more robust at
the same time. So this is commercially very, very valuable. And so, you know, if we can prove this
actually in the largest systems that are commercially applied today, there's a big commercial
desire to do this. Well, sparsity is something that doesn't run really well on existing hardware.
It doesn't run well on GPUs or on CPUs. And so that would be a way of sort of
bringing more brain principles into the existing system on a commercially valuable basis.
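As a rough sketch of what that sparsity principle can look like in code (this is only an illustration, not Numenta's implementation; the function name and the 5% sparsity level are assumptions):

```python
# Illustrative sketch only (not Numenta's code): enforcing sparse activations
# with a k-winners-take-all step, in the spirit of the sparsity principle
# described above. The function name and the 5% sparsity level are assumptions.
import numpy as np

def k_winners_take_all(activations, sparsity=0.05):
    """Keep only the top-k activations in each row; zero out the rest."""
    k = max(1, int(sparsity * activations.shape[-1]))
    # The value at index -k after partitioning is the k-th largest per row.
    thresholds = np.partition(activations, -k, axis=-1)[..., -k][..., None]
    return np.where(activations >= thresholds, activations, 0.0)

# Example: a dense layer output becomes roughly 95% zeros.
rng = np.random.default_rng(0)
dense_output = rng.normal(size=(4, 128))            # batch of 4, 128 units each
sparse_output = k_winners_take_all(dense_output)    # about 6 nonzero units per row
print((sparse_output != 0).mean())                  # prints roughly 0.05
```

The 10x to 100x speedups Hawkins mentions would come from hardware and kernels that actually exploit zeros like these; a NumPy sketch such as this only shows the activation pattern, not the speed benefit.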
Another thing we think we can do is use these dendrite models.
I talked earlier about prediction occurring inside the neuron. That basic property
can be applied to existing neural networks and allow them to learn continuously, which is something
they don't do today. And so the dendritic spikes that you were talking about? Yeah. Well, we wouldn't
model the spikes, but the idea is that today's neural networks use point neurons, a very simple model
of a neuron. By adding dendrites to them, just one more level of complexity that's in biological
systems, you can solve problems in continuous learning and rapid learning. So we're trying to,
and we'll see if we can do it, bring the existing field of machine learning
commercially along with us. You brought up this idea of keeping it paying for itself commercially
as we move towards the ultimate goal of a true AI system. Even small innovations on
neural networks are really, really exciting. Yeah. It seems like such a trivial model of the
brain, and applying different insights, just even, like you said, continuous learning, or
making it more asynchronous, or maybe making it more dynamic, or incentivizing sparsity somehow.
Or more robust, even just more robust. And making it somehow much better.
Yeah. Well, if you can make things 100 times faster,
then there's plenty of incentive. That's true. People are spending millions of dollars just
training some of these networks now, these transformer networks.
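To make the dendrite idea from a few exchanges back a bit more concrete, here is a minimal sketch of a point neuron augmented with dendritic segments, where a context signal gates each unit. This is an illustrative approximation only, not Numenta's model; the class and function names, shapes, and the sigmoid gating are assumptions.

```python
# Illustrative sketch only (not Numenta's model): a "point neuron plus
# dendritic segments" layer, loosely inspired by the idea of adding dendrites
# to point neurons. Each output unit owns several dendritic segments; the
# segment that best matches a context vector gates that unit, so different
# contexts (e.g., tasks) can route activity through different subnetworks.
# All names, shapes, and the sigmoid gating are assumptions for illustration.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DendriticLayer:
    def __init__(self, in_dim, out_dim, context_dim, n_segments=8, seed=0):
        rng = np.random.default_rng(seed)
        # Ordinary feedforward weights (the "point neuron" part).
        self.w = rng.normal(0.0, 0.1, size=(in_dim, out_dim))
        # Dendritic segments: one small weight vector per segment per unit.
        self.segments = rng.normal(0.0, 0.1, size=(out_dim, n_segments, context_dim))

    def forward(self, x, context):
        feedforward = x @ self.w                               # (batch, out_dim)
        # Each segment scores its match against the context vector.
        seg_scores = np.einsum("usc,bc->bus", self.segments, context)
        best = seg_scores.max(axis=-1)                         # strongest segment per unit
        return feedforward * sigmoid(best)                     # dendrite-gated output

layer = DendriticLayer(in_dim=16, out_dim=32, context_dim=8)
x = np.random.default_rng(1).normal(size=(4, 16))    # input batch
ctx = np.random.default_rng(2).normal(size=(4, 8))   # per-example context
print(layer.forward(x, ctx).shape)                   # (4, 32)
```

One way such a mechanism is hypothesized to help with continuous learning: a new task, presented with a new context vector, recruits different segments and therefore a different sparse subnetwork, so learning it interferes less with the weights earlier tasks rely on.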
Let me ask you a big question. For young people listening to this today, in high school
and college, what advice would you give them in terms of which career path to take, and
maybe just about life in general? Well, in my case, I didn't start life with any kind of goals.
When I was going to college, I was like, oh, what do I study? Well, maybe I'll do
some electrical engineering stuff. I wasn't like, today you see some of these young kids
are so motivated, I'm going to change the world. I was like, whatever. But then I did fall in love
with something besides my wife. But I fell in love with this, like, oh, my God, it would be so cool
to understand how the brain works. And then I said to myself, that's the most important thing I
could work on. I can't imagine anything more important, because if you understand
how the brain works, you could build intelligent machines, and they could figure out all the other big
questions in the world. And then I said, but I want to understand how I work. So I fell in
love with this idea and I became passionate about it. And this is a trope, people say this,
but it's true: because I was passionate about it, I was able to put up with so much crap.
People said, you can't do this. I was a graduate student at Berkeley
when they said, you can't study this problem. No one can solve this, or you can't get funded for
it. Then I went into mobile computing, and people said, you can't do that. You
can't build a cell phone. But all along, I kept being motivated because I wanted to work on this
problem. I said, I want to understand how the brain works. I told myself, I've got one lifetime.
I'm going to figure it out, do as best I can. So by having that, because as you point out, Lex,
it's really hard to do these things. There's so many downers along the way.
So many obstacles get in your way. I'm sitting here happy all the time, but trust me,
it's not always like that. That, I guess, is the thing: the passion is a prerequisite
for surviving the whole thing. Yeah, I think so. I think that's right. And so I don't want to say to
someone, you need to find a passion and do it. No, maybe you don't. But if you do find something
you're passionate about, then you can follow it as far as your passion will let you put up with
it. Do you remember how you found it, how the spark happened? Why, specifically for me? Yeah,
because you said it's such an interesting... so almost like later in life. By later,
I mean not when you were five. Yeah. You didn't really know, and then all of a sudden,
you fell in love with it. Yeah, yeah. There were two separate events that compounded one another.
One, when I was probably a teenager, it might have been 17 or 18, I made a list of the most
interesting problems I could think of. First was, why does the universe exist? Seems like
not existing is more likely. Yeah. The second one was, well, given that it exists, why does it behave the
way it does? You know, the laws of physics, why is E equal to MC squared and not MC cubed? You know,
that's an interesting question. I don't know. The third one was, what's the origin of life?
And the fourth one was what's intelligence? And I stopped there. I said, well, that's probably
the most interesting one. And I put that aside as a teenager. But then when I was 22,
and I was reading the, no, excuse me, it was 1979, so at that time I was 22,
I was reading the September issue of Scientific
American, which was all about the brain. And the final essay was by Francis Crick,
of DNA fame, who had by then turned his interest to studying the brain. And he said, you know,
there's something wrong here. He says, we've got all this data, this is 1979,
all these facts about the brain, tons and tons of facts about the brain. Do we need more facts?
Or do we just need to think about a way of rearranging the facts we have? Maybe we're just
not thinking about the problem correctly. You know, because he says, this shouldn't be like this,
you know? So I read that and I said, wow, I said, I don't have to become like an experimental
neuroscientist. I could just look at all those facts and try to become a theoretician and try to
figure it out. And I said, that, I felt like it was something I would be good at. I said,
I wouldn't be a good experimentalist. I don't have the patience for it, but I'm a good thinker.
And I love puzzles. And this is like the biggest puzzle in the world. This is the biggest puzzle
of all time. And I had all the puzzle pieces in front of me. Damn, that was exciting.
And there's something, obviously you can't quite explain it, it just kind of
sparked this passion. And I've had that a few times in my life, just something,
yeah, just like you, it grabs you. Yeah. I thought it was something that was both important
and that I could make a contribution to. Yeah. And so all of a sudden it felt like,
oh, it gave me purpose in life. Yeah. You know? I honestly don't think it has to be as big as
one of those four questions. No, no. I think you can find those things in the smallest. Oh,
absolutely. I'm with David Foster Wallace, who said the key to life is to be unborable.
I think it's very possible to find that intensity of joy in the smallest thing.
Absolutely. I'm just, you asked me my story. Yeah. Yeah. No, but I'm actually speaking to
the audience. Yeah. It doesn't have to be those four. You happen to get excited by one of the
bigger questions in the universe. But even the smallest things: watching the Olympics now,
just giving your life over to the study and the mastery of a particular sport
is fascinating. And if it sparks joy and passion, you're able to, in the case of the Olympics,
basically suffer for like a couple of decades to achieve it. I mean, you can find joy and passion
just being a parent. I mean, yeah. Yeah. The parenting one is funny. So I always, not always,
but for a long time, wanted to have kids and get married and stuff. And it especially has to do with the
fact that I've seen a lot of people that I respect get a whole other level of joy from kids.
And at first, you know, your thinking is, well, I don't have enough time in the
day, right? If I have this passion, which is true. But like, if I want to solve intelligence,
how's this kid situation going to help me? But then you realize that,
you know, like you said, the things that sparks joy, and it's very possible that kids
can provide even a greater or deeper, more meaningful joy than those bigger questions
when they enrich each other. And that seemed like, obviously, when I was younger, a
counterintuitive notion, because there's only so many hours in the day. But then life is
finite and you have to pick the things that give you joy. Yeah. But you also understand you can
be patient too. I mean, it's finite, but we do have, you know, whatever, 50 years or something.
It's also long. Yeah. So in my case, you know, I had to give up on my dream of
neuroscience, because I was a graduate student at Berkeley and they told me I couldn't do this and
I couldn't get funded. And so, you know, I went back into the computing
industry for a number of years. I thought it would be four, but it turned out to be more.
But I said, I'll come back. You know, I'm definitely going to come back.
I know I'm going to do this computer stuff for a while, but I'm definitely coming back. Everyone
knows that. And it's like raising kids. Well, yeah, you still have to spend a lot of time
with your kids. It's fun, enjoyable. But that doesn't mean you have to give up on other dreams.
It just means that you have to wait a week or two to work on that next idea.
You talked about the darker, the disappointing sides of human nature that we're
hoping to overcome so that we don't destroy ourselves. I tend to put a lot of value in
the broad, general concept of love, of the human capacity for compassion towards each other,
of just kindness, of that longing for human-to-human connection.
It connects back to our initial discussion. I tend to see a lot of value in this collective
intelligence aspect. I think some of the magic of human civilization happens when we're together:
a party is not as fun when you're alone. I totally agree with you on these issues.
Do you think from a Neocortex perspective, what role does love play in the human condition?
Well, those are two separate things. From the Neocortex point of view, I don't think it
impacts our thinking about the Neocortex. From a human condition
point of view, I think it's core. I mean, we get so much pleasure out of loving people and helping
people. So, you know, I'll chalk it up to old brain stuff, and maybe we can throw it
onto the bus of evolution if you want. That's fine. It doesn't impact how I think
about how we model the world. But from a humanity point of view, I think it's essential.
Well, I tend to give it to the new brain. And I also tend to think that some aspects of that
need to be engineered into AI systems, both in their ability to have compassion for other humans
and in their ability to maximize love in the world between humans. So I'm thinking more about
social networks, so whenever there's a deep integration between AI systems and humans,
or specific applications where it's AI and humans. I think that's something that's
often not talked about in terms of the metrics you try to maximize,
like which metric to maximize in a system. It seems like one of the most
powerful things in societies is the capacity to love.
It's fascinating. I think it's a great way of thinking about it. I have been thinking more
of these fundamental mechanisms in the brain as opposed to the social interaction between or
the interaction between humans and AI systems in the future, which is, and I think if you think
about that, you're absolutely right. But that's a complex system. I can have intelligent systems
that don't have that component, but they're not interacting with people. They're just running
something or building a building someplace or something. I don't know. But if you think
about interacting with humans, yeah, but it has to be engineered in there. I don't think it's going
to appear on its own. That's a good question. In terms of, from a reinforcement learning perspective,
whether the darker sides of human nature or the better angels of our nature win out,
statistically speaking, I don't know. I tend to be optimistic and hope that love wins out in the
end. You've done a lot of incredible stuff. Your book is driving towards this fourth question that
you started with on the nature of intelligence. What do you hope your legacy is for people reading
this a hundred years from now? How do you hope they remember your work? How do you hope they remember
this book? Well, I think as an entrepreneur or a scientist or any human who's trying to accomplish
some things, I have a view that really all you can do is accelerate the inevitable. It's like,
if we didn't figure out, if we didn't study the brain, someone else would study the brain. If
Elon didn't make electric cars, someone else would do it eventually. And if Thomas Edison
hadn't invented the light bulb, we wouldn't be using candles today. So what you can do as an individual
is accelerate something that's beneficial and make it happen sooner than it would have otherwise. That's
really it. That's all you can do. You can't create a new reality that wasn't going to happen.
So from that perspective, I would hope that our work, not just me, but our work in general,
people would look back and said, hey, they really helped make this better future happen sooner.
They helped us understand the nature of false beliefs sooner than we might have.
They made it so that now we're so happy that we have these intelligent machines doing these things,
helping us, that maybe solved the climate change problem, and they made it happen sooner.
So I think that's the best I would hope for. Some would say those guys just moved the needle
forward a little bit in time. Well, it feels like the progress of human civilization is not a
single line; there are a lot of trajectories. And if you have individuals that accelerate
towards one direction, that helps steer human civilization. So I think, over a long stretch
of time, all trajectories will be traveled. But I think it's nice for this particular
civilization on Earth to travel down one of the better ones. Yeah. Well, I think you're right. We have
to take the whole period of World War II, Nazism, or something like that. Well, that was a bad
sidestep, right? We were stuck with that for a while. But there is the optimistic view about life
that ultimately it does converge in a positive way. It progresses ultimately,
even if we have years of darkness. So yeah. So I think you could say that's accelerating
the positive. It could also mean eliminating some bad missteps along the way too. But I'm
an optimist in that way. Despite us talking about the end of civilization, I think we're
going to live for a long time. I hope we are. I think our society in the future is going to be
better. We're going to have less discord. We're going to have less people killing each other.
We'll learn to live in some sort of way that's compatible with the carrying capacity of the
earth. I'm optimistic these things will happen. And all we can do is try to get there sooner.
And at the very least, if we do destroy ourselves, we'll have a few satellites that will tell alien
civilizations that we were once here. Or maybe for future inhabitants of Earth. Imagine the
Planet of the Apes scenario: we kill ourselves, and a million years from now or a billion years from
now, there's another species on the planet, curious about the creatures who were once here. Jeff,
thank you so much for your work. And thank you so much for talking to me once again.
Well, it's great. I love what you do. I love your podcast. You have all those interesting people,
me aside. So it's a real service you do, I think, in a broader sense, for humanity.
Thanks, Jeff. All right. It's a pleasure. Thanks for listening to this conversation with
Jeff Hawkins. And thank you to Code Academy, Bio Optimizers, ExpressVPN, A Sleep, and Blinkist.
Check them out in the description to support this podcast. And now let me leave you with
some words from Albert Camus. An intellectual is someone whose mind watches itself. I like this
because I'm happy to be both halves, the watcher and the watched. Can they be brought together?
This is a practical question we must try to answer. Thank you for listening. I hope to see you next time.