
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.



As part of MIT course 6.S099 on artificial general intelligence, I got a chance to sit
down with Christof Koch, who is one of the seminal figures in neurobiology, neuroscience,
and generally in the study of consciousness.
He is the president and chief scientific officer of the Allen Institute for Brain Science
in Seattle.
From 1986 to 2013, he was a professor at Caltech.
Before that, he was at MIT. He is extremely well cited, with over 100,000 citations.
His research, his writing, and his ideas have had a big impact on the scientific community
and the general public in the way we think about consciousness and the way we see ourselves
as human beings.
He's the author of several books: The Quest for Consciousness: A Neurobiological Approach,
and, more recently, Consciousness: Confessions of a Romantic Reductionist.
If you enjoy this conversation, this course, subscribe, click the little bell icon to make
sure you never miss a video, and in the comments, leave suggestions for any people you'd like
to see be part of the course or any ideas that you would like us to explore.
Thanks very much and I hope you enjoy.
Okay, before we delve into the beautiful mysteries of consciousness, let's zoom out a little
bit and let me ask, do you think there's intelligent life out there in the universe?
Yes, I do believe so.
We have no evidence of it, but I think the probabilities are overwhelming in favor of
it.
Given the universe, we have 10 to the 11 galaxies, and each galaxy has between 10 to the 11
and 10 to the 12 stars, and we know most stars have one or more planets.
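The order-of-magnitude arithmetic behind that claim is easy to make explicit. A minimal sketch using only the round figures quoted above (these exponents are the speaker's rough numbers, not precise astronomy):

```python
# Rough count of stars in the observable universe, using the round
# figures quoted above (assumptions, not precise astronomy).
galaxies = 10**11            # ~10^11 galaxies
stars_per_galaxy = 10**11    # low end of the 10^11 to 10^12 range

stars = galaxies * stars_per_galaxy
print(f"{stars:.0e} stars")
```

With most stars hosting one or more planets, even the low end of these figures gives on the order of 10^22 candidate planetary systems.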
So how does that make you feel?
It still makes me feel special because I have experiences.
I feel the world, I experience the world and independent of whether there are other creatures
out there, I still feel the world and I have access to this world in this very strange,
compelling way and that's the core of human existence.
Now, you said human.
Do you think if those intelligent creatures are out there, do you think they experience
their world?
Yes, if they are a product of natural evolution, they would also have to experience their
own world.
Consciousness isn't just human, you're right; it's much wider, it may be spread across
all of biology.
The only thing that we have special is we can talk about it.
Of course, not all people can talk about it.
Babies and little children can't talk about it.
Patients who have a stroke, let's say in the left inferior frontal gyrus, can't talk about
it, but most normal adult people can talk about it, and so we think that makes us special
compared to little monkeys or dogs or cats or mice or all the other creatures that we
share the planet with, but all the evidence seems to suggest that they too experience
the world and so it's overwhelmingly likely that aliens would also experience their world.
Of course, differently, because they have a different sensorium; they have different sensors,
they have a very different environment. But I would strongly suppose that they also have
experiences.
With, you know, pain and pleasure, and seeing in some sort of spectrum, and hearing, and
all the other senses.
Of course, their language, if they have one, would be different so we might not be able
to understand their poetry about the experiences that they have.
That's correct.
So in a talk, in a video, I've heard you mention Saputzel, a Dachshund that you grew up
with; it was part of your family when you were young.
First of all, you're technically a Midwestern boy, you just –
Technically.
But after that, you traveled around a bit, hence a little bit of the accent.
You talked about Saputzel, the Dachshund, having these elements of humanness, of consciousness
that you discovered.
So I just wanted to ask, can you look back in your childhood and remember when was the
first time you realized you, yourself, sort of from a third-person perspective, are a
conscious being, this idea of stepping outside yourself and seeing, there's something special
going on here in my brain.
I can't really, actually... it's a good question. I'm not sure I recall a discrete
moment.
I mean, you take it for granted because that's the only world you know, right?
The only world I know, you know, is the world of seeing and hearing voices, touching, and
all the other things.
So it's only much later, early in my undergraduate days, when I enrolled in physics and
in philosophy, that I really thought about it and thought, well, this is really fundamentally
very, very mysterious, and there's nothing really in physics right now that explains this
transition from the physics of the brain to feelings.
Where do the feelings come in?
So you can look at the foundational equations of quantum mechanics and general relativity,
you can look at the periodic table of the elements, you can look at the endless ATGC chatter
in our genes, and nowhere is consciousness; yet I wake up every morning to a world where
I have experiences.
And so that's the heart of the ancient mind-body problem.
How do experiences get into the world?
So what is consciousness?
Experience.
So consciousness is any experience.
Some people call it subjective feelings, some people call it phenomenology, some people
call it qualia, like the philosophers do, but they all denote the same thing.
It feels like something, in the famous words of the philosopher Thomas Nagel.
It feels like something to be a bat, or to be, you know, an American, or to be angry, or
to be sad, or to be in love, or to have pain.
And that is what experience is, any possible experience.
Could be as mundane as just sitting in a chair, could be as exalted as, you know, having a
mystical moment, you know, in deep meditation, those are just different forms of experiences.
Experience.
So if you were to sit down with maybe the next, skip a couple generations of IBM Watson,
something that won Jeopardy.
What is the gap, I guess, the question is, between Watson, that might be much smarter
than you, than us, than any human alive, but may not have experience.
What is the gap?
Well, so that's a big, big question.
That's occupied people certainly for the last 50 years, since the birth of computers.
That's a question Alan Turing tried to answer, and of course he did it in this indirect way,
by proposing a test, an operational test.
But that's not really... you know, he tried to get at, what does it mean for
a person to think?
And then he had this test, right? You lock them away, and then you have a communication
with them, and then you try to guess after a while whether that is a person or whether
it's a computer system.
There's no question that now, or very soon, you know, Alexa or Siri or Google Now will
pass this test, right?
And you can game it, but ultimately, certainly in your generation, there will be
machines that will speak with complete poise, that will remember everything you ever said,
that will remember every email you ever wrote... like Samantha, remember, in the movie
Her?
Yeah.
There's no question it's going to happen.
But of course, the key question is, does it feel like anything to be Samantha in the movie
Her?
Or does it feel like anything to be Watson?
And there one has to think very, very carefully: there are two different concepts here that
we commingle.
There is the concept of intelligence, natural or artificial, and there is the concept of
consciousness, of experience, natural or artificial; those are very, very different things.
Now, historically, we associate consciousness with intelligence. Why?
Because we live in a world, leaving aside computers, of natural selection, where we are
surrounded by creatures, either our own kin that are less or more intelligent, or, going
across species, some more adapted to a particular environment, others less adapted, whether
it's a whale or a dog, or you talk about a paramecium or a little worm, all right.
And we see that the complexity of the nervous system goes from one cell, to specialized
nerve cells, to a worm that has 30% of its cells as nerve cells, to a creature like us,
or like a blue whale, that has 100 billion or even more nerve cells.
And so, based on behavioral evidence, and based on the underlying neuroscience, we believe
that as these creatures become more complex, they are better adapted to their particular
ecological niche, and they become more conscious, partly because their brain grows.
And we believe consciousness sits in the brain, unlike what ancient people thought; almost
every culture thought that consciousness, with intelligence, has to do with your heart.
And you still see that today: you say, honey, I love you with all my heart.
But what you should actually say is, you know, honey, I love you with all my lateral
hypothalamus.
And for Valentine's Day, you should give your sweetheart, you know, a hypothalamic
piece of chocolate, not a heart-shaped chocolate, right?
Anyway, so we still have this language, but now we believe it's the brain.
And so we see brains of different complexity, and we think, well, they have different levels
of consciousness, they're capable of different experiences.
But now we confront a world where we're beginning to engineer intelligence, and it's
radically unclear whether the intelligence we're engineering has anything to do with
consciousness, and whether it can experience anything.
Because fundamentally, what's the difference?
Intelligence is about function.
Intelligence no matter exactly how you define it, sort of adaptation to new environments,
being able to learn and quickly understand, you know, the setup of this and what's going
on and who are the actors and what's going to happen next, that's all about function.
Consciousness is not about function.
Consciousness is about being.
It's in some sense much more fundamental.
You can see this in several cases.
You can see it, for instance, in the clinic.
When you're dealing with patients who, let's say, had a stroke, or were in a traffic
accident, et cetera, they're pretty much immobile.
Terri Schiavo, you may have heard of her; she was a person here in the 90s, in Florida,
whose heart stood still.
She was resuscitated.
Then, for the next 14 years, she was in what's called a vegetative state.
There are thousands of people in a vegetative state, so, you know, they're like
this.
Occasionally, they open their eyes for two, three, four, five, six, eight hours, and then
close their eyes.
They have sleep-wake cycles.
Occasionally, they have behaviors, you know, but there's no way that you can establish
a lawful relationship between what you say, or the doctor says, or the mom says, and what
the patient does.
So there isn't any behavior, yet in some of these people, there is still experience.
You can design and build brain machine interfaces where you can see they still experience something.
Of course, there are also these cases of the locked-in state; there's this famous book
called The Diving Bell and the Butterfly, where you had a French editor who had a stroke
in the brainstem, unable to move except for vertical eye movements.
He could just move his eyes up and down, and he dictated an entire book.
And some people even lose this at the end.
All the evidence seems to suggest that they're still in there.
So in this case, you have no behavior, you have consciousness.
Second case is tonight, like all of us, you're going to go to sleep, close your eyes, you
go to sleep, you will wake up inside your sleeping body, and you will have conscious
experiences.
They are different from everyday experience.
You might fly, you might not be surprised that you're flying.
You might meet a long-dead pet, childhood dog, and you're not surprised that you're
meeting them.
But you have conscious experience of love, of hate, they can be very emotional.
Your brain during this stage, typically REM sleep, sends an active signal to your motor
neurons to paralyze you.
It's called atonia.
Because if you don't have that, like some patients, what do you do?
You act out your dreams.
You get, for example, REM behavior disorder, which is bad juju to get.
Third case is pure experience.
So I recently had this, what some people call a mystical experience.
I went to Singapore and went into a flotation tank.
So this is a big tub filled with water at body temperature, with Epsom salt in it.
You strip completely naked, you lie inside of it, you close the lid.
Complete darkness, soundproof.
So very quickly, you become bodyless because you're floating and you're naked.
You have no rings, no watch, no nothing.
You don't feel your body anymore.
There's no sound: soundless.
There's no photon: sightless.
And timeless: because after a while... early on, you actually hear your heart, but then
you adapt to that, and then the passage of time ceases.
And if you've trained yourself in meditation not to think... early on, you think a lot.
It's a little bit spooky.
You feel somewhat uncomfortable, or you think, well, I'm going to get bored.
But if you try not to think actively, you become mindless.
So there you are, bodyless, timeless, soundless, sightless, mindless.
But you're in a conscious experience.
You're not asleep.
You're not asleep.
You are a being of pure... you're pure being.
There isn't any function.
You aren't doing any computation.
You're not remembering.
You're not projecting.
You're not planning.
Yet you are fully conscious.
You're fully conscious.
There's something going on there.
It could be just a side effect.
So what is the...
You mean epiphenomena.
So what's the...
You mean the side effect.
Meaning, what is the function of you being able to lie in this sensory deprivation
tank and still have a conscious experience?
Evolutionary?
Obviously, we didn't evolve with flotation tanks in our environment.
So biology is notoriously bad at asking why questions, teleological questions.
Why do we have two eyes?
Why don't we have four eyes, which would be good, like some creatures, or three eyes, or
something?
Well, no.
There's probably...
There is a function to that.
But it's...
We're not very good at answering those questions.
We can speculate.
And, you know, biology, or science, is very good at mechanistic questions.
But why is there charge in the universe, right?
We find ourselves in a universe where there are positive and negative charges.
Why?
Why does quantum mechanics hold?
Why doesn't some other theory hold?
Quantum mechanics holds in our universe, and it's very unclear why.
So teleological questions, why questions, are difficult to answer.
Obviously, there's some relationship between complexity, brain processing power, and
consciousness.
But in these cases, in the three examples I gave, one is an everyday experience, at night;
the other one is trauma; and the third one, in principle, everybody can have, these sorts
of mystical experiences.
In all of them, you have a dissociation of function, of intelligence, from consciousness.
You caught me asking a why question.
Let me ask a question that's not a why question.
You're giving a talk later today on the Turing test for intelligence and consciousness drawing
lines between the two.
So is there a scientific way to say there's consciousness present in this entity or not?
And to anticipate your answer, because you will also...
There's a neurobiological answer.
So we can test the human brain.
But if you take a machine brain, which we don't yet know how to test, how would you even
begin to approach a test of whether there's consciousness present in this thing?
Okay, that's a really good question.
So let me take in two steps.
So as you point out for humans, let's just stick with humans, there's now a test called
the Zap and Zip.
It's a procedure where you ping the brain using transcranial magnetic stimulation.
You look at the electrical reverberations, essentially using EEG.
And then you can measure the complexity of this brain response.
And you can do this in awake people, and in normal people while they're asleep.
You can do it in awake people and then anesthetize them.
You can do it in patients.
And it has had 100% accuracy so far: in all those cases where it's clear that the patient
or person is either conscious or unconscious, the complexity is correspondingly either
high or low.
And then you can adapt these techniques to similar creatures, like monkeys and dogs and
mice, that have very similar brains.
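As an aside for readers who want "the complexity of this brain response" made concrete: the published form of zap-and-zip, the perturbational complexity index, compresses the binarized EEG response, with Lempel-Ziv complexity as the core ingredient. A minimal, hedged sketch of that ingredient on toy binary strings (the strings here are invented for illustration, not real EEG data):

```python
def lempel_ziv_complexity(s: str) -> int:
    """Count the distinct phrases in an LZ76-style parsing of a binary
    string: each new phrase is the shortest substring not seen before."""
    phrases = set()
    i = 0
    while i < len(s):
        j = i + 1
        # extend the current phrase until it is novel (or the string ends)
        while s[i:j] in phrases and j <= len(s):
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

# Toy "EEG responses": a regular, stereotyped signal compresses well
# (few phrases), an irregular one does not (more phrases).
regular = "01" * 16
irregular = "0110100110010110110010100110100101"
print(lempel_ziv_complexity(regular), lempel_ziv_complexity(irregular))
```

A stereotyped response parses into few distinct phrases (low complexity, as in deep sleep or anesthesia), while a differentiated response parses into many (higher complexity, as in wakefulness); the real index also normalizes for signal length and entropy.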
Now, of course, as you point out, that may not help you here, because a machine doesn't
have a cortex.
If I send a magnetic pulse into my iPhone or my computer, it's probably going to break
something.
So we don't have that.
So what we need ultimately, we need a theory of consciousness.
We can't just rely on our intuition.
Our intuition is, well, yeah, if somebody talks, they're conscious.
However, then there are all these children, babies don't talk.
But we believe that the babies also have conscious experiences.
And then there are all these patients I mentioned and they don't talk.
When you dream, you can't talk because you're paralyzed.
So what we ultimately need, since we can't just rely on our intuition, is a theory of
consciousness that tells us: what is it about a piece of matter, what is it about a piece
of highly excitable matter like the brain, or like a computer, that gives rise to conscious
experience?
None of us believe anymore in the old story that it's a soul.
That used to be the most common explanation, one that most people accepted, and a lot of
people today still believe, well, God endowed only us with a special thing that animals
don't have.
René Descartes famously said that a dog, if you hit it with your carriage, may yelp, may
cry, but it doesn't have this special thing.
It doesn't have the magic sauce.
It doesn't have res cogitans, the soul.
Now we believe that isn't the case anymore.
So what is the difference between brains and these guys, silicon?
And in particular, once their behavior matches.
So if you have a Siri or an Alexa, 20 years from now, that can talk just as well as any
possible human, what grounds do you have to say she's not conscious? In particular if
she claims she is, and of course she will.
Well, of course I'm conscious.
You ask her, how are you doing?
And she'll say, well, you know... they'll generate some answer; of course she'll behave
like a person.
Now there's several differences.
One is... so this relates to the hard problem: why is consciousness a hard problem?
It's because it's subjective, right?
Only I have it; only I have direct experience of my own consciousness.
I don't have experience of your consciousness.
Now, I assume, as a sort of Bayesian, with personal belief and probability theory and all
of that, that I can do an abduction to the best available explanation.
I deduce your brain is very similar to mine.
If I put you in a scanner, your brain is roughly going to behave the same way as mine does.
If I gave you this and asked you, how does it taste, you'd tell me things that I would
also say, more or less.
So I infer, based on all of that, that you're conscious.
Now, with a machine, I can't do that.
So there I really need a theory that tells me what is it about, about any system, this
or this, that makes it conscious.
We have such a theory.
Yes.
So the integrated information theory is... but let me first, maybe as an introduction for
people who are not familiar: you talk a lot about panpsychism.
Can you describe what physicalism versus dualism is? You mentioned the soul.
What is the history of that idea?
What is the idea of panpsychism, or really the debate out of which panpsychism can
emerge, of dualism versus physicalism?
Or do you not see panpsychism as fitting into that?
No... you can argue there's some... well, okay, so let's step back.
Panpsychism is a very ancient belief that's been around... I mean, Plato and Aristotle
talk about it, and modern philosophers talk about it.
Of course, in Buddhism the idea is very prevalent. I mean, there are different versions
of it.
One version says everything is ensouled: rocks and stones and dogs and people and forests
and iPhones, all of it ensouled; all matter is ensouled.
That's sort of one version.
Another version is that all biology, all creatures, small or large, from a single cell to
a giant sequoia tree, feel like something.
That one, I think, is somewhat more realistic.
So the different versions- What do you mean by feel like something?
Well, have- Have feeling, have some kind of experience.
It feels like something, it may well be possible that it feels like something to be a paramecium.
I think it's pretty likely it feels like something to be a bee or a mouse or a dog.
Sure.
So, okay.
So, you can say that's also... so panpsychism is very broad, right?
And some people, for example Bertrand Russell, tried to advocate this idea... it's called
Russellian monism... that panpsychism is really physics viewed from the inside.
So the idea is that physics is very good at describing relationships among objects, like
charges, or, like gravity, right, it describes the relationship between curvature and mass
distribution.
Okay.
That's the relationship among things.
Physics doesn't really describe the ultimate reality itself.
It's just relationship among, you know, quarks or all these other stuff.
Almost from like a third person observer.
Yes.
Yes.
Yes.
And consciousness is what physics feels like from the inside.
So my conscious experience is the way the physics of my brain, particularly my cortex,
feels from the inside.
And so if you take a paramecium... you've got to remember, you say, paramecium, well,
that's a pretty dumb creature.
It is.
But it has already a billion different molecules, probably, you know, 5,000 different proteins
assembled in a highly, highly complex system that no single person, no computer system
so far on this planet has ever managed to accurately simulate.
Its complexity vastly escapes us.
Yes.
And it may well be that that little thing feels like a tiny bit.
Now it doesn't have a voice in the head like me.
It doesn't have expectations.
You know, it doesn't have all that complex things, but it may well feel like something.
Yeah.
So this is really interesting.
Can we draw some lines and maybe try to understand the difference between life, intelligence,
and consciousness?
How do you see all of those?
If you have to define what is a living thing, what is a conscious thing, and what is an
intelligent thing, do those intermix for you or are they totally separate?
Okay.
So, A, that's a question to which we don't have a full answer.
A lot of the stuff we're talking about today is full of mysteries, and fascinating ones,
right?
Well, for example, you can go to Aristotle, who's probably the most important scientist
and philosopher who's ever lived, certainly in Western culture.
He had this idea, it's called hylomorphism, and it's quite popular these days, that there
are different forms of soul.
The soul is really the form of something.
He says all biological creatures have a vegetative soul; that's the life principle.
Today, we think we understand something more about that: it's biochemistry and nonlinear
thermodynamics.
Then he says animals and humans also have a sensitive soul, an appetitive soul.
They can see, they can smell, and they have drives: they want to reproduce, they want to
eat, et cetera.
And then only humans have what he called the rational soul, okay?
And that idea then made it into Christendom, and the rational soul is the one that
lives forever.
He was very unclear... he wasn't really... I mean, different readings of Aristotle give
different answers.
Did he believe that the rational soul was immortal or not?
I think he probably didn't.
But then, of course, that made it, through Plato, into Christianity, and then this soul
became immortal and became the connection to God.
Now, you asked me essentially what our modern conception of these three things is...
Aristotle would have called them different forms.
Life: we think we know something about it, at least life on this planet, right?
Although we don't understand how it originated, it's been difficult to rigorously pin
down.
You see this in modern definitions of death.
It's in fact, right now, there's a conference ongoing, again, that tries to define legally
and medically what is death.
It used to be very simple.
Death is you stop breathing, your heart stops beating, you're dead, right?
Totally uncontroversial.
If you're unsure, you wait another ten minutes; if the patient doesn't breathe, you know,
he's dead.
Well, now we have ventilators, we have heart pacemakers, so it's much more difficult to
define what death is.
Really, death is defined as the end of life, and life is defined as before death, okay?
So we don't really have very good definitions.
Intelligence: we don't have a rigorous definition either.
We know something about how to measure it; it's called IQ, or the g factor, right?
And we're beginning to build it, in a narrow sense, right?
Like AlphaGo, and Watson, and, you know, Google cars and Uber cars and all of that.
That's still narrow AI, and some people are thinking about artificial general intelligence.
And roughly as we said before, it's something to do with ability to learn and to adapt to
new environments.
But that, as I said, is also radically different from experience.
And it's very unclear, if you build a machine that has AGI... it's not at all a priori
clear that this machine will have consciousness.
It may or may not.
So let's ask it the other way.
Do you think if you were to try to build an artificial general intelligence system, do
you think figuring out how to build artificial consciousness would help you get to an AGI?
Or, put another way, do you think intelligence requires consciousness?
In humans, it goes hand in hand.
In humans, or I think in biology generally, consciousness and intelligence go hand in
hand, through evolution, because the brain evolved to be highly complex, and complexity,
via the theory, integrated information theory, is ultimately what is closely tied to
consciousness.
Ultimately, it's causal power upon itself.
And so in evolved systems, they go together.
In artificial systems, particularly in digital machines, they do not go together.
And if you ask me point blank, is Alexa 20.0 in the year 2040, when she can easily pass
every Turing test, is she conscious?
No.
Even if she claims she's conscious.
In fact, you could even do a more radical version of this thought experiment.
You can build a computer simulation of the human brain.
You know what Henry Markram, in the Blue Brain Project, or the Human Brain Project in
Switzerland, is trying to do.
Let's grant them all the success.
So in ten years, we have this perfect simulation of the human brain, where every neuron
is simulated.
And it has a larynx, and it has motor neurons, and it has a Broca's area.
And of course it'll talk, and it'll say, hi, I just woke up, I feel great.
Even that computer simulation, which can in principle map onto your brain, will not be
conscious.
Why?
Because it simulates.
There's a difference between the simulated and the real.
So it simulates the behavior associated with consciousness.
It might... it will, if it's done properly, have all the intelligence that the particular
person it's simulating has.
But simulating intelligence is not the same as having conscious experiences.
And I'll give you a really nice metaphor that engineers and physicists typically get.
I can write down Einstein's field equations, nine or ten equations that describe the link,
in general relativity, between curvature and mass.
I can do that.
I can run this on my laptop to predict that, for example, the black hole at the center of
our galaxy will be so massive that it will twist spacetime around it so that no light can
escape.
But funnily, have you ever wondered why this computer simulation doesn't suck me in?
It simulates gravity, but it doesn't have the causal power of gravity.
That's a huge difference.
So it's the difference between the real and the simulated, just like it doesn't get wet
inside a computer when the computer runs code that simulates a weather storm.
And so in order to have artificial consciousness, you have to give it the same causal power
as the human brain.
You have to build a so-called neuromorphic machine, one whose hardware is very similar
to the human brain, not a digital, clocked von Neumann computer.
So just to clarify, though: you think that consciousness is not required to create human-
level intelligence.
It seems to accompany it in the human brain, but for a machine it's not required.
So maybe just because this is AGI, let's dig in a little bit about what we mean by intelligence.
So one thing is the G factor, these kind of IQ tests of intelligence.
But I think, maybe another way to say it: in 2040, 2050, people will have a Siri that
is just really impressive.
Do you think people will say Siri is intelligent?
Yes.
Intelligence is this amorphous thing.
So to be intelligent, it seems like you have to have some kind of connections with other
human beings in a sense that you have to impress them with your intelligence.
And there feels, you have to somehow operate in this world full of humans.
And for that, there feels like there has to be something like consciousness.
So you think you can have just the world's best NLP system, natural language understanding
and generation, and that will get us to happily say, you know what, we've created an AGI?
I don't know about happy, but yes, I do believe we can get what we call high-level
functional intelligence, particularly sort of this g, this fluid intelligence that we
challenge, particularly at a place like MIT, right, in machines.
I see a priori no reason why not, and I see a lot of reason to believe it's going
to happen, you know, over the next 30 or 50 years.
So for beneficial AI, for creating an AI system that, you mentioned ethics, is
exceptionally intelligent, but also, you know, aligns its values with our values as
humanity: do you think it then needs consciousness?
Yes, I think that is a very good argument: if we're concerned about AI and the threat of
AI, like Nick Bostrom's existential threat, I think there's a case for having an
intelligence that has empathy, right?
Why do we find abusing a dog, why do most of us find abusing any animal, abhorrent, right?
We find it abhorrent because we have this thing called empathy, which, if you look at the
Greek, really means feeling with: em-pathos, empathy, I have feeling with you.
I see somebody else suffer; it isn't even my conspecific, it's not a person, it's not a
lover, it's not my wife or my kids, it's a dog, but most of us, not all of us, will
naturally feel empathic.
And so it may well be in the long-term interest of the survival of Homo sapiens sapiens
that, if we do build AGI and it really becomes very powerful, it has an empathic response
and doesn't just exterminate humanity.
So as part of the full conscious experience, to create a consciousness, artificial or
like our human consciousness: do you think fear, and maybe we'll get into Nietzsche and
so on later, but do you think fear and suffering are essential to have consciousness?
Do you have to have the full range of experience to have a system that has experience?
Or can you have a system that only has very particular kinds of very positive experiences?
Look, in principle, people have done this in a rat, where you implant an electrode in the hypothalamus, the pleasure center of the rat, and the rat stimulates itself beyond anything else. It doesn't care about food or sex or drink anymore, it just stimulates itself, because it's such a pleasurable feeling. I guess it's like an orgasm you have all day long.
And so a priori, I see no reason why you need a great variety. Now, clearly, to survive, that wouldn't work. But if I engineered it artificially, I don't think you need a great variety of conscious experience. You could have just pleasure or just fear. It might be a terrible existence, but I think that's possible, at least on conceptual, logical grounds.
For any real creature, or one you artificially engineer, you want to give it fear, the fear of extinction that we all have. And you also want to give it positive, appetitive states, states you want to encourage the machine toward because they give the machine positive feedback.
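The self-stimulating rat, and the idea of rewarding states shaping a machine's behavior, can be caricatured with a toy reinforcement-learning sketch. This is a hypothetical illustration, not anything discussed in the conversation: an epsilon-greedy bandit agent facing actions with fixed rewards collapses onto the single most "pleasurable" action, its version of the hypothalamic electrode.

```python
import random

def epsilon_greedy(rewards, steps=1000, eps=0.1, seed=0):
    """Toy agent: each action pays a fixed reward; the agent keeps
    running-average value estimates and mostly picks the best one."""
    rng = random.Random(seed)
    n = len(rewards)
    values = [0.0] * n          # estimated value per action
    counts = [0] * n            # how often each action was taken
    for _ in range(steps):
        if rng.random() < eps:  # occasional exploration
            a = rng.randrange(n)
        else:                   # otherwise exploit the current best estimate
            a = max(range(n), key=lambda i: values[i])
        counts[a] += 1
        values[a] += (rewards[a] - values[a]) / counts[a]  # incremental mean
    return counts

# hypothetical actions: forage (reward 1), drink (reward 1), self-stimulate (reward 10)
counts = epsilon_greedy([1.0, 1.0, 10.0])
print(counts)  # the agent ends up almost exclusively on the high-reward action
```

With one action paying far more than the rest, the visit counts concentrate on it, a crude analogue of wireheading; real agents and real animals, of course, face far richer trade-offs than this sketch.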
So you mentioned panpsychism, to jump back a little bit, you know, everything having some kind of mental property, everything having some elements of consciousness. How do you go from there to something like human consciousness? Is there something special about human consciousness?
So it's not everything: the form of panpsychism I think about doesn't ascribe consciousness to just anything, like this spoon or my liver. However, the theory, the integrated information theory, does say that systems, even ones that look relatively simple from the outside, if they have this internal causal power, then it does feel like something to be them.
The theory doesn't say anything about what's special about humans. Biologically, we know the one thing that's special about humans is that we speak, and we have an overblown sense of our own importance. We believe we're exceptional, that we're just God's gift to the universe. But behaviorally, the main things we have: we can plan over the long term, and we have language, and that gives us an enormous amount of power, and that's why we are the dominant species on the planet.
So you mentioned God. You grew up in a devout Roman Catholic family. With consciousness, you're exploring some really deeply fundamental human things that religion also touches on. So where does religion fit into your thinking about consciousness? You've grown throughout your life and changed your views on religion, as far as I understand.
Yeah, I'm not a Roman Catholic anymore. I don't believe there's this God, the God I was educated to believe in, who sits somewhere and with whom, in the fullness of time, I'll be united in some sort of everlasting bliss. I just don't see any evidence for that.
Look, the world, the night, is large and full of wonders, right? There are many things that I don't understand, many things that we as a culture don't understand. Look, we don't even understand more than 4% of the universe, right? Dark matter, dark energy, we have no idea what they are. Maybe it's lost socks. What do I know?
So all I can tell you is that my current religious or spiritual sentiment is much closer to some form of Buddhism, without the reincarnation, unfortunately. There's no evidence for reincarnation.
So can you describe the way Buddhism sees the world a little bit?
Well, so, you know, I spent several meetings with the Dalai Lama, and what always impressed me about him, unlike, for example, the Pope or some cardinal, is that he always emphasized minimizing the suffering of all creatures. So from the very beginning, they look at suffering in all creatures, not just in people, but in everybody, universally, and of course by degrees, right: an animal in general is less capable of suffering than a normally developed human.
And they think consciousness pervades this universe. And they have these techniques, you can think of them like mindfulness and so on, in meditation, that try to access what they claim is this more fundamental aspect of reality. I'm not sure it's more fundamental. The way I think about it, there's the physical, and then there's this inside view, consciousness, and those are the two aspects.
That's the only thing I have access to in my life. And you've got to remember, my conscious experience and your conscious experience come prior to anything you know about physics, prior to knowledge about the universe and atoms and superstrings and molecules and all of that. The only thing you are directly acquainted with is this world that's populated with things and images and sounds in your head and touches and all of that.
I actually have a question.
So it sounds like you kind of have a rich life.
You talk about rock climbing and it seems like you really love literature and consciousness
is all about experiencing things.
So do you think that has helped your research on this topic?
Yes, particularly if you think about the various states. So for example, when you do rock climbing, or now I do rowing, crew rowing, and I bike every day, you can get into this thing called the zone. And I've always wondered about it, particularly with respect to consciousness, because it's a strangely addictive state. Once people have had it once, they want to keep going back to it, and you wonder why, what is so addictive about it?
And I think it's the experience of something close to pure experience, because in this zone you're not conscious of the inner voice anymore. There's always this inner voice nagging you, right? You have to do this, you have to do that, you have to pay your taxes, you had this fight with your ex, and all of those things are always there. And when you're in the zone, all of that is gone, and you're just in this wonderful state where you're fully out in the world: you're climbing, or you're rowing, or biking, or playing soccer, or whatever you're doing. And consciousness is all action, or in the case of pure experience, no action at all, but in both cases you touch some basic part of conscious existence that is so basic and so deeply satisfying. I think you touch the root of being; that's really what you're touching there. You're getting close to the root of being.
And that's very different from intelligence.
So what do you think about the simulation hypothesis, simulation theory, the idea that
we all live in a computer simulation?
Have you given it much thought?
It's the rapture for nerds.
I think it's as likely as the hypothesis that engaged hundreds of scholars for many centuries: are we all just existing in the mind of God, right? And this is just a modern version of it. It's equally plausible. People love talking about these sorts of things. I know there are books written about the simulation hypothesis. If that's what people want to do, that's fine. It seems rather esoteric. It's never testable.
But it's not useful for you to think in those terms?
So maybe connecting to the question of free will, which you've talked about. I vaguely remember you saying that the idea that there's no free will makes you very uncomfortable. So what do you think about free will? From a physics perspective, from a consciousness perspective, where does it all fit?
Okay.
So from the physics perspective, leaving aside quantum mechanics, we believe we live in a fully deterministic world, right? But then comes, of course, quantum mechanics. So now we know that certain things are in principle not predictable, which, as you said, I prefer, because the idea that everything was fixed at the initial condition of the universe, and we're all just acting out that initial condition, that doesn't...
It's not a romantic notion.
Certainly not.
Right.
Now, when it comes to consciousness, I think we do have a certain freedom. We are much more constrained, by physics of course, and by our past, by our own conscious desires, by what our parents told us and what our environment tells us. We all know that, right? There are hundreds of experiments that show how we can be influenced.
But in the final analysis, when you make a life decision, and I'm talking really about critical decisions, where you really think: should I marry, should I go to this school or that school, should I take this job or that job, should I cheat on my taxes or not? These are things where you really deliberate. And I think under those conditions, you are as free as you can be. When you bring your entire being, your entire conscious being, to that question and try to analyze it under all the various conditions, and then you make a decision, you are as free as you can ever be.
That is, I think, what free will is. It's not a will that's totally free to do anything it wants. That's not possible.
Right.
So as Jack mentioned, you actually write a blog about books you've read, amazing books, from, I'm Russian, Bulgakov, to Neil Gaiman, Carl Sagan, Murakami.
So what is a book that early in your life transformed the way you saw the world, something
that changed your life?
Nietzsche, I guess, did: Thus Spoke Zarathustra, because he talks about some of these problems. You know, he was one of the first discoverers of the unconscious, a little bit before Freud, when it was in the air. He makes all these claims that people, under the guise or under the mask of charity, are actually very uncharitable. So he is really the first discoverer of the great land of the unconscious. And that really struck me.
And what do you think about the unconscious? What do you think about Freud, about these ideas? Like dark matter in the universe, what's over there in the unconscious?
A lot, I mean, much more than we think; this is what the last 100 years of research has shown. So I think he was a genius, misguided towards the end, but he started out as a neuroscientist, right? He did the studies on the lamprey; he contributed himself to the neuron hypothesis, the idea that there are discrete units that we now call nerve cells. And then he wrote about the unconscious.
And I think it's true. There's lots of stuff happening. You feel this particularly when you're in a relationship and it breaks asunder, right? You can have love and hate and lust and anger all mixed in. And when you try to analyze yourself, why am I so upset?, it's very, very difficult to penetrate to those basements, those caverns in your mind, because the prying eye of consciousness doesn't have access to them. But there they may be, in the amygdala or lots of other places, making you upset or angry or sad or depressed. And it's very difficult to actually uncover the reason. You can go to a shrink, you can talk with your friends endlessly, and finally you construct a story of why this happened, why you love her or don't love her or whatever, but you don't really know whether that's actually what happened, because you simply don't have access to those parts of the brain. And they're very powerful.
Do you think that's a feature or a bug of our brain?
The fact that we have this deep, difficult to dive into subconscious?
I think it's a feature, because look, like any other brain or nervous system or computer, we are severely bandwidth limited. If everything I do, every emotion I feel, every eye movement I make, had to be under the control of consciousness, I wouldn't be here.
Right.
So what you do early on: you have to be conscious when you learn things like typing or riding a bike. But then you train up, I think it involves the basal ganglia and the striatum, you train up different parts of your brain. And then once you do it automatically, like typing, you can show that you do it much faster, without even thinking about it, because you've got these highly specialized, what Francis Crick and I called zombie agents, that take care of it while your consciousness can worry about the abstract sense of the text you want to write.
And I think that's true for many, many things.
But for things like all the fights you had with your ex-girlfriend, things you would think are not useful, to still linger somewhere in the subconscious, that seems like a bug, that they would stay there. You'd think it would be better if you could analyze it and then get it out of the system, or just forget it ever happened. That seems a very buggy kind of...
Well, yeah.
In general, and that's probably functional, we don't have that ability, unless it's extreme. There are cases of clinical dissociation, right? When people are heavily abused, they sometimes completely repress the memory. But that doesn't happen in normal people. We don't have the ability to remove traumatic memories, and of course we suffer from that. On the other hand, if you had the ability to constantly wipe your memory, you'd probably do it to an extent that isn't useful to you. So yeah, it's a good question. It's a balance.
So on the books, as Jack mentioned, correct me if I'm wrong, but broadly speaking, in academia and the different scientific disciplines, certainly in engineering, reading literature seems to be a rare pursuit. Perhaps I'm wrong on this, but in my experience most people read much more technical texts and do not escape into, or seek truth in, literature. It seems like you do. So what do you think is the value? What does literature add to the pursuit of scientific truth? Do you think it's useful for... giving you access to a much wider array of human experiences? How valuable do you think it is?
Well, if you want to understand human nature, and nature in general, then I think you have to understand a wide variety of experiences, not just sitting in a lab, staring at a screen, having a face flashed at you for 100 milliseconds and pushing a button. That's what I used to do; that's what most psychologists do. There's nothing wrong with that, but you need to consider lots of other strange states. And literature is a shortcut to this.
Well, yeah, that's what literature is all about: all sorts of interesting experiences that people have, the contingency of it, the fact that women experience the world differently, that black people experience the world differently. The one way to experience that is to read all this different literature and try to find out. You see, everything is so relative. You read a book from 300 years ago, and they saw certain problems very, very differently than we do today. We today, like any culture, think we know it all. That's common to every culture; every culture believes, hey, they know it all. And then you realize, well, there are other ways of viewing the universe, and some of them may have lots of things in their favor.
So this is a question I wanted to ask about timescale, or scale in general. When you, with IIT or in general, try to think about consciousness, we kind of naturally think on human timescales, and about entities that are sized close to humans. Do you think of things that are much larger or much smaller as containing consciousness? And do you think of things that take, you know, ages, eons, to operate in their conscious cause-effect?
Cause-effect.
That's a very good question.
So yeah, I think a lot about small creatures, because experimentally, you know, a lot of people work on flies and bees, right? Most people just think they're automata, they're just bugs, for heaven's sake. But if you look at their behavior: bees can recognize individual humans, and they have this very complicated way to communicate. If you've ever been involved in it, or you remember when your parents bought a house, what an agonizing decision that is. And bees have to do that once a year, right, when they swarm in the spring. And then they have this very elaborate process: they send out scouts, the scouts go to the individual sites and come back, and they have this dance, literally, where they dance for several days to try to recruit other bees. It's a very complicated decision-making process. When they finally make a decision, the scouts warm up the entire swarm, and the entire swarm then goes to one location, not fifty locations, the one location that the scouts have agreed upon among themselves.
That's awesome.
If you look at the circuit complexity, it's ten times denser than anything we have in our brain. Now, they only have a million neurons, but the neurons are amazingly complex. Complex behavior, very complicated circuitry, so there's no question they experience something. Their life is very different; they're tiny, and they only live, well, workers live maybe for two months.
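The scout-and-recruitment consensus described above can be sketched as a toy simulation. To be clear, this is a made-up caricature, not a faithful honeybee model: the recruitment rate, quorum fraction, and site qualities are all invented parameters, and it only shows how quality-weighted recruitment plus a quorum rule drives a swarm to a single site.

```python
import random
from collections import Counter

def swarm_decision(qualities, n_scouts=100, rounds=500, quorum=0.8, seed=1):
    """Toy nest-site choice: committed scouts 'dance' for their site with
    strength proportional to (number of dancers) x (site quality), and
    scouts occasionally re-commit in proportion to that strength."""
    rng = random.Random(seed)
    commitment = [rng.randrange(len(qualities)) for _ in range(n_scouts)]
    for _ in range(rounds):
        counts = Counter(commitment)
        weights = [counts[s] * qualities[s] for s in range(len(qualities))]
        total = sum(weights)
        for i in range(n_scouts):
            if rng.random() < 0.2:       # this scout re-samples the dances
                r = rng.uniform(0, total)
                acc = 0.0
                for s, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        commitment[i] = s
                        break
        site, n = Counter(commitment).most_common(1)[0]
        if n >= quorum * n_scouts:       # quorum reached: the swarm lifts off
            return site
    return None

# three hypothetical candidate sites; the middle one is much better
print(swarm_decision([1.0, 5.0, 1.0]))  # index of the winning site
```

The positive feedback (more dancers for a good site recruit yet more dancers) is what collapses the swarm onto one choice, loosely the dynamic the conversation describes.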
So I think, and IIT tells you this, in principle the substrate of consciousness is the substrate that maximizes cause-effect power over all possible spatiotemporal grains.
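The phrase "cause-effect power" can be given a crude numerical flavor. The sketch below is emphatically not IIT's Φ, which involves cause-effect repertoires and a minimum-information partition; it is just a hypothetical whole-versus-parts contrast: the mutual information between past and future states of a tiny deterministic network, compared with what each node can predict about itself in isolation.

```python
from itertools import product
from math import log2
from collections import Counter

def step(state):
    """Tiny deterministic network: node A copies B; node B computes A XOR B."""
    a, b = state
    return (b, a ^ b)

def mutual_information(pairs):
    """MI between the two coordinates of uniformly weighted (x, y) samples."""
    n = len(pairs)
    px, py, pxy = Counter(), Counter(), Counter(pairs)
    for x, y in pairs:
        px[x] += 1
        py[y] += 1
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

states = list(product((0, 1), repeat=2))
whole = mutual_information([(s, step(s)) for s in states])         # whole system
part_a = mutual_information([(s[0], step(s)[0]) for s in states])  # A alone
part_b = mutual_information([(s[1], step(s)[1]) for s in states])  # B alone
print(whole, part_a, part_b)  # whole carries 2 bits; each isolated part carries 0
```

The whole predicts its own future perfectly (2 bits) while each node in isolation predicts nothing, one informal sense in which the system's causal power lives above the level of its parts; actual Φ is computed very differently.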
So when I think about, for example, do you know the science fiction story The Black Cloud? It's a classic by Fred Hoyle, the astronomer. He has this cloud intervening between the earth and the sun, leading to some sort of global cooling; this was written in the 50s. It turns out, using a radio dish, they communicate with it; it's actually an intelligent entity, and they convince it to move away. So here you have a radically different entity, and in principle IIT says, well, you can measure its integrated information, in principle at least, and yes, if the maximum of that occurs at a timescale of months rather than a fraction of a second, then it would experience life where each moment is a month, rather than a fraction of a second as in the human case.
And so there may be forms of consciousness that we simply don't recognize for what they are, because they are so radically different from anything you and I are used to. Again, that's why it's good to read, or to watch, science fiction, where we're forced to think about this.
Like this: do you know Stanisław Lem, the Polish science fiction writer who wrote Solaris, which was turned into a Hollywood movie? His most interesting novel, written in the 60s, he had a very strong engineering background, is called The Invincible, where a human expedition comes to a planet and finds everything destroyed: the humans got killed, machines took over, and then there was this machine evolution, a Darwinian evolution, which he describes very vividly. And finally, the dominant machine organisms that survived are gigantic clouds of little hexagonal universal cellular automata. This was written in the 60s.
Typically they all lie on the ground, individually, by themselves, but in times of crisis they can communicate and assemble into gigantic nets, into clouds of trillions of these particles, and then they become hyper-intelligent and can beat anything the humans throw at them. It's a very beautiful and compelling story: finally the humans leave the planet; they are simply unable to understand and comprehend this creature. They say, well, either we nuke the entire planet and destroy it, or we just have to leave, because fundamentally it's alien, so alien from us and our ideas that we cannot communicate with it.
Yeah, actually, in a conversation about cellular automata, Stephen Wolfram brought up that there could already be these artificial general intelligences, super smart or maybe conscious beings, in cellular automata; we just don't know how to talk to them. So it's a problem of the language of communication: we don't know what to do with it.
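The kind of system Lem and Wolfram are pointing at is easy to instantiate, though only in toy form. Here is an elementary one-dimensional cellular automaton, rule 110, which is known to be Turing-complete; this is not Lem's hexagonal automata, just the simplest member of the family:

```python
def ca_step(cells, rule=110):
    """One update of an elementary cellular automaton with wrap-around edges.
    Each cell's next state is the rule-table bit for its 3-cell neighborhood."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1                      # single live cell in the middle
for _ in range(12):
    print(''.join('#' if c else '.' for c in row))
    row = ca_step(row)
```

Simple local rules, complex global behavior; whether such a system could also be a subject of experience is exactly the question the conversation circles around.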
So one sort of view is that consciousness is only something you can measure; it's not conscious if you can't measure it.
You're making an ontological and an epistemic statement there, and those are two different things. One is like saying there are multiverses: that might be true, but I can't communicate with them, I can't have any knowledge of them. That's an epistemic argument. So those are two different things, and it may well be possible.
Look, another case is happening right now: people are building these mini-organoids. Do you know about this? You can take skin cells from, say, under your arm, put them in a dish, add four transcription factors to turn them into stem cells, and then you can induce them to grow into large, well, large, a few millimeters, collections of half a million neurons that look like nerve cells in a dish: mini-organoids. At Harvard, at Stanford, everywhere, they're building them. It may well be possible that they're beginning to feel like something, but we can't really communicate with them right now. So people are beginning to think about the ethics of this. So yes, he may be perfectly right. But whether they're conscious or not is one question, and how I would know is a totally separate question. Those are two different things.
If you could give advice to a young researcher, sort of dreaming of understanding or creating
human level intelligence or consciousness, what would you say?
Just follow your dreams, read widely.
Read widely.
No, I mean, I suppose what discipline, what is the pursuit that they should take on?
Is it neuroscience, is it computational cognitive science, is it philosophy, is it computer
science, robotics?
No, in a sense, okay: the only known systems that have a high level of intelligence are Homo sapiens. So if you want to build it, it's probably good to continue to study closely what humans do. So cognitive neuroscience, somewhere between cognitive neuroscience on the one hand, some philosophy of mind, and then AI, computer science. Look at the original ideas: neural networks all came from neuroscience, right? Whether it's reinforcement learning, or Minsky building the SNARC, or the early Hubel and Wiesel experiments at Harvard that then gave rise to neural networks and then multilayer networks.
So it may well be possible, in fact some people argue this, that to make the next big step in AI, once we realize the limits of deep convolutional networks, they can do certain things, but they can't really understand. I can show you a single image of a pickpocket stealing a wallet from a purse, and you immediately know that's a pickpocket. A computer system would just say, well, it's a man, it's a woman, it's a purse, right? Unless you train the machine by showing it a hundred thousand pickpockets. So it doesn't have the easy understanding that you have. So some people make the argument that in order to take the next step, if you really want to build machines that understand the way you and I do, we have to go to psychology: we need to understand how we do it and how our brains enable us to do it.
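The pickpocket example is about one-shot recognition: generalizing from a single labeled example. A minimal caricature, with entirely hypothetical data and labels and no claim that this is how anyone proposes to solve it, is nearest-neighbor matching in some feature space, here just normalized raw pixels. The part deep nets struggle with is producing an embedding in which one example of a relational concept like "stealing" is enough.

```python
import math

def embed(pixels):
    # stand-in for a learned feature extractor; here just a normalized raw vector
    norm = math.sqrt(sum(p * p for p in pixels)) or 1.0
    return [p / norm for p in pixels]

def one_shot_classify(query, exemplars):
    """Nearest neighbor over a single labeled example per class (cosine similarity)."""
    q = embed(query)
    best_label, best_sim = None, -2.0
    for label, example in exemplars.items():
        e = embed(example)
        sim = sum(a * b for a, b in zip(q, e))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

# one 3x3 "image" per class, flattened row by row (made-up toy data)
exemplars = {
    "vertical_bar":   [0, 1, 0,  0, 1, 0,  0, 1, 0],
    "horizontal_bar": [0, 0, 0,  1, 1, 1,  0, 0, 0],
}
print(one_shot_classify([0, 1, 0,  0, 1, 0,  0, 1, 1], exemplars))  # prints "vertical_bar"
```

With a good embedding, one example per class goes a long way; with raw pixels it only works for trivially separable patterns like these, which is the gap the conversation is pointing at.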
And so being on that cusp is also so exciting: trying to understand our own nature better, and then taking some of those insights and building them into machines. So I think the most exciting place is somewhere at the interface between cognitive science, neuroscience, AI, computer science, and philosophy of mind.
Beautiful.
Yeah, I'd say that from the machine learning, computer science, computer vision perspective, many researchers kind of ignore the way the human brain works; they ignore even psychology, or literature, or studying the brain. I hope we bring that in more and more; Josh Tenenbaum talks about that.
So you've worked on some amazing stuff throughout your life.
What's the thing that you're really excited about?
What's the mystery that you would love to uncover in the near term, out of all the mysteries that we're surrounded by?
Well, there's a structure called the claustrum. This structure is underneath our cortex; it's yea big. You have one on the left and one on the right, underneath the insula, and it's very thin, like one millimeter. It's embedded in wiring, in white matter, so it's very difficult to image. And it has connections to every cortical region. And Francis Crick, in the last paper he ever wrote, he dictated corrections on this paper in the hospital the day he died, we hypothesized, because it has this unique anatomy, it gets input from every cortical area and projects back to every cortical area, that the function of this structure is similar, it's just a metaphor, to the role of a conductor in a symphony orchestra.
You have all the different cortical players: some that do motion, some that do theory of mind, some that infer social interactions, and color and hearing, all the different modules in cortex. But of course, what consciousness does is put it all together into one package, right? The binding problem, all of that. And this may really be its function, because it has relatively few neurons compared to cortex, but it receives input from all of them and projects back to all of them.
And so we are testing that right now. We've got these beautiful neuron reconstructions in the mouse of crown-of-thorns neurons, neurons in the claustrum that have the most widespread connections of any neuron I've ever seen. You have individual neurons that sit in the tiny claustrum, but a single one of these neurons has a huge axonal tree that covers both ipsi- and contralateral cortex. And we're trying, using fancy tools like optogenetics, to turn those neurons on or off and study what happens in the mouse.
So this thing is perhaps where the parts become the whole, the individual.
It's one of the structures, and that's a very good way of putting it, where the individual parts turn into the whole of the conscious experience.
Well with that, thank you very much for being here today.
Thank you very much.
Thank you back at MIT.
Thank you very much.