The following is a conversation with Daniel Kahneman,
winner of the Nobel Prize in Economics for his integration of economic science with the
psychology of human behavior, judgment, and decision-making. He's the author of the popular
book Thinking, Fast and Slow, which summarizes in an accessible way his research of several decades,
often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness.
The central thesis of this work is the dichotomy between two modes of thought. What he calls system
one is fast, instinctive, and emotional. System two is slower, more deliberative, and more logical.
The book delineates cognitive biases associated with each of these two types of thinking.
His study of the human mind and its peculiar and fascinating limitations
is both instructive and inspiring for those of us seeking to engineer intelligent systems.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon,
or simply connect with me on Twitter, at Lex Fridman, spelled F-R-I-D-M-A-N.
I recently started doing ads at the end of the introduction. I'll do one or two minutes
after introducing the episode and never any ads in the middle that can break the flow of the
conversation. I hope that works for you and doesn't hurt the listening experience.
This show is presented by Cash App, the number one finance app in the App Store. I personally
use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin
in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1
worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing,
a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support
one of my favorite organizations called FIRST, best known for their FIRST Robotics and Lego
competitions. They educate and inspire hundreds of thousands of students in over 110 countries
and have a perfect rating on Charity Navigator, which means the donated money is used to maximum
effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST,
you'll get $10 and Cash App will also donate $10 to FIRST, which again is an organization
that I've personally seen inspire girls and boys to dream of engineering a better world.
And now, here's my conversation with Daniel Kahneman. You tell the story of an SS soldier
early in the war, World War II, in Nazi-occupied France, in Paris, where you grew up. He picked you
up and hugged you and showed you a picture of a boy, maybe not realizing that you were Jewish.
Not maybe, certainly not. So I told you I'm from the Soviet Union, which was significantly impacted
by the war as well, and I'm Jewish as well. What do you think World War II taught us about human
psychology broadly? Well, I think the only big surprise is the extermination policy, the genocide,
by the German people. That, when you look back on it, I think that's a major surprise.
It's a surprise because... It's a surprise that they could do it. It's a surprise that
enough people willingly participated in that. This is a surprise. Now it's no longer a surprise,
but it's changed many people's views, I think, about human beings. Certainly for me,
the Eichmann trial teaches you something, because it's very clear that if it could happen in Germany,
it could happen anywhere. It's not that the Germans were special. This could happen anywhere.
So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty?
I don't think in those terms. I think that what is certainly possible is you can dehumanize people
so that you treat them not as people anymore, but as animals, and the same way that you can
slaughter animals without feeling much of anything, it can be the same. And the combination of
dehumanizing the other side and having uncontrolled power over other people,
I think that doesn't bring out the most generous aspect of human nature.
So that Nazi soldier, he was a good man, and he was perfectly capable of killing a lot of people,
and I'm sure he did. But what did the Jewish people mean to Nazis? So the dismissal of
Jewish people as worthy of... Again, it is surprising that it was so extreme, but it's not one thing
in human nature. I don't want to call it evil, but the distinction between the in-group and the
out-group, that is very basic. So that's built in. The loyalty and affection towards in-group
and the willingness to dehumanize the out-group, that is in human nature. That's what I think
probably didn't need the Holocaust to teach us that, but the Holocaust is a very sharp lesson
of what can happen to people and what people can do. So the effect of the in-group and the out-group?
You know, it's clear that those were people you could shoot. They were not human.
There was no empathy or very, very little empathy left. So occasionally, there might have been.
And very quickly, by the way, the empathy disappeared if there was any initially. And the fact
that everybody around you was doing it, the group doing it, everybody
shooting Jews, I think that makes it permissible. Now, how much, you know, whether it could happen
in every culture or whether the Germans were just particularly efficient and disciplined,
so they could get away with it? That is a question. It's an interesting question.
Are these artifacts of history or is it human nature? I think that's really human nature.
You know, you put some people in a position of power relative to other people, and then they
become different as humans. But in general, in war, outside of concentration camps
in World War II, it seems that war brings out darker sides of human nature, but also the
beautiful things about human nature. Well, you know, what it brings out is the loyalty
among soldiers. It brings out the bonding; male bonding, I think, is a very real thing that
happens. And there is a certain thrill to friendship, and certainly a thrill
to friendship under risk, to shared risk. And so people have very profound emotions
up to the point where it gets so traumatic that little is left. So, let's talk about psychology
a little bit. In your book, Thinking, Fast and Slow, you describe two modes of thought: system one,
the fast, instinctive and emotional one, and system two, the slower, deliberate, logical one.
At the risk of asking Darwin to discuss the theory of evolution, can you describe
the distinguishing characteristics of the two systems for people who have not read your book?
Well, I mean, the word system is a bit misleading. But at the same time that it's misleading,
it's also very useful. What I call system one, it's easier to think of it as a family of
activities. And primarily the way I describe it is there are different ways for ideas to come to
mind. Some ideas come to mind automatically; a standard example is two plus
two, and then something happens to you. And in other cases, you've got to do something,
you've got to work in order to produce the idea. The example I always give is the same pair of
numbers, 27 times 14, I think. You have to perform some algorithm in your head, some steps.
And it takes time. It's very different: nothing comes to mind, except something comes to mind,
which is the algorithm that you've got to perform. And then it's work, and it engages
short-term memory and executive function, and it makes you incapable of doing other things
at the same time. So the main characteristic of system two is that there is mental effort involved,
and there is a limited capacity for mental effort, whereas system one is effortless,
essentially. That's the major distinction. So you talk about, you know, it's really convenient
to talk about two systems, but you also mentioned just now, and in general, that there are no
two distinct systems in the brain, from a neurobiological or even from a psychology perspective.
But why does it seem, from the experiments you've conducted, that there do seem to be,
kind of emergent, two modes of thinking? So at some point, these kinds of systems came into
a brain architecture; maybe mammals share it. Or do you not think of it at all in those
terms, that it's all a mush and these two things just emerge? You know, evolutionary theorizing
about this is cheap and easy. So the way I think about it is that it's very clear that animals
have a perceptual system, and that includes an ability to understand the world, at least to
the extent that they can predict, they can't explain anything, but they can anticipate what's
going to happen. And that's the key form of understanding the world. And my crude idea is
that what I call system two, well, system two grew out of this. And, you know, there is
language, and there is the capacity of manipulating ideas, and the capacity of imagining futures and
of imagining counterfactual things that haven't happened, and to do conditional thinking.
And there are really a lot of abilities that without language, and without the very large brain
that we have compared to others, would be impossible. Now, system one is more like what the
animals are, but system one also can talk. I mean, it has language, it understands language,
indeed, it speaks for us. I mean, you know, I'm not choosing every word as a deliberate process.
The words, I have some idea, and then the words come out, and that's automatic and effortless.
And many of the experiments you've done show that, listen, system one exists,
and it does speak for us, and we should be careful about the voice it provides.
Well, I mean, you know, we have to trust it because of the speed at which it acts.
If we depended on system two for survival, we wouldn't survive very long,
because it's very slow. Yeah, crossing the street. Crossing the street. I mean, many things depend
on their being automatic. One very important aspect of system one is that it's not instinctive.
You use the word instinctive, but it contains skills that clearly have been learned. So
skilled behavior, like driving a car or speaking, in fact has to be
learned. And so, you know, you don't come equipped with driving; you have to learn
how to drive. And you have to go through a period where driving is not automatic before
it becomes automatic. So yeah, I mean, this is where you talk about heuristics
and biases. To make it automatic, you create a pattern, and then system one essentially
matches a new experience against previously seen patterns. And when the match is not a good one,
that's when all the cognitive mess happens. But most of the time it works.
And so, pretty much most of the time, the anticipation of what's going to happen next is
correct, and most of the time, the plan about what you have to do is correct. And so most of the
time, everything works just fine. What's interesting, actually, is that in some sense, system one is
much better at what it does than system two is at what it does. That is, there is that quality
of effortlessly solving enormously complicated problems, which clearly exists. So that the chess
player, a very good chess player: all the moves that come to their mind are strong moves. The
selection of strong moves happens unconsciously and automatically and very, very fast, and
all that is in system one. And system two verifies. So along this line of thinking,
really what we are are machines that construct pretty effective system one.
You could think of it that way. So we're now talking about humans. But if we think about building
artificial intelligence systems, robots, do you think all the features and bugs that you have
highlighted in human beings are useful for constructing AI systems? So both systems are
useful for perhaps instilling in robots? What is happening these days is that
what is happening in deep learning is more like a system one product than a system
two product. I mean, deep learning matches patterns and anticipates what's going to happen, so it's
highly predictive. What deep learning doesn't have, and many people think that this is
critical, is the ability to reason. There is no system two there.
But I think very importantly, it doesn't have any causality or any way to represent
meaning and to represent real interaction. So until that is solved, what can be accomplished is
marvelous and very exciting, but limited. That's actually a really nice way to think of
current advances in machine learning: as essentially system one advances.
So how far can we get with just system one, if we think of deep learning and artificial
systems? I mean, you know, it's very clear that DeepMind has already gone way, way beyond what
people thought was possible. I think, I think the thing that has impressed me most about the
developments in AI is the speed. It's that things, at least in the context of deep learning, and maybe
this is about to slow down, but things moved a lot faster than anticipated. The transition from
solving chess to solving Go; I mean, it's bewildering how quickly it went.
The move from AlphaGo to AlphaZero was sort of bewildering, the speed at which they accomplished
that. Now clearly, there are, so there are many problems that you can solve that way,
but there are some problems for which you need something else.
Something like reasoning. Well, reasoning, and also, you know, one of the real mysteries:
the psychologist Gary Marcus, who is also a critic of AI, what he points out, and I think he
has a point, is that humans learn quickly. Children don't need a million examples; they
need two or three examples. So clearly, there is a fundamental difference. And what enables
a machine to learn quickly, what you have to build into the machine, because it's
clear that you have to build some expectations or something into the machine to make it ready to learn
quickly, that at the moment seems to be unsolved. I'm pretty sure that DeepMind is working
on it, but if they have solved it, I haven't heard yet. They're actually trying, them and OpenAI,
to start to use neural networks to reason, to assemble knowledge. Of
course, causality, temporal causality, is out of reach for most everybody. You mentioned that the
benefit of system one is essentially that it's fast, it allows us to function in the world. Fast and
skilled, you know, it's skill. And it has a model of the world. You know, in a sense, there
was an earlier phase of AI that attempted to model reasoning, and they were moderately successful,
but, you know, reasoning by itself doesn't get you much. Deep learning has been much more successful
in terms of, you know, what they can do. But now it's an interesting question, whether it's
approaching its limits. What do you think? I think absolutely. So I just talked to
Yann LeCun; you know him. He thinks that we're not going to hit
the limits with neural networks, that ultimately this kind of system one pattern matching will start
to look like system two without significant transformation of the architecture.
So I'm more with the majority of the people who think that, yes, neural networks will
hit a limit in their capability. On the one hand, I have heard him tell them,
essentially, that what they have accomplished is not a big deal, that they have
just touched on it, that basically, you know, they can't do unsupervised learning in an effective way.
But you're telling me that he thinks that within the current architecture,
you can do causality and reasoning? So he's very much a pragmatist, in a sense,
saying that we're very far away, that there's still... yeah, I think the idea that he
expresses is that we can only see one or two mountain peaks ahead, and there might be either a few more
after or thousands more after. Yeah, so that kind of idea. I heard that metaphor.
Right. But nevertheless, he doesn't see the final answer as fundamentally different from the
one that we currently have, so neural networks being a huge part of that.
Yeah. I mean, that's very likely, because pattern matching is so much of what's going on.
And you can think of neural networks as processing information sequentially.
Yeah. I mean, you know, there is an important aspect to this: for example, you get
systems that translate, and they do a very good job, but they really don't know what they're
talking about. And for that, you would need an AI that has sensation, an AI that is in touch with the world.
Yes, self-awareness, and maybe even something that resembles consciousness, those kinds of ideas.
Certainly awareness of what's going on, so that the words have meaning, or
are in touch with some perception or some action.
Yeah. So that's a big thing for Yann, what he refers to as grounding in the physical space.
So we're talking about the same thing.
Yeah. But how you ground it... I mean, without grounding,
you get a machine that doesn't know what it's talking about,
because it is talking about the world, ultimately.
The question, the open question is what it means to ground. I mean, we're very
human centric in our thinking, but what does it mean for a machine to understand what it means to
be in this world? Does it need to have a body? Does it need to have a finiteness like we humans
have all of these elements? It's a very... no, you know, I'm not sure about
having a body, but having a perceptual system. Having a body would be very helpful too, I mean,
if you think about mimicking a human. But having perception, that seems to be essential,
so that you can accumulate knowledge about the world. You can imagine a human completely paralyzed,
and there's a lot that the human brain could learn with a paralyzed body.
So if we got a machine that could do that, that would be a big
deal. And then the flip side of that, something you see in children, and something that in the
machine learning world is called active learning, maybe, is being able to play with the world.
How important for developing system one or system two, do you think it is to play with the world?
To be able to interact with the world? Well, certainly a lot of what you learn
is learning to anticipate the outcomes of your actions. I mean, you can see how babies learn
it, you know, with their hands: how they learn to connect
the movements of their hands with something that clearly happens in the brain,
and the ability of the brain to learn new patterns. It's the kind of thing
that you get with artificial limbs: you connect it, and then people learn to operate the artificial
limb really impressively quickly, at least from what I hear. So we have a system
that is ready to learn the world through action. At the risk of going into way too mysterious a
land, what do you think it takes to build a system like that? Obviously, we're very far
from understanding how the brain works, but how difficult is it to build this mind of ours?
You know, I mean, I think that Yann LeCun's answer, that we don't know how many mountains
there are, I think that's a very good answer. I think that, you know, if you look at what
Ray Kurzweil is saying, that strikes me as off the wall. But I think people are much more
realistic than that, where actually Demis Hassabis is, and Yann is; the people who are actually doing
the work are fairly realistic, I think. To maybe phrase it another way, from a perspective not
of building it, but from understanding it, how complicated are human beings in the, in the
following sense. You know, I work with autonomous vehicles and pedestrians, so we tried to model
pedestrians. How difficult is it to model a human being, their perception of the world,
the two systems they operate under, sufficiently to be able to predict whether the pedestrian
is going to cross the road or not? I'm, you know, I'm fairly optimistic about that, actually, because
what we're talking about is a huge amount of information that every vehicle has, and that feeds
into one system, into one gigantic system. And so anything that any vehicle learns becomes part of
what the whole system knows. And with a system multiplier like that, there is a lot that you
can do. So human beings are very complicated, but, you know, the system is going to make mistakes,
and humans make mistakes. I think that they'll be able to... I think they are able to anticipate
pedestrians, otherwise a lot would happen. They're able to, you know, get into a
roundabout, into traffic. So they must be able to expect, to anticipate, how
people will react when they're sneaking in. And there's a lot of learning that's involved in that.
Currently, pedestrians are treated as things that cannot be hit, and they're not
treated as agents with whom you interact in a game-theoretic way. So I mean, it's a
totally open problem, and every time somebody tries to solve it, it seems to be harder than we
think. And nobody's really tried to seriously solve the problem of that dance. I'm not
sure if you've thought about the problem of pedestrians, but you're really putting your
life in the hands of the driver. You know, there is a dance; that part of the dance
would be quite complicated. But for example, when I cross the street and there is a vehicle
approaching, I look the driver in the eye. And I think many people do that. And, you know, that's
a signal that I'm sending. And I would be sending that signal to a machine, to an autonomous vehicle,
and it had better understand it, because it means I'm crossing. So there's another thing you do,
actually. I'll tell you what you do, because I've watched hundreds of hours
of video on this: you do that before you step into the street, and
when you step into the street, you actually look away. Yeah. Now, what is that? What that's saying is,
I mean, you're trusting that the car that hasn't slowed down yet will slow down. Yeah. And you're
telling him, yeah, I'm committed. I mean, this is like in a game of chicken. So I'm committed,
and if I'm committed, I'm looking away. So there you are; you just have to stop.
So the question is whether a machine that observes that needs to understand
mortality. Here, I'm not sure that it's got to understand so much as it's got to anticipate.
So here, but, you know, you're surprising me, because here I would think that maybe you
can anticipate without understanding, because I think this is clearly what's happening in playing
Go or in playing chess: a lot of anticipation and zero understanding. So I thought that
you didn't need a model of the human and a model of the human mind to avoid hitting pedestrians.
But you are suggesting that you do. Yeah, you do. And then it's a lot harder.
So I have a follow-up question to see where your intuition lies: it seems that
almost every robot-human collaboration system is a lot harder than people realize. So
do you think it's possible for robots and humans to collaborate successfully?
We talked a little bit about semi-autonomous vehicles, like the Tesla Autopilot,
but just about tasks in general. If, as we talked about, current neural networks are kind
of system one, do you think those same systems can borrow humans for system-two-type tasks
and collaborate successfully? Well, I think that in any system where humans and the machine
interact, the human will be superfluous within a fairly short time. That is, if the machine
has advanced enough so that it can really help the human, then it may not need the human for a
long time. Now, it would be very interesting if there are problems that for some reason the
machine cannot solve but that people could solve; then you would have to build into
the machine an ability to recognize that it is in that kind of problematic situation
and to call the human. That cannot be easy without understanding. That is, it must be very
difficult to program a recognition that you are in a problematic situation without understanding
the problem. That's very true. In order to understand the full scope of situations that
are problematic, you almost need to be smart enough to solve all those problems.
It's not clear to me how much the machine will need the human. I think the example of chess is
very instructive. I mean, there was a time at which Kasparov was saying that human-machine
combinations would beat everybody. Even Stockfish doesn't need people, and AlphaZero certainly
doesn't need people. The question is, just like you said, how many problems are like chess, and how
many problems are the ones that are not like chess? Well, every problem probably in the end
is like chess. The question is, how long is that transition period? I mean, that's a question I
would ask you: in terms of autonomous vehicles, just driving is probably a lot more complicated
than Go, to solve that. Yes. And that's surprising, because it's open. No, I mean, that's not surprising
to me, because there is a hierarchical aspect to this, which is recognizing a situation and then
within the situation bringing up the relevant knowledge. And for that hierarchical type of
system to work, you need a more complicated system than we currently have. A lot of people,
because as human beings this is probably one of the cognitive biases, think of driving as
pretty simple, because they think of their own experience. This is actually a big problem for
AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is
based on very limited knowledge, basically how hard it is for them to do the task, and then they
take it for granted. Maybe you can speak to that, because most people tell me driving is trivial,
and that humans, in fact, are terrible at driving. And I see that humans
are actually incredible at driving, and driving is really terribly difficult. So is that just
another element of the effects that you've described in your work on the psychology side?
No, I mean, I haven't really... I would say that my research has contributed nothing to
understanding the ecology, to understanding the structure of situations and the complexity of
problems. So all we know is that it's very clear that Go, it's endlessly complicated, but it's very
constrained, and in the real world there are far fewer constraints and many more potential
surprises. So that's obvious? Because it's not always obvious to people, right? So when you
think about it. Well, I mean, you know, people thought that reasoning was hard and perceiving
was easy, but, you know, they quickly learned that actually modeling vision was tremendously
complicated, whereas even proving theorems was relatively straightforward.
To push back on that a little bit, on the quickly part: it took several decades to learn
that, and most people still haven't learned that. I mean, our intuition... of course, AI researchers
have, but if you drift a little bit outside the specific AI field, the intuition is still perceptible.
Oh, yeah. No, I mean, that's true. The intuitions of the public haven't
changed radically, and they are there, as you said: they're evaluating the complexity of problems
by how difficult it is for them to solve the problems. And that has very little to do
with the complexity of solving them in AI. How do you think, from the perspective of an AI researcher,
do we deal with the intuitions of the public? I mean, arguably, the
combination of hype, investment, and public intuition is what led to the AI winters. I'm
sure the same could be applied to tech in general: the intuition of the public leads to media hype,
leads to companies investing in the tech, and then the tech doesn't make the companies money,
and then there's a crash. Is there a way to educate people, sort of to fight the,
let's call it, system one thinking? In general, no. I think that's the simple answer.
And it's going to take a long time before the understanding of what those systems can do
becomes public knowledge. And then the expectations... there are several aspects
that are going to be very complicated. The fact that you have a device that cannot explain itself
is a major, major difficulty. And we're already seeing that. I mean, this is really something
that is happening. It's happening in the judicial system: you have systems that are clearly
better at predicting parole violations than judges, but they can't explain their reasoning,
and so people don't want to trust them. We seem to, even in system one, use cues
to make judgments about our environment. So this explainability point, do you think humans
can explain stuff? No. I mean, there is a very interesting aspect of that.
Humans think they can explain themselves. So when you say something, and I ask you,
why do you believe that? Then reasons will occur to you. But actually, my own belief
is that in most cases, the reasons have very little to do with why you believe what you believe.
So that the reasons are a story that comes to your mind when you need to explain yourself.
But people traffic in those explanations. I mean, the human interaction depends
on those shared fictions and the stories that people tell themselves.
You just made me actually realize, and we'll talk about stories in a second,
and not to be cynical about it, but perhaps there's a whole movement of people trying to do
explainable AI. And really, we don't necessarily need to explain. AI doesn't need to explain itself.
It just needs to tell a convincing story. Yeah, absolutely. The story doesn't necessarily need
to reflect the truth. It just needs to be convincing. There's something to that.
You can say exactly the same thing in a way that sounds cynical or doesn't sound cynical, I mean.
The objective of having an explanation is to tell a story that will be acceptable to people.
And for it to be acceptable and to be robustly acceptable,
it has to have some elements of truth. But the objective is for people to accept it.
It's quite brilliant, actually. So, on the stories that we tell,
sorry to ask you the question that most people know the answer to, but
you talk about two selves in terms of how life is lived: the experiencing self and
the remembering self. Can you describe the distinction between the two?
Well, sure. I mean, there is an aspect of life that occasionally, most of the time we just live,
and we have experiences, and they're better and they're worse, and it goes on over time.
And mostly we forget everything that happens, or we forget most of what happens. Then occasionally,
when something ends, or at different points, you evaluate the past and you form a memory,
and the memory is schematic. It's not that you can roll a film of an interaction. You construct
in effect the elements of a story about an episode. So there is the experience, and there is the
story that is created about the experience. And that's what I call the remembering self. So I had the
image of two selves. So there is a self that lives, and there is a self that evaluates life.
Now, the paradox and the deep paradox in that is that we have one system or one self that
does the living, but the other system, the remembering self, is all we get to keep.
And basically, decision making and everything that we do is governed by our memories, not by
what actually happened. It's governed by the story that we told ourselves or by the story
that we're keeping. So that's the distinction. I mean, there's a lot of brilliant ideas about
the pursuit of happiness that come out of that. What are the properties of happiness which emerge
from the remembering self? There are properties of how we construct stories that are really important.
I studied a few, but a couple are really very striking. And one is that in stories,
time doesn't matter. There's a sequence of events or there are highlights or not.
And how long it took, you know, they lived happily ever after or three years later,
something. Time really doesn't matter. In stories, events matter, but time doesn't.
That leads to a very interesting set of problems because time is all we got to live. I mean,
you know, time is the currency of life. And yet, time is not represented, basically,
in evaluated memories. So that creates a lot of paradoxes that I've thought about.
Yeah, they're fascinating. But if you were to give advice on how one lives a happy life
based on such properties, what's the optimal?
You know, I gave up; I abandoned happiness research because I couldn't solve that problem.
I couldn't see... In the first place, it's very clear that if you do talk in terms
of those two selves, then what makes the remembering self happy and what makes the experiencing
self happy are different things. And I asked the question: suppose you're planning a vacation,
and you're just told that at the end of the vacation you'll get an amnesic drug, so you'll remember
nothing, and they'll also destroy all your photos, so there'll be nothing. Would you still go to the
same vacation? And it turns out we go on vacations in large part to construct memories;
not to have experiences, but to construct memories. And it turns out that the vacation
you would want for yourself if you knew you would not remember it is probably not the same vacation
you would want for yourself if you knew you would remember it. So I have no solution to these problems,
but clearly, those are big issues. And you've talked about that; you've talked about sort of
how many minutes or hours you spend thinking about the vacation. It's an interesting way to think about
it, because that's how you really experience the vacation outside of being in it. But there's
also a modern way, I don't know if you think about this or interact with it, a modern way to
magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks.
A lot of people live life for the picture that they take and post somewhere, and now thousands
of people, potentially millions, share in it. And then you can relive it even much
more than just those minutes. Do you think about that magnification much? You know, I'm too old
for social networks. I, you know, I've never seen Instagram. So I cannot really speak
intelligently about those things. I'm just too old. But it's interesting to watch the exact
effects you described. I think it will make a very big difference. And it will also
make a difference in that, I don't know whether it's clear, but in some ways
the devices that serve us supplant functions. So you don't have to remember phone numbers.
You really don't have to know facts. I mean, the number
of conversations I'm involved in where somebody says, well, let's look it up.
So in a way, it's made conversations... well, it means that it's much less important
to know things. It used to be very important to know things. This is changing.
So the requirements that we have for ourselves and for other people are changing
because of all those supports, and... I have no idea what Instagram does.
Well, I'll tell you, I mean, I wish my remembering self could just enjoy this
conversation, but I'll get to enjoy it even more by watching it and then talking
to others. It'll be about a hundred thousand people, as scary as it is to say, who will listen to or
watch this, right? It changes things. It changes the experience of the world: you seek out
experiences which could be shared in that way. And I haven't seen it, but it's the same effects that
you described. And I don't think the psychology of that magnification has been described yet,
because it's a new world. You know, the sharing... there was a time when people read
books and, and, and you could assume that your friends had read the same books that you read.
So there was kind of invisible sharing. There was a lot of sharing going on. And there was a lot
of assumed common knowledge and, you know, that was built in. I mean, it was obvious that you
had read the New York Times. It was obvious that you had read the reviews. I mean, so a lot was
taken for granted that was shared. And, you know, when there were three television
channels, it was obvious that you'd seen one of them, probably the same one. So sharing
was always there. It was just different.
At the risk of inviting mockery from you, let me say that I'm also a fan of Sartre and
Camus and the existentialist philosophers. I'm joking, of course, about mockery, but from the
perspective of the two selves, what do you think of the existentialist philosophy of life? So
trying to really emphasize the experiencing self as the proper way, or the best way, to live life.
I don't know enough philosophy to answer that, but, you know, the emphasis on
experience is also the emphasis in Buddhism. You just have got to experience things,
and not to evaluate, and not to pass judgment, and not to score, not to keep score. So
when you look at the grand picture of experience, do you think there's something to that,
that one of the ways to achieve contentment and maybe even happiness is letting go of
the things, the procedures, of the remembering self?
Well, yeah, I mean, I think, you know, if one could imagine a life in which people don't score
themselves, it feels as if that would be a better life; as if the self-scoring and the
how-am-I-doing kind of question is not a very happy thing to have. But I got out of that field
because I couldn't solve that problem, and that was because my intuition was that the experiencing
self, that's reality. But then it turns out that what people want for themselves is not experiences,
they want memories and they want a good story about their life. And so you cannot have a theory
of happiness that doesn't correspond to what people want for themselves. And when I
realized that this was where things were going, I really sort of left the field of research.
Do you think there's something instructive about this emphasis on reliving memories
for building AI systems? Currently, artificial intelligence systems
are more like the experiencing self, in that they react to the environment. There's some
pattern formation, like learning, and so on, but they really don't construct memories,
except in reinforcement learning, every once in a while, that you replay over and over.
Yeah. But, you know, that in principle would not be...
Do you think that's useful? Do you think it's a feature or a bug of human beings that we,
that we look back? Oh, I think that's definitely a feature. That's not a bug. I mean, you have to
look back in order to look forward. Without looking back, you couldn't
really intelligently look forward. You're looking for the echoes of the same kind of experience in
order to predict what the future holds. Yeah. Though Viktor Frankl, in his book Man's Search
for Meaning, I'm not sure if you've read it, describes his experience at the concentration
camps during World War II as a way to describe that finding and identifying a purpose in life,
a positive purpose in life, can save one from suffering. First of all, do you connect with
the philosophy that he describes there? Not really. I mean, I can really see
that somebody who has that feeling of purpose and meaning and so on, that could sustain you.
I in general don't have that feeling, and I'm pretty sure that if I were in a concentration
camp, I'd give up and die. You know, he's a survivor, and he survived with that.
And I'm not sure how essential to survival that sense is, but I do
know, when I think about myself, that I would have given up: oh, this isn't going anywhere.
And there is a sort of character that manages to survive in conditions like
that. And then because they survive, they tell stories, and it sounds as if they survived because
of what they were doing. We have no idea. They survived because of the kind of people that they
are, and the kind of people who survive tell themselves stories of a particular
kind. So I'm not... So you don't think seeking purpose is a significant driver in us? I mean,
it's a very interesting question, because when you ask people whether it's very important to
have meaning in their life, they say, oh, yes, that's the most important thing. But when you ask
people what kind of a day they had, and, you know, what were the experiences that they
remember, you don't get much meaning; you get social experiences. Then some people say that,
for example, in taking care of children, the fact that they are your
children and you're taking care of them makes a very big difference. I think that's entirely true.
But it's more because of a story that we're telling ourselves, which is a very different story when
we're taking care of our children than when we're taking care of other things. Jumping around a little
bit: you've done a lot of experiments. Let me ask you a question. Most of the work I do, for example,
is in the real world, but most of the clean, good science that you can do is in the lab. So,
on that distinction, do you think we can understand the fundamentals of human behavior through
controlled experiments in the lab? If we talk about pupil diameter, for example, it's much easier to
measure when you can control lighting conditions. Yeah, right. When we look at driving, lighting variation
destroys almost completely your ability to use pupil diameter. But in driving simulators, for, as I mentioned,
semi-autonomous or autonomous vehicles, we don't capture true, honest
human behavior in that particular domain. So what's your intuition? How much of human behavior
can we study in this controlled environment of the lab? A lot, but you'd have to verify it.
You know, that your conclusions are basically limited to the situation, to the experimental
situation. Then you have to make the big inductive leap to the real world. And that's the
flair; that's where the difference is, I think, between the good psychologists and others that are
mediocre: in the sense that your experiment captures something that's important and something
that's real, and others are just running experiments. So what is that like, the birth of an idea, to
its development in your mind, to something that leads to an experiment? Is that similar to maybe
what Einstein or a good physicist does? Is it your intuition; you basically use your intuition to build up?
Yeah, but I mean, you know, it's very skilled intuition. I mean, I just had that experience,
actually. I had an idea that turned out to be a very good idea a couple of days ago, and you
have a sense of that building up. So I'm working with a collaborator, and he essentially
was saying, you know, what are you doing? What's going on? And I couldn't
exactly explain it, but I knew this is going somewhere. But, you know, I've been around that
game for a very long time, and so you develop that anticipation that, yes, this is
worth following. There's something here. That's part of the skill. Is that something you can reduce
to words, in describing a process, in the form of advice to others? No. Follow your heart,
essentially. I mean, you know, it's like trying to explain what it's like to drive.
You've got to break it apart, and then you lose the experience.
You mentioned collaboration. You've written about your collaboration with
Amos Tversky; this is you writing: the twelve or thirteen years in which most of our work was joint
were years of interpersonal and intellectual bliss. Everything was interesting, almost
everything was funny, and there was the recurrent joy of seeing an idea take shape. So many times
in those years we shared the magical experience of one of us saying something which the other one
would understand more deeply than the speaker had done. Contrary to the old laws of information
theory, it was common for us to find that more information was received than had been sent.
I have almost never had the experience with anyone else. If you have not had it, you don't know
how marvelous collaboration can be. So let me ask a perhaps silly question:
how does one find and create such a collaboration? That may be like asking how one finds love,
but yeah... You have to be lucky. And I think you have to have the character
for that, because I've had many collaborations. I mean, none were as exciting as with Amos,
but I've had, and I'm having, very good ones. So it's a skill. I think I'm good at it.
Not everybody is good at it. And then it's the luck of finding people who are also good at it.
Is there advice, in some form, for a young scientist
who also seeks to violate this law of information theory?
I really think so much luck is involved. And, you know, those
really serious collaborations, at least in my experience, are a very personal experience.
And I have to like the person I'm working with. Otherwise, you know, there is that kind
of collaboration which is like an exchange, a commercial exchange: I'm giving this,
you give me that. But the real ones are interpersonal. They're between people who like
each other, who like making each other think, and who like the way that the other person
responds to your thoughts. You have to be lucky. Yeah. I mean, but I already noticed that even
just me showing up here, you quickly started digging into a particular problem I'm working on,
and already new information started to emerge. Is that a process, just the process of curiosity,
of talking to people about problems and seeing? I'm curious about anything to do with AI and
robotics, and, you know, I knew you were dealing with that. So I was curious.
Just follow your curiosity. Jumping around to the psychology front: the dramatic-sounding
terminology of the replication crisis, but really just the effect that, at times,
studies are not fully generalizable.
You are being polite. It's worse than that. But is it? I'm actually not fully familiar
with the degree of how bad it is. So what do you think is the source? Where do you think it comes from?
I think I know what's going on, actually. I mean, I have a theory about what's going on.
And what's going on is that there is, first of all, a very important distinction between
two types of experiments. One type is within subjects, so the same person has
two experimental conditions. And the other type is between subjects, where some people are in this
condition and other people are in that condition. They're different worlds. And between-subject
experiments are much harder to predict and much harder to anticipate. They're also more expensive,
because you need more people. So between-subject experiments is where the problem is; it's not
so much in within-subject experiments, it's really between. And there is a very good reason why the intuitions of researchers
about between subject experiments are wrong. And that's because when you are a researcher,
you are in a within subject situation. That is, you are imagining the two conditions and you see
the causality and you feel it. But in the between-subjects condition, people don't;
they live in one condition, and the other one is just nowhere. So our intuitions
are very weak about between-subject experiments. And that, I think, is something that people
haven't realized. In addition, because of that, we have no idea about the power of
experimental manipulations, because the same manipulation is much more powerful
when you are in the two conditions than when you live in only one condition.
And so experimenters have very poor intuitions about between-subject experiments. And
there is something else, which is very important, I think, which is that almost all psychological
hypotheses are true. That is, in the sense that, you know, directionally, if you have a hypothesis
that A really causes B, it's not true that A causes the opposite of B. Maybe A just
has very little effect, but hypotheses are mostly true, except mostly they're very weak. They're
much weaker than you think when you are imagining them. So the reason I'm excited about that
is that I recently heard about some friends of mine who essentially funded 53 studies
of behavioral change by 20 different teams of people, with a very precise objective of changing
the number of times that people go to the gym. And the success rate was zero;
not one of the 53 studies worked. Now, what's interesting about that is that those are the best
people in the field, and they have no idea what's going on. So they are not calibrated.
They think that it's going to be powerful because they can imagine it, but actually
it's just weak, because you're focusing on your manipulation and it feels powerful to you.
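To make that calibration gap concrete, here is a minimal simulation sketch; it is an editorial illustration with made-up numbers (the effect size, sample size, and noise levels are all assumptions), not something from the conversation. A weak effect that feels obvious when you imagine both conditions is routinely missed by a between-subjects experiment of the traditional size, while a within-subjects design, where each person serves as their own control, detects it far more often.

```python
# Hypothetical illustration of the between- vs within-subjects point.
# All numbers below are assumptions chosen for the sketch, not data from the episode.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect = 0.2      # a weak effect, in standard-deviation units (Cohen's d)
n = 40            # subjects per condition, a traditional sample size
trials = 5_000    # simulated replications of each experimental design

hits_between = hits_within = 0
for _ in range(trials):
    # Between subjects: different people in each condition.
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        hits_between += 1

    # Within subjects: the same people in both conditions, so stable
    # person-to-person differences cancel out in the paired comparison.
    trait = rng.normal(0.0, 1.0, n)               # stable individual differences
    cond_a = trait + rng.normal(0.0, 0.3, n)
    cond_b = trait + effect + rng.normal(0.0, 0.3, n)
    if stats.ttest_rel(cond_b, cond_a).pvalue < 0.05:
        hits_within += 1

print(f"power, between subjects: {hits_between / trials:.2f}")  # tends to come out well under 0.2
print(f"power, within subjects:  {hits_within / trials:.2f}")   # tends to come out around 0.8
```

The simulated manipulation is identical in both cases; only the design changes, which is roughly the sense in which a researcher's within-subject imagination overestimates what a between-subjects study will show.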
There's a thing that I've written about that's called the focusing illusion.
That is that when you think about something, it looks very important,
more important than it really is. More important than it really is. But if you don't see that
effect, the 53 studies, doesn't that mean you just report that? So what's, I guess,
the solution to that? Well, I mean, the solution is for people to trust their intuitions less
or to try out their intuitions beforehand. I mean, experiments have to be pre-registered, and
by the time you run an experiment, you have to be committed to it, and you have to run the
experiment seriously enough and in public. And so this is happening. And the interesting thing is
what happens before: how people prepare themselves and how they run pilot experiments.
It's going to change the way psychology is done, and it's already happening.
Do you have a hope... this might connect to the study sample size... do you have a hope for the
internet? Well, I mean, you know, this is really happening. MTurk: everybody is running experiments
on MTurk, and it's very cheap and very effective. Do you think that changes psychology, essentially?
Because you can now run 10,000 subjects? I mean, eventually it will. I can't put my finger on how
exactly, but that's been true in psychology: whenever
an important new method came in, it changed the field. So MTurk is really a method,
because it makes it very much easier to do some things. Undergrad
students will ask me, you know, how big a neural network should be for a particular problem.
So let me ask you an equivalent question: how many subjects does a study need
for it to have a conclusive result? Well, it depends on the strength of the effect.
If you're studying visual perception or the perception of color, many of the
classic results in visual and color perception were done on three or four people, and I think
one of them was colorblind, partly colorblind. But on vision, you know, it's highly
reliable; you don't need a lot of replications for some types of neurological
experiments. When you're studying weaker phenomena, and especially when you're
studying them between subjects, then you need a lot more subjects than people have been running.
And that's one of the things that are happening in psychology now: the power,
the statistical power of experiments is increasing rapidly. Does the between-subject problem
go away as the number of subjects goes to infinity? Well, I mean, you know, goes to infinity is
exaggerated, but the standard number of subjects for an experiment in psychology
was 30 or 40, and for a weak effect, that's simply not enough. You may need a couple of
hundred; I mean, it's that sort of order of magnitude.
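As a rough companion to those numbers, here is a standard power calculation; again an editorial sketch rather than anything from the conversation, showing how the required sample size per group for a between-subjects comparison grows as the effect gets weaker.

```python
# Hypothetical illustration: subjects needed per group for a two-sample t-test
# at 80% power and alpha = 0.05, for a few conventional effect sizes (Cohen's d).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.8, 0.5, 0.2):
    n_per_group = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {n_per_group:.0f} subjects per group")

# Roughly 26, 64, and 390 per group, respectively: the traditional 30-40 subjects
# only ever had adequate power for strong effects, and a weak effect needs
# a couple of hundred or more, as Kahneman says.
```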
What are the major disagreements in theories and effects that you've observed throughout your career that still stand today?
Well, you've worked in several fields, but what is still out there as a major
disagreement that pops into your mind? I've had one extreme experience of, you know,
controversy with somebody who really doesn't like the work that Amos Tversky and I did,
and he's been after us for 30 years or more, at least. Do you want to talk about it?
Well, I mean, his name is Gerd Gigerenzer. He's a well-known German psychologist. And that's the
one controversy which... it's been unpleasant. And no, I don't particularly want to talk about it.
But are there open questions, even in your own mind? Every once in a while,
you know, we talked about semi-autonomous vehicles, in my own mind I see what the data says, but I'm
also constantly torn. Do you have things where you or your studies have found something,
but you're also intellectually torn about what it means, and there are maybe disagreements
within your own mind about particular things. I mean, it's, you know, one of the things that
are interesting is how difficult it is for people to change their mind. Essentially,
you know, once they're committed, people just don't change their mind about anything that
matters. And that is surprising, but it's true even of scientists. So the controversy that I
described, you know, that's been going on for like 30 years, and it's never going to be resolved.
And you build a system, and you live within that system, and other systems of ideas look
foreign to you. And there is very little contact and very little mutual influence.
That happens a fair amount. Do you have hopeful advice or a message on that?
Thinking about science, thinking about politics, thinking about things that have impact on this
world. How can we change our mind? I think that, I mean, on things that matter,
which are political or religious, people just don't change their mind,
by and large, and there is very little that you can do about it.
What does happen is that if leaders change their minds... so, for example,
the American public doesn't really believe in climate change, doesn't take it very
seriously. But if some religious leaders decided this is a major threat to humanity,
that would have a big effect. So we have the opinions that we have not because we
know why we have them, but because we trust some people and we don't trust other people.
And so it's much less about evidence than it is about stories.
So one way to change your mind isn't at the individual level; it's that the leaders of
the communities you look up to change their stories, and therefore your mind changes with them.
So there's a guy named Alan Turing who came up with the Turing test. What do you think is a good test
of intelligence? Perhaps we're drifting into a topic that we're maybe just philosophizing about,
but what do you think is a good test of intelligence for an artificial intelligence
system? Well, the standard definition of, you know, of artificial general intelligence is that
it can do anything that people can do, and it can do it better. Yes. What we are seeing is that in
many domains you have domain-specific, you know, devices or programs or software, and they
beat people easily in a specified way. What we are very far from is that general ability,
general-purpose intelligence. So in machine learning, people are approaching something more
general. I mean, AlphaZero was much more general than AlphaGo, but it's still extraordinarily
narrow and specific in what it can do. So we're quite far from something that can in every domain
think like a human, except better. So the Turing test, natural language conversation, has been
criticized as too simplistic; it's easy to quote-unquote pass under the constraints
specified. What aspect of conversation would impress you if you heard it? Is it humor?
What would impress the heck out of you if you saw it in conversation? Yeah, I mean,
certainly wit would be impressive. And humor would be more impressive than just factual
conversation, which I think is easy. And allusions would be interesting. And metaphors would be
interesting. I mean, but new metaphors, not practiced metaphors. So there is a lot that,
you know, would be sort of impressive that it's completely natural in conversation,
but that you really wouldn't expect. Does the possibility of creating a human-level intelligence
or superhuman-level intelligence system excite you, scare you? I mean, how does it make
you feel? I find the whole thing fascinating. Absolutely fascinating. So exciting?
Exciting, and it's also terrifying, you know. But I'm not going to be around to see it. And so
I'm curious about what is happening now, but I also know that predictions about it are silly.
We really have no idea what it will look like 30 years from now. No idea.
Speaking of silly, bordering on the profound, let me ask the question of, in your view,
what is the meaning of it all, the meaning of life? These descendants of great apes that we are:
why, what drives us, as a civilization, as human beings, as a force behind everything
that you've observed and studied? Is there any answer, or is it all just a beautiful mess?
There is no answer that I can understand, and I'm not actively looking for one.
Do you think an answer exists? No, there is no answer that we can understand. I'm not qualified
to speak about what we cannot understand, but I know that we cannot understand
reality. And I mean, there are a lot of things that we can do. I mean, gravity waves,
that's a big moment for humanity. And when you imagine that ape being able to go back to the
Big Bang... but the why is bigger than us. The why is hopeless, really.
Danny, thank you so much. It was an honor. Thank you for speaking today. Thank you.
Thanks for listening to this conversation, and thank you to our presenting sponsor,
Cash App. Download it and use code LEXPODCAST. You'll get $10, and $10 will go to FIRST,
a STEM education nonprofit that inspires hundreds of thousands of young minds to become future
leaders and innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple
Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter.
And now let me leave you with some words of wisdom from Daniel Kahneman.
Intelligence is not only the ability to reason; it is also the ability to find relevant material
in memory and to deploy attention when needed. Thank you for listening, and hope to see you next time.