Lex Fridman Podcast


You've studied the human mind, cognition, language, vision, evolution, psychology,
from child to adult, from the level of individual to the level of our entire civilization.
So I feel like I can start with a simple multiple choice question.
What is the meaning of life? Is it A, to attain knowledge, as Plato said,
B, to attain power, as Nietzsche said, C, to escape death, as Ernest Becker said,
D, to propagate our genes, as Darwin and others have said, E, there is no meaning,
as the nihilists have said, F, knowing the meaning of life is beyond our cognitive capabilities,
as Steven Pinker said, based on my interpretation 20 years ago, and G, none of the above.
I'd say A comes closest, but I would amend that to attaining not only knowledge,
but fulfillment more generally, that is, life, health, stimulation, access to the
living, cultural, and social world. Now, this is our meaning of life. It's not the meaning of life,
if you were to ask our genes. Their meaning is to propagate copies of themselves,
but that is distinct from the meaning that the brain that they lead to sets for itself.
So, to you, knowledge is a small subset or a large subset?
It's a large subset, but it's not the entirety of human striving, because we also want to
interact with people, we want to experience beauty, we want to experience the richness of
the natural world, but understanding what makes the universe tick is way up there.
For some of us more than others, certainly for me, that's one of the top five.
So, is that a fundamental aspect? Are you just describing your own preference,
or is it a fundamental aspect of human nature to seek knowledge?
In your latest book, you talk about the power, the usefulness of rationality, reason, and so on.
Is that a fundamental nature of human beings, or is it something we should just strive for?
It's both. We're capable of striving for it because it is one of the things that
make us what we are, Homo sapiens, the wise man. We are unusual among animals in the degree to
which we acquire knowledge and use it to survive. We make tools, we strike agreements via language,
we extract poisons, we predict the behavior of animals, we try to get at the workings of plants,
and when I say we, I don't just mean we in the modern West, but we as a species everywhere,
which is how we've managed to occupy every niche on the planet, how we've managed to drive other
animals to extinction, and the refinement of reason in pursuit of human well-being, of
health, happiness, social richness, cultural richness, is our main challenge in the present.
That is, using our intellect, using our knowledge to figure out how the world works, how we work,
in order to make discoveries and strike agreements that make us all better off in the long run.
Right, and you do that almost undeniably and in a data-driven way in your recent book,
but I'd like to focus on the artificial intelligence aspect of things, and not just
artificial intelligence, but natural intelligence too. So 20 years ago, in the book you've written,
How the Mind Works, you conjecture, again, am I right to interpret things? You can correct me
if I'm wrong, but you conjecture that human thought in the brain may be a result of a
massive network of highly interconnected neurons. So from this interconnectivity emerges thought.
Now, compared to artificial neural networks, which we use for machine learning today,
is there something fundamentally more complex, mysterious, even magical about the biological
neural networks versus the ones we've been starting to use over the past 60 years and
that have become successful in the past 10? There is something a little bit mysterious about
the human neural networks, which is that each one of us who is a neural network knows that we
ourselves are conscious, conscious not in the sense of registering our surroundings or even
registering our internal state, but in having subjective first-person present tense experience.
That is, when I see red, it's not just different from green, but there's a redness to it that I
feel. Whether an artificial system would experience that or not, I don't know, and I don't think I
can know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally
indistinguishable from a human, would we attribute consciousness to it or ought we to attribute
consciousness to it? And that's something that it's very hard to know. But putting aside that
largely philosophical question, the question is: is there some difference between
the human neural network and the ones that we're building in artificial intelligence that will mean
we're, on the current trajectory, not going to reach the point where we've got a lifelike
robot indistinguishable from a human, because the way their so-called neural networks are organized
is different from the way ours are organized? I think there's overlap, but I think there are
some big differences. Current neural networks, current so-called deep learning systems,
are, in reality, not all that deep. That is, they are very good at extracting high-order statistical
regularities. But most of the systems don't have a semantic level, a level of actual understanding
of who did what to whom, why, where, how things work, what causes what else. Do you think that
kind of thing can emerge as it does? So artificial neural networks are much smaller in the number of
connections and so on than the current human biological networks. But do you think, sort of,
to go to consciousness, or to go to this higher-level semantic reasoning about things, do you
think that can emerge with just a larger network, with a more richly, weirdly interconnected network?
Let's separate out consciousness, because it's not clear that consciousness is even a matter of complexity.
A really weird one.
Yeah. Since we can ask the question of whether shrimp
are conscious, for example, and they're not terribly complex, but maybe they feel pain, let's just
put that part of it aside. But I think sheer size of a neural network is not enough to give it
structure and knowledge, but if it's suitably engineered, then why not? That is, we're neural
networks, natural selection did a kind of equivalent of engineering of our brains,
so I don't think there's anything mysterious in the sense that no system made out of silicon
could ever do what a human brain can do. I think it's possible in principle. Whether it'll ever
happen depends not only on how clever we are in engineering these systems, but on whether
we even want to, whether that's even a sensible goal. That is, you can ask the question, is there
any locomotion system that is as good as a human? Well, we kind of want to do better than a human,
ultimately, in terms of legged locomotion. There's no reason that humans should be our benchmark.
There are tools that might be better in some ways. It may be that we can't
duplicate a natural system because at some point it's so much cheaper to use the natural system that
we're not going to invest more brain power and resources. So for example, we don't really have
a substitute, an exact substitute, for wood. We still build houses out of wood. We still build
furniture out of wood. We like the look. We like the feel. Wood has certain properties that
synthetics don't. It's not that there's anything magical or mysterious about wood.
It's just that duplicating everything about wood is an extra step we just
haven't bothered with, because we have wood. Or take cotton. I'm wearing cotton clothing
now; it feels much better than polyester. It's not that cotton has something magic in it. And it's
not that, if there were, we couldn't ever synthesize something exactly like cotton;
it's just that at some point it's not worth it. We've got cotton. And likewise, in the case
of human intelligence, the goal of making an artificial system that is exactly like the human
brain is a goal that probably no one is going to pursue to the bitter end, I suspect, because
if you want tools that do things better than humans, you're not going to care whether they do
it the way humans do. So for example, for diagnosing cancer or predicting the weather, why set humans
as your benchmark? But in general, I suspect you also believe that even if the human should
not be the benchmark and we don't want to imitate humans in our systems, there's a lot to be learned
about how to create an artificial intelligence system by studying humans.
Yeah, I think that's right. In the same way that to build flying machines, we want to understand
the laws of aerodynamics, including how they apply to birds, but not mimic the birds; they're the same laws.
You have a view on AI, artificial intelligence and safety, that from my perspective
is refreshingly rational, or perhaps more importantly, has elements of positivity to it,
which I think can be inspiring and empowering as opposed to paralyzing.
For many people, including AI researchers, the eventual existential threat of AI is obvious,
not only possible, but obvious. And for many others, including AI researchers,
the threat is not obvious. So Elon Musk is famously in the highly concerned about AI camp,
saying things like AI is far more dangerous than nuclear weapons, and that AI will likely destroy
human civilization. So in February, you said that if Elon was really serious about
the threat of AI, he would stop building self-driving cars, which he's doing very
successfully as part of Tesla. Then he said, wow, if even Pinker doesn't understand the
difference between narrow AI like a car and general AI, when the latter literally has a
million times more compute power and an open-ended utility function, humanity is in deep trouble.
So first, what did you mean by the statement that Elon Musk should stop building self-driving
cars if he's deeply concerned? Not the last time that Elon Musk has fired off an
intemperate tweet. Well, we live in a world where Twitter has power.
Yes. Yeah, I think there are two kinds of existential threat that have been discussed
in connection with artificial intelligence, and I think that they're both incoherent.
One of them is a vague fear of AI takeover, that just as we subjugated animals and less
technologically advanced peoples, so if we build something that's more advanced than us,
it will inevitably turn us into pets or slaves or domesticated animal equivalents.
I think this confuses intelligence with a will to power, that it so happens that in
the intelligent system we are most familiar with, namely Homo sapiens, we are products of natural
selection, which is a competitive process, and so bundled together with our problem-solving
capacity are a number of nasty traits like dominance and exploitation and maximization
of power and glory and resources and influence. There's no reason to think that sheer problem
solving capability will set that as one of its goals. Its goals will be whatever we set its goals
as, and as long as someone isn't building a megalomaniacal artificial intelligence,
there's no reason to think that it would naturally evolve in that direction.
Now, you might say, well, what if we gave it the goal of maximizing its own power source?
That's a pretty stupid goal to give an autonomous system. You don't give it that goal.
I mean, that's just self-evidently idiotic.
So, if you look at the history of the world, there have been a lot of opportunities where
engineers could have instilled destructive power in a system, and they chose not to, because that's the
natural process of engineering.
Well, except for weapons. I mean, if you're building a weapon, its goal is to destroy people,
and so I think there are good reasons to not build certain kinds of weapons.
I think building nuclear weapons was a massive mistake.
You do. You think... So, maybe pause on that because that is one of the serious threats.
Do you think that it was a mistake in the sense that it should have been stopped early on,
or do you think it's just an unfortunate event of invention that this was invented?
Well, it's hard...
Do you think it's possible to stop, I guess, is the question I'm...
It's hard to rewind the clock because, of course, it was invented in the context of World War II,
and the fear that the Nazis might develop one first.
Then once it was initiated for that reason, it was hard to turn off, especially since
winning the war against the Japanese and the Nazis was such an overwhelming goal of every
responsible person that there's just nothing that people wouldn't have done then to ensure victory.
It's quite possible if World War II hadn't happened that nuclear weapons wouldn't have
been invented. We can't know, but I don't think it was by any means a necessity,
any more than some of the other weapons systems that were envisioned but never implemented,
like planes that would disperse poison gas over cities like crop dusters,
or systems to try to create earthquakes and tsunamis in enemy countries to weaponize the
weather, weaponize solar flares, all kinds of crazy schemes that we thought the better of.
I think analogies between nuclear weapons and artificial intelligence are fundamentally
misguided because the whole point of nuclear weapons is to destroy things.
The point of artificial intelligence is not to destroy things.
So the analogy is misleading.
So there's two types of artificial intelligence threat you mentioned.
The first is one that gets highly intelligent or power-hungry.
Yeah. In a system that we design ourselves, we give it the goals, and goals are external to the
means to attain the goals. If we don't design an artificially intelligent system to maximize
dominance, then it won't maximize dominance. It's just that we're so familiar with homo sapiens
where these two traits come bundled together, particularly in men, that we are apt to confuse
high intelligence with a will to power, but that's just an error.
The other fear is that we'll be collateral damage, that we'll give artificial intelligence a goal,
like make paper clips, and it will pursue that goal so brilliantly that before we can stop it,
it turns us into paper clips. We'll give it the goal of curing cancer, and it will turn us into
guinea pigs for lethal experiments, or give it the goal of world peace, and its conception of world
peace is no people, therefore no fighting, and so it will kill us all. Now, I think these are utterly
fanciful. In fact, I think they're actually self-defeating. They, first of all, assume that
we're going to be so brilliant that we can design an artificial intelligence that can cure cancer,
but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't
kill us in the process. And it assumes that the system will be so smart that it can cure cancer,
but so idiotic that it can't figure out that what we mean by curing cancer is not killing everyone.
So I think that the collateral damage scenario, the value alignment problem,
is also based on a misconception. So one of the challenges, of course, is that we don't know how to build
either system currently, nor are we even close to knowing. Of course, those things can change
overnight, but at this time, theorizing about it is very challenging in either direction,
so that's probably at the core of the problem: without the ability to reason about the real
engineering details at hand, your imagination runs away with things. Exactly. But let me sort
of ask, what do you think was the motivation and the thought process of Elon Musk? I build
autonomous vehicles. I study autonomous vehicles. I study Tesla Autopilot. I think it is currently one of the
greatest large-scale applications of artificial intelligence in the world. It has a
potentially very positive impact on society. So how does a person who's creating this very
good, quote unquote, narrow AI system also seem to be so concerned about this other general AI?
What do you think is the motivation there? What do you think is the thing?
Well, you probably have to ask him, and he is notoriously flamboyant, impulsive, as we have
just seen, to the detriment of his own goals and the health of his company. So I don't know what's
going on in his mind. You probably have to ask him. But I don't think the distinction between
special purpose AI and so-called general AI is relevant, in the sense that
even a special purpose AI is not going to do anything conceivable in order to attain a goal. All
engineering systems are designed to trade off across multiple goals. When we build cars in the
first place, we didn't forget to install brakes because the goal of a car is to go fast. It
occurred to people, yes, you want it to go fast, but not always. So you build in brakes, too.
Likewise, if a car is going to be autonomous and we program it to take the shortest route to the
airport, it's not going to take the diagonal and mow down people and trees and fences because
that's the shortest route. That's not what we mean by the shortest route when we program it,
and that's just what an intelligent system is by definition. It takes into account multiple
constraints. The same is true, in fact, even more true, of so-called general intelligence.
That is, if it's genuinely intelligent, it's not going to pursue some goal single-mindedly,
omitting every other consideration and collateral effect. That's not artificial and general
intelligence. That's artificial stupidity. I agree with you, by the way, on the promise
of autonomous vehicles for improving human welfare. I think it's spectacular, and I'm
surprised at how little press coverage notes that in the United States alone,
something like 40,000 people die every year on the highways,
vastly more than are killed by terrorists. We spent a trillion dollars on a war to combat
deaths by terrorism, about half a dozen a year. Whereas, year in and year out, 40,000 people are
massacred on the highways, which could be brought down to very close to zero. So I'm with you on the
humanitarian benefit. Let me just mention that as a person who's building these cars,
it is a little bit offensive to me to say that engineers would be careless enough not to engineer
safety into systems. I often stay up at night thinking about those 40,000 people that are dying,
and everything I try to engineer is to save those people's lives. Every new invention that I'm
super excited about, everything new in the deep learning literature and at the CVPR and NIPS conferences,
everything I'm super excited about is grounded in making things safe and helping people. So I just don't
see how that trajectory can all of a sudden slip into a situation where intelligence will be highly
negative. You and I certainly agree on that, and I think that's only the beginning of the
potential humanitarian benefits of artificial intelligence. There's been enormous attention to
what are we going to do with the people whose jobs are made obsolete by artificial intelligence,
but very little attention given to the fact that the jobs that are going to be made obsolete are
horrible jobs. The fact that people aren't going to be picking crops and making beds and driving
trucks and mining coal, these are soul-deadening jobs. We have a whole literature sympathizing
with the people stuck in these menial, mind-deadening, dangerous jobs. If we can eliminate them, this is
a fantastic boon to humanity. Now granted, you solve one problem and there's another one, namely
how do we get these people a decent income, but if we're smart enough to invent machines that can
make beds and put away dishes and handle hospital patients, I think we're smart enough to figure
out how to redistribute income to apportion some of the vast economic savings to the human beings
who will no longer be needed to make beds. Okay, Sam Harris says that it's obvious that
eventually AI will be an existential risk. He's one of the people who says it's obvious. We don't
know when, the claim goes, but eventually it's obvious, and because we don't know when, we should
worry about it now. It's a very interesting argument in my eyes. So how do we think about
time scale? How do we think about existential threats when we know so little
about the threat? Unlike nuclear weapons, perhaps, we know so little about this particular threat that it could happen
tomorrow, right? But very likely it won't. It's likely to be 100 years away. So how do we ignore
it? How do we talk about it? Do we worry about it? How do we think about those? What is it?
A threat that we can imagine. It's within the limits of our imagination,
but not within the limits of our understanding to accurately predict it.
But what is the it that we're afraid of?
Sorry, AI being the existential threat.
How? Enslaving us or turning us into paper clips?
I think the most compelling from the Sam Harris perspective would be the paper clip situation.
Mm hmm. Yeah, I just think it's totally fanciful. I mean, don't build such a system;
don't give it that goal. First of all, the code of engineering is that you don't implement a system with
massive control before testing it. Now, perhaps the culture of engineering will radically change.
Then I would worry. But I don't see any signs that engineers will suddenly do idiotic things,
like putting an electrical power plant under the control of a system that they haven't tested first.
Also, all of these scenarios not only imagine an almost magically powered intelligence,
including things like curing cancer, which is probably an incoherent goal because
there are so many different kinds of cancer, or bringing about world peace. I mean, how do you even
specify that as a goal? But the scenarios also imagine some degree of control of every molecule
in the universe, which not only is itself unlikely, but we would not start to connect
these systems to infrastructure without testing, as we would with any kind of engineering
system. Now, maybe some engineers will be irresponsible, and we need regulatory and
legal responsibility implemented so that engineers don't do
things that are stupid by their own standards. But I've never seen enough of a
plausible scenario of existential threat to devote large amounts of brain power
to forestall it. So you believe in the power, en masse, of engineering and, as you argue
in your latest book, of reason and science, to be the very thing that guides the development
of new technology so it's safe and also keeps us safe.
Yeah, granted the same culture of safety that currently is part of the
engineering mindset for airplanes, for example. So yeah, I don't think that
should be thrown out the window, with untested all-powerful systems
suddenly implemented, but there's no reason to think they will be. And in fact, if you look at the
progress of artificial intelligence, it's been impressive,
especially in the last 10 years or so, but the idea that suddenly there'll be a step function,
that all of a sudden, before we know it, it will be all-powerful, that there'll be some
kind of recursive self-improvement, some kind of foom, is also fanciful. Certainly not by
the technology that now impresses us, such as deep learning, where you train something
on hundreds of thousands or millions of examples. There are not hundreds of thousands of
problems of which curing cancer is a typical example. And so the kind of techniques
that have allowed AI to improve in the last five years are not the kind that are going to lead to
this fantasy of exponential sudden self-improvement. So I think it's
kind of magical thinking. It's not based on our understanding of how AI actually works.
Now give me a chance here. So you said fanciful, magical thinking. In his TED talk, Sam Harris
says that thinking about AI killing all of human civilization is somehow fun, intellectually.
Now I have to say, as a scientist and engineer, I don't find it fun, but when I'm having a beer with
my non-AI friends, there is indeed something fun and appealing about it. Like talking about
an episode of Black Mirror, or considering what we'd do if we were just told a large meteor is headed towards Earth, something like this. And
can you relate to this sense of fun? And do you understand the psychology of it?
Yeah, that's a good question. I personally don't find it fun. I find
it, actually, kind of a waste of time, because there are genuine threats that we
ought to be thinking about, like pandemics, like cybersecurity vulnerabilities,
like the possibility of nuclear war, and certainly climate change.
This is enough to fill many conversations. And I think
Sam did put his finger on something, namely that there is a community, sometimes
called the rationality community, that delights in using its brain power to come up with scenarios
that would not occur to mere mortals, to less cerebral people. So there is a kind of
intellectual thrill in finding new things to worry about that no one has worried about yet.
I actually think, though, that not only is it a kind of fun that doesn't give me
particular pleasure, but there can be a pernicious side to it,
namely that you overcome people with such dread, such fatalism, that there are so many ways
to die, to annihilate our civilization, that we may as well enjoy life while we can.
There's nothing we can do about it. If climate change doesn't do us in, then runaway robots will.
So let's enjoy ourselves now. We've got to prioritize. We have to
look at threats that are close to certainty, such as climate change, and distinguish those from ones
that are merely imaginable but with infinitesimal probabilities. And we have to take into
account people's worry budget. You can't worry about everything. And if you sow dread and fear
and terror and fatalism, it can lead to a kind of numbness. Well, these problems
are overwhelming and the engineers are just going to kill us all, so let's either destroy
the entire infrastructure of science and technology, or let's just enjoy life while we can.
So there's a certain line of worry, and I'm worried about a lot of things in engineering.
There's a certain line of worry that, when you cross it, becomes
paralyzing fear as opposed to productive fear. And that's kind of what you're highlighting.
That's exactly right. And we know that human effort is not well calibrated
against risk, because a basic tenet of cognitive psychology is that perception of
risk, and hence perception of fear, is driven by imaginability, not by data. And so we
misallocate vast amounts of resources to avoiding terrorism, which kills on average about six
Americans a year, with the one exception of 9/11. We invade countries, we invent entire new
departments of government with massive expenditures of resources and lives to defend
ourselves against a trivial risk. Whereas guaranteed risks, and you mentioned one of
them, traffic fatalities, and even risks that are not here but are
plausible enough to worry about, like pandemics, like nuclear war, receive far too little
attention. In presidential debates, there's no discussion of how to minimize the risk of
nuclear war, but lots of discussion of terrorism, for example. And so I think it's
essential to calibrate our budget of fear, worry, concern, and planning to the actual probability
of harm. Yep. So let me ask this question then. Speaking of imaginability,
you said that it's important to think about reason. And one of my favorite people
who likes to dip into the outskirts of reason, through fascinating exploration of his
imagination, is Joe Rogan. Oh yes. Who used to believe a lot of
conspiracies and through reason has stripped away a lot of his beliefs in that way. So it's
fascinating actually to watch him, through rationality, kind of throw away the ideas of
Bigfoot and 9/11. I'm not sure exactly; I don't know what he believes in. Yes. Okay.
Right. No, he's become a real force for good. Yep. So you were on the Joe Rogan
podcast in February and had a fascinating conversation, but as far as I remember,
didn't talk much about artificial intelligence. I will be on his podcast in a couple of weeks.
Joe is very much concerned about the existential threat of AI. I'm not sure if you're aware, but
this is why I was hoping that you would get into that topic. And in this way,
he represents quite a lot of people who look at the topic of AI from a 10,000-foot level.
So as an exercise in communication, you said it's important to be rational and reason
about these things. Let me ask: if you were to coach me as an AI researcher on how to speak
to Joe and the general public about AI, what would you advise? Well, the short answer
would be to read the sections that I wrote in Enlightenment Now about AI, but a longer
answer would be, I think, to emphasize, and I think you're very well positioned as an engineer
to remind people about the culture of engineering, that it really is safety oriented. In
another discussion in Enlightenment Now, I plot rates of accidental death from various
causes: plane crashes, car crashes, occupational accidents, even death by lightning
strikes. And they all plummet, because the culture of engineering is how do you squeeze out
the lethal risks. Death by fire, death by drowning, death by asphyxiation,
all of them drastically declined because of advances in engineering that, I've got to say,
I did not appreciate until I saw those graphs. And it is exactly because people like you
stop and think, oh my God, is what I'm inventing likely to hurt
people, and deploy ingenuity to prevent that from happening. Now, I'm not an engineer, although
I spent 22 years at MIT, so I know something about the culture of engineering. My understanding is
this is the way you think if you're an engineer, and it's essential that that culture not be
suddenly switched off when it comes to artificial intelligence. So, I mean, that
could be a problem, but is there any reason to think it would be switched off?
I don't think so. But for one, there are not enough engineers speaking up for this,
for the excitement, for the positive view of human nature and of what we're trying to create,
the positivity: everything we try to invent is trying to do good for the world.
But let me ask you about the psychology of negativity. It seems just objectively, not
considering the topic, it seems that being negative about the future makes you sound smarter
than being positive about the future, regardless of topic. Am I correct in this observation? And
if so, why do you think that is? Yeah, I think there is that
phenomenon that, as Tom Lehrer, the satirist, said, always predict the worst and you'll be
hailed as a prophet. It may be part of our overall negativity bias. We are, as a species,
more attuned to the negative than the positive. We dread losses more than we enjoy gains. And
that might open up a space for prophets to remind us of harms and risks and
losses that we may have overlooked. So I think there is that
asymmetry. So you've written some of my favorite books, all over the place. Starting from
Enlightenment Now to The Better Angels of Our Nature, The Blank Slate, How the Mind Works,
the one about language, The Language Instinct. Bill Gates, a big fan too, said of your most
recent book that it's his new favorite book of all time. So for you as an author,
what was the book, early on in your life, that had a profound impact on the way you saw the world?
Certainly this book, Enlightenment Now, was influenced by David Deutsch's
The Beginning of Infinity, a rather deep reflection on knowledge and the power of
knowledge to improve the human condition, with bits of wisdom such as that
problems are inevitable, but problems are solvable given the right knowledge, and that
solutions create new problems that have to be solved in their turn. That's, I think, a kind
of wisdom about the human condition that influenced the writing of this book. There are some books that
are excellent but obscure, some of which I have on a page of my website. I read a book
called A History of Force, self-published by a political scientist named James Payne, on the
historical decline of violence, and that was one of the inspirations for The Better Angels of Our
Nature. What about early on? If you look back, when you were maybe a teenager?
I loved a book called One Two Three... Infinity. When I was a young adult, I read that book by
George Gamow, the physicist, which had very accessible and humorous explanations of relativity,
of number theory, of dimensionality, of higher, multiple-dimensional spaces,
in a way that I think is still delightful 70 years after it was published. I liked the
Time-Life science series; these were books that would arrive every month, that my mother
subscribed to, each one on a different topic. One would be on electricity, one would be on,
you know, forests, one would be on evolution, and then one was on the mind. And I was just
intrigued that there could be a science of mind. That book I would
cite as an influence as well. Then later on, you fell in love with the idea of studying the
mind. Was that the thing that grabbed you? It was one of the things, I would say.
I read, as a college student, the book Reflections on Language by Noam Chomsky, who
spent most of his career here at MIT. Richard Dawkins' two books, The Blind Watchmaker and The
Selfish Gene, were enormously influential, mainly for the content, but also for the
writing style: the ability to explain abstract concepts in lively prose. Stephen
Jay Gould's first collection, Ever Since Darwin, is also an excellent example of lively writing.
George Miller, a psychologist that most psychologists are familiar with,
came up with the idea that human memory has a capacity of seven plus or minus two chunks.
That's probably his biggest claim to fame. But he wrote a couple of books on language
and communication that I read as an undergraduate, again beautifully written and intellectually
deep. Wonderful. Steven, thank you so much for taking the time today. My pleasure. Thanks a lot,
Lex.