Let me ask you this question: whether it's the Bell Curve or any research on race differences,
can that be used to increase the amount of racism in the world?
Can that be used to increase the amount of hate in the world?
My sense is there are such enormous reservoirs of hate and racism that have
nothing to do with scientific knowledge or the data that speak against that,
that, no, I don't want to give racist groups veto power over what scientists study.
The following is a conversation with Richard Haier on the science of human intelligence.
This is a highly controversial topic, but a critically important one for understanding
the human mind. I hope you will join me in not shying away from difficult topics like this,
and instead, let us try to navigate it with empathy, rigor, and grace.
If you're watching this on video now, I should mention that I'm recording this introduction
in an undisclosed location somewhere in the world. I'm safe and happy, and life is beautiful.
This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description,
and now, dear friends, here's Richard Haier.
What are the measures of human intelligence, and how do we measure it?
Everybody has an idea of what they mean by intelligence. In the vernacular, what I mean
by intelligence is just being smart. How well you reason, how well you figure things out,
what you do when you don't know what to do. Those are just kind of everyday common sense
definitions of how people use the word intelligence. If you want to do research on intelligence,
measuring something that you can study scientifically is a little trickier,
and what almost all researchers who study intelligence use is the concept called
the G-Factor, general intelligence. That is what is common. That is a mental ability
that is common to virtually all tests of mental abilities.
What's the origin of the term G-Factor, by the way? It's such a funny word for such a
fundamental human thing.
The general factor really started with Charles Spearman, and he noticed, this is like,
boy, more than a hundred years ago, he noticed that when you tested people with different tests,
all the tests were correlated positively. He was looking at student exams and things,
and he invented the correlation coefficient, essentially. When he used it to look at student
performance on various topics, he found that all the scores were correlated with each other,
and they were all positive correlations. He inferred from this that there must be some common
factor that was irrespective of the content of the test.
Positive correlation means if you do well on the first test, you're likely to do well on the
second test, and presumably that holds for tests across even disciplines, so not within
subject, but across subjects. That's where the general comes in. Something about general
intelligence. When you were talking about measuring intelligence and trying to figure out
something difficult about this world and how to solve the puzzles of this world,
that means generally speaking, not some specific test, but across all tests.
Absolutely right. People get hung up on this because they say, well, what about the ability
to do X? Isn't that independent? They say, I know somebody who's very good at this,
but not so good at this other thing. There are a lot of examples like that,
but it's a general tendency, so exceptions really don't disprove it. Your everyday experience
is not the same as what the data actually show. Your everyday experience, when you say, oh,
I know someone who's good at X, but not so good at Y, that doesn't contradict the general statement:
he's not as good at Y, but he's not the opposite. It's not a negative correlation.
Okay. So with our anecdotal data, I know a guy who's really good at solving some kind of
visual thing, that's not sufficient for us to understand the actual depths of that person's
intelligence. This idea of the G factor, how much evidence is there? How strongly,
across the decades that this idea has been around, has it held up that there is
a universal sort of horsepower of intelligence that's underneath all of it, all the different
tests we do to try to get at this thing in the depths of the human mind, a universal,
stable measure of a person's intelligence? You used a couple of words in there, stable.
We have to be precise with words. I was hoping we could get away with being poetic.
We can. There's a lot about research in general, not just intelligence research that is poetic.
Science has a poetic aspect to it. And good scientists are very intuitive. They're not just,
hey, these are the numbers. You have to kind of step back and see the big picture. When it comes to
intelligence research, you asked, how well has this general concept held up? And I think I can say,
without fear of being empirically contradicted, that it is the most replicated finding in all
of psychology. Now, some cynics may say, well, big deal, psychology, we all know there's a
replication crisis in psychology, and a lot of this stuff doesn't replicate. That's all true.
There is no replication crisis when it comes to studying the existence of this general factor.
Let me tell you some things about it. It looks like it's universal that you find it in all cultures.
The way you find it, step back one step, the way you find it is to give a battery of mental tests.
What battery you choose, take a battery of any mental test you want, give it to a large number
of diverse people, and you will be able to extract statistically the commonality among
all those tests. It's done by a technique called factor analysis. People think that this may be
a statistical artifact of some kind. It is not a statistical artifact.
What is factor analysis?
Factor analysis is a way of looking at a big set of data, looking at the correlations among
the different test scores, and then finding empirically the clusters of scores that go together.
There are different factors. If you have a bunch of mental tests, there may be a verbal factor,
there may be a numerical factor, there may be a visual spatial factor, but those factors
have variance in common with each other. That's what's common among all the tests,
and that's what gets labeled the G factor. If you give a diverse battery of mental tests
and you extract a G factor from it, that factor usually accounts for around half of the variance.
It's the single biggest factor, but it's not the only factor, but it is the most reliable,
it is the most stable, and it seems to be very much influenced by genetics.
It's very hard to change the G factor with training or drugs or anything else.
We don't know how to increase the G factor.
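To make the factor-analysis idea concrete, here is a minimal simulation sketch in Python, a toy example of my own and not something from the conversation: test scores are generated from a single latent ability, a general factor is extracted as the first factor of the correlation matrix, and two random halves of the battery yield g estimates that correlate highly, matching the empirical pattern described later in the conversation.

```python
# Toy simulation of a single-factor ("g") model: 24 hypothetical tests that each
# partly reflect one latent ability plus noise. All names and numbers here are
# illustrative assumptions, not real test data.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5000, 24
g = rng.normal(size=n_people)                        # latent general ability
loadings = rng.uniform(0.5, 0.8, size=n_tests)       # how strongly each test reflects g
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

def extract_g(test_scores):
    """First principal factor of the correlation matrix, used here as a g estimate."""
    corr = np.corrcoef(test_scores, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues in ascending order
    weights = eigvecs[:, -1]                         # direction of the largest factor
    return test_scores @ weights, eigvals[-1] / corr.shape[0]

g_full, variance_share = extract_g(scores)
print(f"share of variance from the first factor: {variance_share:.2f}")  # roughly 0.4-0.5

# g extracted from two random, non-overlapping 12-test batteries
perm = rng.permutation(n_tests)
g_a, _ = extract_g(scores[:, perm[:12]])
g_b, _ = extract_g(scores[:, perm[12:]])
r = abs(np.corrcoef(g_a, g_b)[0, 1])                 # sign of a factor is arbitrary
print(f"correlation between the two g estimates: {r:.2f}")               # typically ~0.85-0.9
```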
Okay. You said a lot of really interesting things there.
So first, just to get people used to it in case they're not familiar with this idea,
G factor is what we mean. So often there's this term used, IQ, and the way IQ is used
in regular conversation, people really mean the G factor. By IQ, we mean
intelligence, and by intelligence, we mean general intelligence, and general intelligence
in the human mind, from a psychology perspective, from a serious, rigorous scientific perspective,
actually means the G factor. So G factor equals intelligence, just in this conversation, to
define terms. Okay. So there's this stable thing called the G factor. You said factor
many times, meaning a measure that could potentially be reduced to a single
number across the different factors you mentioned. And you said it accounts for half,
half-ish. Accounts for half-ish of what? Of the variance across the different set of tests.
So if, for some reason, you do well on some set of tests, what does that mean? That means
there are some unique capabilities outside of the G factor that might account for that. And what are
those? What else is there besides the raw horsepower of the engine inside your mind that
generates intelligence? There are test taking skills. There are specific abilities. Someone might be
particularly good at mathematical things, mathematical concepts, even simple arithmetic;
some people are much better at it than others. You might know people who can memorize well, and short-term
memory is another component of this. Short-term memory is one of the cognitive processes
that's most highly correlated with the G factor. So all those things like memory,
test taking skills account for variability across the test performances. So you can
run but you can't hide from the thing that God gave you, the genetics. So that G factor,
science says that G factor is there. Each one of us has a G factor. Some have more than others.
I'm getting uncomfortable already. Well, IQ is a score. An IQ score is a very good estimate
of the G factor. You can't measure G directly. There's no direct measure. You estimate it from
these statistical techniques. But an IQ score is a good estimate. Why? Because a standard IQ test is
a battery of different mental abilities. You combine it into one score, and that score is
highly correlated with the G factor, even if you get better scores on some subtests than others.
Because again, it's what's common to all these mental abilities.
So a good IQ test, and I'll ask you about that, but a good IQ test tries to compress down
on that battery of tests, like tries to get a nice battery, the nice selection of variable tests
into one test. And so in that way, it sneaks up on the G factor. And that's another interesting
thing about the G factor. Now, first of all, you have a great book on the neuroscience of
intelligence. You have a great course, which is where I first learned this; you're a great teacher.
Let me just... Thank you. Your course at The Teaching Company, I hope I'm saying that correctly.
The Intelligent Brain. The Intelligent Brain is where I first
heard about this G factor, this mysterious thing that lurks in the darkness that we cannot
quite shine a light on, we're trying to sneak up on. So the fact that there's this measure,
stable measure of intelligence, we can't measure directly. But we can come up with a battery test
or one test that includes a battery of variable type of questions that can reliably or attempt to
estimate in a stable way that G factor. That's a fascinating idea. So for me as an AI person,
it's fascinating. It's fascinating there's something stable like that about the human mind,
especially if it's grounded in genetics. It's both fascinating that as a researcher of the human
mind and all the human psychological, sociological, ethical questions that started rising,
it makes me uncomfortable. But truth can be uncomfortable.
You know, I get that a lot about being uncomfortable talking about this. Let me go back and just say
one more empirical thing. It doesn't matter which battery of tests you use. So there are countless
tests. You can take any 12 of them at random, extract a G factor, and another 12 at random,
and extract a G factor. And those G factors will be highly correlated, like over 0.9 with each
other. So it is ubiquitous; it doesn't depend on the content of the test, is what I'm trying to say.
It is general among all those tests of mental ability. And mental abilities include things like
playing poker. Your skill at poker is not unrelated to G. Your skill at anything that
requires reasoning and thinking, anything from spelling, arithmetic, more complex things.
This concept is ubiquitous. And when you do batteries of tests in different cultures,
you get the same thing. So this says something interesting about the human mind,
that it's a computer that's designed to be general. So that means it's not easily
made specialized. Meaning, if you're going to be good at one thing... Miyamoto Musashi has this
quote, he's an ancient warrior, famous for The Book of Five Rings in the martial arts world.
And the quote goes, if you know the way broadly, you will see it in everything. Meaning, if you
do one thing well, it's going to generalize to everything. And that's an interesting
thing about the human mind. So that's what the G factor reveals. Okay, so what's the difference
if you can elaborate a little bit further between IQ and G factor, just because it's a source of
confusion for people. And IQ is a score. People use the word IQ to mean intelligence. But IQ
has a more technical meaning for people who work in the field. And it's an IQ score, a score on a
test that estimates the G factor. And the G factor is what's common among all these tests of mental
ability. So if you think about, it's not a Venn diagram, but I guess you could make a Venn diagram
out of it. But the G factor would be really at the core, what's common to everything.
And IQ, what IQ scores do is they allow a rank order of people on the score. And this is what
makes people uncomfortable. This is where there's a lot of controversy about whether IQ tests are
biased toward any one group or another. And a lot of the answers to these questions are very
clear, but they also have a technical aspect of it. That's not so easy to explain.
Well, we'll talk about the fascinating and the difficult things about all of this. But
so by the way, when you say rank order, that means you get a number, and you can now compare
people. Like you could say that this other person is more intelligent than me.
Well, what you can say is IQ scores are interpreted really as percentiles. So that
if you have an IQ of 140 and somebody else has 70, the metric is such that you cannot say the
person with an IQ of 140 is twice as smart as a person with an IQ of 70. That would require
a ratio scale with an absolute zero. Now, you may think you know people with zero intelligence,
but in fact, there is no absolute zero on an IQ scale. It's relative to other people.
So relative to other people, somebody with an IQ score of 140 is in the upper less than 1%,
whereas somebody with an IQ of 70 is two standard deviations below the mean.
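For readers who want to check the percentile arithmetic above, here is a quick sketch assuming the conventional IQ scale of mean 100 and standard deviation 15 (the scale values are assumptions of the example, though they are the standard convention):

```python
# Percentiles on a normal IQ scale (mean 100, SD 15); standard library only.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(f"IQ 140 percentile: {iq.cdf(140):.4f}")   # ~0.996, i.e. the top, less than 1%
print(f"IQ  70 percentile: {iq.cdf(70):.4f}")    # ~0.023, two standard deviations below the mean
# Note: there is no absolute zero on this scale, so 140 is not "twice" 70 in any meaningful sense.
```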
That's a different percentile. So it's similar to chess, where you have an
Elo rating that's designed to rank order people. So you can't say it's twice:
if your Elo rating is twice another person's, I don't think you're twice as good at
chess. Exactly. It's not stable in that way, because it's very difficult to do these kinds
of comparisons. So what can we say about the number itself? Is that stable across tests and so on?
There are a number of statistical properties of any test. They're called psychometric properties.
You have validity, you have reliability, there are many different kinds of reliability.
They all essentially measure stability and IQ tests are stable within an individual. There are
some longitudinal studies where children were measured at age 11. And again, when they were
70 years old and the two IQ scores are highly correlated with each other. This comes from
a fascinating study from Scotland. In the 1930s, some researchers decided to get an IQ test on
every single child age 11 in the whole country, and they did. And those records were discovered in
an old storeroom at the University of Edinburgh by a friend of mine, Ian Deary, who found the records,
digitized them, and has done a lot of research on the people who are still alive today from
that original study, including brain imaging research, by the way. Really, it's a fascinating
group of people who are studied. Not to get ahead of the story, but one of the most interesting
things they found is a very strong relationship between IQ measured at age 11 and mortality.
Seventy years later, they looked at the survival rates, and they could get death
records from everybody. Scotland has universal healthcare for everybody. And it turned out,
if you divide the people by their age 11 IQ score into quartiles and then look at how many people
are alive 70 years later, I know this is in the book, I have the graph in the book, but there are
essentially twice as many people alive in the highest IQ quartile as in the lowest IQ quartile,
true in men and women. So it makes a big difference. Now, why this is the case is not so clear,
since everyone had access to healthcare. Well, there's a lot, and we'll talk about it.
Just the sentences you used now could be explained by nature or nurture. We don't know. Now, there's
a lot of science that starts to then dig in and investigate that question. But let me linger
on the IQ test. How are IQ tests designed? How do they work? Maybe some examples for people
who are not aware. What makes a good IQ test question that sneaks up on this g-factor measure?
Well, your question is interesting because you want me to give examples of items that make good
items. And what makes a good item is not so much its content, but its empirical relationship to the
total score that turns out to be valid by other means. So for example, let me give you an odd
example from personality testing. Nice. So there's a personality test called the Minnesota
Multiphasic Personality Inventory, MMPI, been around for decades. I've heard about this test
recently because of the Johnny Depp and Amber Heard trial. I don't know if you've been paying
attention to that, but they had psychologists on the stand and they were talking, apparently
those psychologists did, again, I'm learning so much from this trial. They did different
battery of tests to diagnose personality disorders. Apparently, there's that systematic
way of doing so and the Minnesota one is one of the ones that there's the most science on.
There's a lot of great papers which were all continuously cited on the stand, which is fascinating
to watch. Sorry, a little bit of attention. That's okay. I mean, this is interesting because
you're right. It's been around for decades. There's a lot of scientific research on the
psychometric properties of the test, including what it predicts with respect to different
categories of personality disorder. But what I want to mention is the content of the items on
that test. All of the items are essentially true, false items. True or false, I prefer a shower to
a bath. True or false, I think Lincoln was a better president than Washington. What does
that have to do with anything? The point is, the content of these items, nobody knows why these items in
aggregate predict anything, but empirically they do. It's a technique of choosing items for a test
that is called dust bowl empiricism. The content doesn't matter, but for some reason,
when you get a criterion group of people with this disorder and you compare them to people
without that disorder, these are the items that distinguish. Irrespective of content,
it's a hard concept to grasp. Well, first of all, it's fascinating.
I consider myself part psychologist because I love human-robot interaction and that's a problem.
Half of that problem is a psychology problem because there's a human. So, designing these
tests to get at the questions is the fascinating part. How do you get to... What
does dust bowl empiricism refer to? Does it refer to the final result? Yeah, so the test
is dust bowl empiricism, but how do you arrive at the battery of questions? I presume one of the
things, now again, I'm going back to the excellent testimony in that trial, where they explained it,
because they also explained the tests, is that a bunch of the questions kind of make you forget that
you're taking a test. It makes it very difficult for you to somehow figure out what you're supposed
to answer. Yes, it's called social desirability, but we're getting a little far afield because I
only wanted to give that example of dust bowl empiricism. When we talk about the items on an
IQ test, many of those items in the dust bowl empiricism method have no face validity. In
other words, they don't look like they measure anything. Whereas most intelligence tests,
the items actually look like they're measuring some mental ability. So, here's one of the...
Oh, so you were bringing that up as an example of what it is not?
Yes. Got it. Okay. So, I don't want to go too far afield on it.
Too far afield is actually one of the names of this podcast. So, I should mention that.
Far afield. Yeah, so anyway, sorry. So, the questions look like they
pass the face validity test. And some more than others. So, for example, let me give you a couple
of things here. One of the subtests on a standard IQ test is general information.
Let me just think a little bit because I don't want to give you the actual item.
But if I said how far is it between Washington DC and Miami, Florida within 500 miles plus or
minus? Well, it's not a fact most people memorize, but you know something about geography. You say,
well, I flew there once. I know planes fly 500 miles. You can kind of make an estimate.
But it also seems like it would be very cultural. So, there's that kind of general information.
Then there's vocabulary test. What does regatta mean? And I choose that word because that word
was removed from the IQ test because people complained that disadvantaged people would
not know that word just from their everyday life. Okay, here's another example from a
different kind of subtest. What's regatta, by the way? A regatta is a sailing competition,
a competition with boats. Not necessarily sailing, but a competition with boats.
Yep, yep. I'm probably disadvantaged in that way. Okay, excellent. So, that was removed.
Anyway, what you were saying. Okay. So, here's another subtest. I'm going to repeat a string
of numbers. And when I'm done, I want you to repeat them back to me. Ready? Okay, seven, four, two,
eight, one, six. That's way too many. Seven, four, two, eight, one, six.
Okay, you get the idea. Now, the actual test starts with a smaller number,
you know, like two numbers. And then, as people get it right, you keep going,
adding to the string of numbers until they can't do it anymore. Okay, but now try this.
I'm going to say some numbers. And when I'm done, I want you to repeat them to me backwards.
I quit. Okay. Now, so I gave you some examples of the kind of items on an IQ test. Yes.
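To make the digit span procedure concrete, here is a toy console sketch of the forward and backward versions just described. It is purely illustrative and simplified on my part: a single miss ends the run, and there is no standardized administration or scoring.

```python
# Toy digit span: show a string of digits, have the person recall it (optionally
# in reverse), and lengthen the string until the first miss. Illustrative only.
import random
import time

def digit_span_trial(length, backward=False):
    digits = [str(random.randint(0, 9)) for _ in range(length)]
    print("Remember:", " ".join(digits))
    time.sleep(2)
    print("\n" * 40)                                   # crude way to scroll the digits away
    prompt = "Repeat them backwards: " if backward else "Repeat them: "
    answer = input(prompt).replace(" ", "")
    target = digits[::-1] if backward else digits
    return answer == "".join(target)

def digit_span(backward=False, start=2):
    length = start
    while digit_span_trial(length, backward):          # keep lengthening until a miss
        length += 1
    return length - 1                                  # longest string recalled correctly

if __name__ == "__main__":
    print("Forward span:", digit_span())
    print("Backward span:", digit_span(backward=True))
```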
General information, I can't even remember them all. General information, vocabulary,
digit span forward and digit span backward. Well, you said, I can't even remember them all.
That's a good question for me. What does memory have to do with your future?
Okay, well, let's hold on. Let's just talk about these examples. Now,
some of those items seem very cultural and others seem less cultural.
Which ones do you think scores on which subtests are most highly correlated with the G factor?
Well, the two that are less cultural.
Well, it turns out vocabulary is highly correlated and it turns out that digit span
backwards is highly correlated. How do you figure?
Now you have decades of research to answer the question, how do you figure?
Right. So now there's, like, good research that gives you intuition about what kinds of questions
can get at it. Just like, there's something I've done: I've actually used this for research
on semi-autonomous vehicles, studying whether humans are paying attention. There's a body of literature
that does, like, n-back tests. For example, you have to put workload on the brain to do recall,
memory recall. And that helps you kind of put some work onto the brain while the person is doing
some other task, and you can do some interesting research with that. But that's loading the memory. So there's,
like, research around what that stably means about the human mind. And here you're saying
recall backwards is a good predictor. Transformation.
Yeah. So you have to, like, load that into your brain and not just remember it,
but do something with it. Right. Now here's another example of a different kind of test
called the Hick paradigm. And it's not verbal at all. It's a little box and there are a series of
lights arranged in a semi circle at the top of the box. And then there's a home button that you press.
And when one of the lights goes on, there's a button next to each of those lights. You take
your finger off the home button and you just press the button next to the light that goes on.
And so it's a very simple reaction time. The light goes on, and as quick as you can, you press the button,
and you get a reaction time from the moment you lift your finger off the home button to when you press
the button where the light is. That reaction time doesn't really correlate with IQ very much.
But if you change the instructions and you say three lights are going to come on simultaneously,
I want you to press the button next to the light that's furthest from the other two. So maybe lights
one and two go on and light six goes on simultaneously. You take your finger off and you would press the
button by light six. That's that reaction time to a more complex task. It's not really hard.
Almost everybody gets it all right. But your reaction time to that is highly correlated
with the G factor. This is fascinating. So reaction time. So there's a temporal
aspect to this. So what role does time- Speed of processing. It's the speed of processing.
Is this also true for ones that take longer like five, 10, 30 seconds? Is time part of the measure
with some of this? Yes. And that is why some of the best IQ tests have a time limit. Because
if you have no time limit, people can do better. But it doesn't distinguish among people that well.
So adding the time element is important. So speed of information processing, and reaction
time is a measure of speed of information processing, turns out to be related to the G factor.
But the G factor only accounts for maybe half or some amount of the test performance. For example,
I get pretty bad test anxiety. I just don't enjoy tests. I enjoy going back into my cave
and working. Like I've always enjoyed homework way more than tests. No matter how hard the
homework is, because I can go back to the cave and hide away and think deeply. There's something
about being watched and having a time limit that really makes me anxious. And I could just see the
mind not operating optimally at all. But you're saying underneath there, there's still a G factor.
There's no question. And if you get anxious taking a test, many people say, oh, I didn't do well
because I'm anxious. I hear that a lot. I say, well, fine, if you're really anxious during the
test, the score will be a bad estimate of your G factor. It doesn't mean the G factor isn't there.
That's right. And by the way, standardized tests like the SAT,
they're essentially intelligence tests. They are highly G loaded. Now, the people who make
the SAT don't want to mention that. They have enough trouble justifying standardized testing.
But to call it an intelligence test is really beyond the pale. But in fact, it's so highly
correlated because it's a reasoning test. The SAT is a reasoning test, a verbal reasoning,
mathematical reasoning. And if it's a reasoning test, it has to be related to G. But if people
go in and take a standardized test, whether it's an IQ test or the SAT, and they happen to be sick
that day with 102 fever, the score is not going to be a good estimate of their G. If they retake
the test when they're not anxious or less anxious or don't have a fever, the score will go up and
that will be a better estimate. But you can't say their G factor increased between the two tests.
Well, it's interesting. So the question is, how wide of a battery of test is required to estimate
the G factor? Well, because I'll give you my personal example. I took the SAT, and I think it
was called the ACT, which I took also; I took the SAT many times. Every single time, I got a perfect score
on math. And verbal, the time limit on the verbal made me very anxious. I did not, I mean, part of
it, I didn't speak English very well. But honestly, it was like, you're supposed to remember stuff.
And like, I was so anxious. And like, as I'm reading, I'm sweating. I can't, you know, that like,
that feeling you have when you're reading a book, and you, you just read a page, and you know nothing
about what you've read, because you zoned out, that's the same feeling of like, I can't, I have to,
you're like, no, read and understand. And that anxiety is like, you start seeing like the typography
versus the content of the words. Like that was, I don't, it's interesting, because I know that
what they're measuring, I could see being correlated with something. But that anxiety or
some aspect of the performance, sure plays a factor. And I wonder how you sneak up on it in a
stable way. I mean, this is a broader discussion, but that's like standardized testing, how you
sneak up, how you get at the fact that I'm super anxious and still nevertheless measure some aspect
of my intelligence. I wonder, I don't know if you can speak to that. That time
limit sure is a pain. Well, let me say this, there are two ways to approach the very real problem
that you say, that some people just get anxious or are not good test takers. By the way, part of
testing is: you know the answer, you can figure out the answer, or you can't.
Right. If you don't know the answer, there are many reasons you don't know the answer at that
particular moment. You may have learned it once and forgotten it. You may, it may be on the tip
of your tongue and you just can't get it because you're anxious about the time limit. You may never
have learned it. You may have been exposed to it, but it was too complicated
and you couldn't learn it. I mean, there are all kinds of reasons here. But for an individual
to interpret your scores as an individual, whoever is interpreting the score has to take
into account various things that would affect your individual score. And that's why decisions
about college admission or anything else where tests are used are hardly ever the only criterion
to make a decision. And I think college admissions are letting go of that very much.
Oh yes, there, yeah. But what does that even mean? Because is it possible to design standardized
tests that are useful to college admissions? Well, they already exist. The SAT
is highly correlated with many aspects of success at college. Here's the problem. So maybe you could
speak to this, the correlation across a population versus individuals. So, you know, our criminal
justice system is designed to make sure, well, it's still, there's tragic cases where innocent
people go to jail, but you try to avoid that. In the same way with testing, it just, it would
suck for the SAT to miss a genius. Yes, and it's possible, but it's statistically unlikely.
So it really comes down to which piece of information maximizes your decision-making ability.
So, if you just use high school grades, it's okay. But you will miss some people who just don't do
well in high school, but who are actually pretty smart, smart enough to be bored silly in high
school, and they don't care, and their high school GPA isn't that good. So you will miss them.
Or it could be that somebody who could be very able and ready for college just doesn't do well on the
SAT. This is why you make decisions by taking in a variety of information. The other thing I
wanted to say, I talked about when you make a decision for an individual, statistically for
groups, there are many people who have a disparity between their math score and their verbal score.
That disparity, or the other way around, that disparity is called tilt. The score is tilted
one way or the other, and that tilt has been studied empirically to see what that predicts.
And in fact, you can't make predictions about college success based on tilt. And mathematics is
a good example. There are many people, especially non-native speakers of English who come to this
country, take the SATs, do very well on the math and not so well on the verbal. Well,
if they're applying to a math program, the professors there who are making the decision
or the admissions officers don't weight the verbal score so much, especially if it's a non-native
speaker. Well, so yeah, you have to try to, in the admission process, bring in the context.
But non-native isn't really the problem. I mean, that was part of the problem for me.
But it was the anxiety, which is interesting. It's interesting. Oh boy,
reducing yourself down to numbers, but it's still true. It's still the truth. Well, it's a
painful truth. That same anxiety that led me to be, to struggle with the SAT verbal tests,
it's still within me in all ways of life. So maybe that's not anxiety. Maybe that's something,
you know, like personality is also pretty stable.
Personality is stable. Personality does impact the way you navigate life. There's no question.
Yeah. And we should say that the G factor in intelligence is not just about some kind of
number on a paper. It also has to do with how you navigate life, how easy life is for you
in this very complicated world. So personality is all tied into that in some deep fundamental way.
But now you've hit the key point about why we even want to study intelligence and personality,
I think to a lesser extent, but my interest is more on intelligence. I went to
graduate school and wanted to study personality, but that's kind of another story how I got kind
of shifted from personality research over to intelligence research. Because it's not just
a number. Intelligence is not just an IQ score. It's not just an SAT score. It's what those numbers
reflect about your ability to navigate everyday life. It has been said that life is one long
intelligence test. And who can't relate to that? And if you doubt that, see, another problem here is a
lot of critics of intelligence research and intelligence testing tend to be academics who by
and large are pretty smart people. And pretty smart people by and large have enormous difficulty
understanding what the world is like for people with IQs of 80 or 75. It is a completely different
everyday experience. Even IQ scores of 85, 90, there's a popular television program,
Judge Judy. Judge Judy deals with everyday people with everyday problems, and you can see the full
range of problem-solving ability demonstrated there. And sometimes she does it for laughs,
but it really isn't funny because there are people who are very limited in their life
navigation, let alone success by not having good reasoning skills, which cannot be taught.
We know this, by the way, because there are many efforts. You know, the United States military,
which excels at training people. I mean, I don't know that there's a better organization in the
world for training diverse people. And they won't take people with IQs under, I think, 83 is the
cutoff because they are unable to train people with lower IQs to do jobs in the military.
So one of the things that G-Factor has to do with is learning.
Absolutely. Some people learn faster than others. Some people learn more than others. Now, faster,
by the way, is not necessarily better as long as you get to the same place eventually.
But, you know, there are professional schools that want students who can learn the fastest
because they can learn more or learn deeper or all kinds of ideas about why you select people
with the highest scores. And there's nothing funnier, by the way, than listening to a bunch of
academics complain about the concept of intelligence and intelligence testing.
And then you go to a faculty meeting where they're discussing who to hire among the applicants.
And all they talk about is how smart the person is.
We'll get to that. We'll sneak up to that in different ways. But there's something
about reducing a person to a number that in part is grounded to the person's genetics
that makes people very uncomfortable.
But nobody does that. Nobody in the field actually does that. That is a worry,
a worry like, well, I don't want to call it a conspiracy theory. I mean, it's a legitimate worry.
But it just doesn't happen. Now, I had a professor in graduate school who was the
only person I ever knew who considered the students only by their test scores.
Yes. And later in his life, he kind of backed off that. But let me ask you this. So we'll jump
around. I'll come back to a book. I tend to... I've had, like, political discussions with people.
And actually, my friend, Michael Malice, he's an anarchist. I disagree with him on basically
everything except the fact that love is a beautiful thing in this world.
And he has this test about left versus right, whatever, it doesn't matter what the test is.
But he believes the question is, do you believe that some people are better than others?
The question is ambiguous. Do you believe some people are better than others?
And to me, sort of the immediate answer is no. It's a poetic question. It's an ambiguous question,
right? Like, you know, people want to, maybe there's the temptation to ask, better at what,
like sports and so on. No, to me, I stand with sort of the founding documents of this country,
which is all men are created equal. There's a basic humanity. And there's something about
tests of intelligence, just knowing that some people are different, like the science of
intelligence that shows that some people are genetically in some stable way across a lifetime,
have a greater intelligence than others, makes people feel like some people are better than
others, and that makes them very uncomfortable. And I, maybe you can speak to that. The fact
that some people are more intelligent than others in a way that cannot be compensated for
through education, through anything you do in life, what do we do with that?
Okay, there's a lot there. We haven't really talked about the genetics of it yet, but you are
correct in that it is my interpretation of the data that genetics has a very important influence
on the G factor. And this is controversial, we can talk about it. But if you think that genetics,
that genes are deterministic, are always deterministic, that leads to kind of the worry
that you expressed. But we know now in the 21st century that many genes are not deterministic,
they're probabilistic, meaning their gene expression can be influenced. Now, whether
they're influenced only by other biological variables or other genetic variables or environmental
or cultural variables, that's where the controversy comes in. And we can discuss that
in more detail if you like. But to go to the question about better, about people being better, there's
zero evidence that smart people are better with respect to important aspects of life,
like honesty, even likability. I'm sure you know many very intelligent people who are not
terribly likable or terribly kind or terribly honest. Is there something to be said? So,
one of the things I've recently reread for the second time, I guess that's what the word reread
means, The Rise and Fall of the Third Reich, which is, I think, the best telling of the rise and
fall of Hitler. And one of the interesting things about the people that, how should I say it,
justified or maybe propped up the ideas that Hitler put forward is the fact that they were
extremely intelligent. They were the intellectual class. It was obvious that they thought very
deeply and rationally about the world. So, what I would like to say is, one of the things that
shows to me is that some of the worst atrocities in the history of humanity have been committed by
very intelligent people. So, that means that intelligence doesn't make you a good person.
I wonder if, you know, there's a g-factor for intelligence. I wonder if there's a g-factor
for goodness. You know, the Nietzschean good and evil, of course, that's probably harder to measure
because that's such a subjective thing, what it means to be good. And even the idea of evil is
a deeply uncomfortable thing because how do we know?
But it's independent. Whatever it is, it's independent of intelligence. So, I agree with you
about that. But let me say this. I have also asserted my belief that more intelligence is
better than less. It doesn't mean more intelligent people are better people,
but all things being equal, would you like to be smarter or less smart? So, if I had a pill,
I have two pills. I said, this one will make you smarter, this one will make you dumber.
Which one would you like? Are there any circumstances under which you would choose to be dumber?
Well, let me ask you this. That's a very nuanced and interesting question.
You know, there's been books written about this, right? Now, we'll return to the hard questions,
the interesting questions, but let me ask about human happiness. Does intelligence
lead to happiness? No.
So, okay, so back to the pill then. So, why, when would you take the pill? So, you said IQ 80,
90, 100, 110. You start going up the quartiles, and is it obvious?
Aren't there diminishing returns, and then it starts becoming negative?
This is an empirical question. And so, I have advocated in many forums for more research
on enhancing the G factor. Right now, there have been many claims about enhancing
intelligence with, you mentioned the n-back training. It was a big deal a few years ago.
It doesn't work. The data is very clear. It does not work.
Or doing like memory tests, like training and so on.
Yeah, it may give you a better memory in the short run, but it doesn't impact your G-factor.
It was very popular a couple of decades ago, the idea that listening to Mozart could make you
more intelligent. There was a paper published on this; somebody I knew published this paper.
Intelligence researchers never believed it for a second.
There have been hundreds of studies, all the meta-analyses, all the summaries and so on.
So, there's nothing to it. Nothing to it at all. But wouldn't it be something?
Wouldn't it be world-shaking if you could take the normal distribution of intelligence,
which we haven't really talked about yet, but IQ scores and the G-factor is thought to be a
normal distribution, and shift it to the right so that everybody is smarter.
Even half a standard deviation would be world-shaking. Because there are many social problems,
many, many social problems, that are exacerbated by people with lower ability to reason stuff out
and navigate everyday life. I wonder if there's a threshold. So, maybe I would push back and say
universal shifting of the normal distribution may not be the optimal way of shifting.
Maybe it's better to do whatever the asymmetric kind of shift is, like really pushing the lower end
up versus trying to make the people at the average more intelligent.
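As a back-of-the-envelope illustration of what is at stake in that choice (my own framing, using the military cutoff of 83 mentioned earlier and the usual mean-100, SD-15 scale), here is how the share of people below that cutoff changes under a half-standard-deviation shift of the whole distribution:

```python
# Share of people below an IQ cutoff of 83, before and after shifting the whole
# distribution up by half a standard deviation (7.5 points). Standard library only.
from statistics import NormalDist

cutoff = 83
current = NormalDist(mu=100, sigma=15)
shifted = NormalDist(mu=107.5, sigma=15)               # everyone up half an SD

print(f"below {cutoff} now:         {current.cdf(cutoff):.1%}")   # ~12.9%
print(f"below {cutoff} after shift: {shifted.cdf(cutoff):.1%}")   # ~5.1%

# Lifting only the lower tail, as Lex suggests, would shrink this fraction even more
# while barely moving the average; which approach is "better" is the empirical question.
```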
So, you're saying that if in fact there was some way to increase G, let's just call it
metaphorically a pill, an IQ pill, we should only give it to people at the lower end.
No, it's just intuitively I can see that life becomes easier at the lower end if it's increased.
It becomes less and less. It is an empirical scientific question, but it becomes less and
less obvious to me that more intelligence is better.
At the high end, not because it would make life easier, but it would make whatever problems
you're working on more solvable. And if you are working on artificial intelligence,
there's a tremendous potential for that to improve society.
I understand. So, at whatever problems you're working on, yes, but there's also the problem
of the human condition. There's love, there's fear, and all of those beautiful things that
sometimes if you're good at solving problems, you're going to create more problems for yourself.
I'm not exactly sure. So, ignorance is bliss is a thing. So, there might be a place,
there might be a sweet spot of intelligence given your environment, given your personality,
all of those kinds of things, and that becomes less beautifully complicated,
the more and more intelligent you become. But that's a question for literature,
not for science, perhaps. Well, imagine this. Imagine there was an IQ pill,
and it was developed by a private company, and they are willing to sell it to you.
And whatever price they put on it, you are willing to pay it because you would like to
be smarter. Yes. But just before they give you a pill, they give you a disclaimer form to sign.
Yes. Don't hold us responsible. You understand that this pill has no guarantee that your life is going
to be better, and in fact, it could be worse. Well, yes, that's how lawyers work. But I would
love for science to answer the question, to try to predict if your life is going to be better or
worse, when you become more or less intelligent. It's a fascinating question about what is the
sweet spot for the human condition. Some of the things we see as bugs might actually be features,
maybe crucial to our overall happiness; our limitations might lead to more happiness than
less. But again, more intelligence is better at the lower end. That's something that's less
arguable and fascinating, if possible, to increase. But you know, there's virtually no
research that's based on a neuroscience approach to solving that problem. All the solutions that
have been proposed to solve that problem or to ameliorate that problem are essentially based on
the blank slate assumption that enriching the environment, removing barriers, all good things,
by the way, I'm not against any of those things. But there's no empirical evidence that they're
going to improve the general reasoning ability or make people more employable.
Have you read Flowers for Algernon? Yes.
That's to the question of intelligence and happiness.
There are many profound aspects of that story. It was a film that was very good.
The film was called Charly, for the younger people who are listening to this.
You might be able to stream it on Netflix or something. But it was a story about a person
with very low IQ who underwent a surgical procedure in the brain, and he slowly became a genius.
And the tragedy of the story is the effect was temporary. It's a fascinating story, really.
That goes in contrast to the basic human experience that each of us individually have,
but it raises the question of the full range of people who might be able to be
given different levels of intelligence. You've mentioned the normal distribution.
So let's talk about it. There's a book called The Bell Curve, published in 1994,
written by psychologist Richard Herrnstein and political scientist Charles Murray.
Why was this book so controversial? This is a fascinating book. I know Charles Murray.
I've had many conversations with him. Yeah, what is the book about?
The book is about the importance of intelligence in everyday life.
That's what the book is about. It's an empirical book. It has statistical analyses of very large
databases that show that, essentially, IQ scores or their equivalent are correlated
to all kinds of social problems and social benefits. And that in itself is not where
the controversy about that book came. The controversy was about one chapter in that book,
and that is a chapter about the average difference in mean scores between black Americans and white
Americans. And these are the terms that were used in the book at the time and are still used to some
extent. And historically, or really for decades, it has been observed that disadvantaged groups
score on average lower than Caucasians on academic tests, tests of mental ability,
and especially on IQ tests. And the difference is about a standard deviation, which is about
15 points, which is a substantial difference. In the book, Herrnstein and Murray, in this one
chapter assert clearly and unambiguously that whether this average difference is due to genetics
or not, they are agnostic. They don't know. Moreover, they assert they don't care because you
wouldn't treat anybody differently knowing that if there was a genetic component or not, because
that's a group average finding. Every individual has to be treated as an individual. You can't make
any assumption about what that person's intellectual ability might be from the fact of an average
group difference. They're very clear about this. Nonetheless, people took away, I'm going to choose
my words carefully because I have a feeling that many critics didn't actually read the book,
they took away that Herrnstein and Murray were saying that blacks are genetically inferior.
That was the take home message. And if they weren't saying it, they were implying it because they had
a chapter that discussed this empirical observation of a difference and isn't this horrible. And so
the reaction to that book was incendiary. What do we know, from that book and the research
beyond it, about race differences and intelligence? It's still the most incendiary topic in psychology.
Nothing has changed that. Anybody who even discusses it is easily called a racist just for
discussing it. It's become fashionable to find racism in any discussion like this. It's unfortunate.
The short answer to your question is there's been very little actual research on this topic
since the bell curve, or even before. This really became incendiary in 1969 with an article published
by an educational psychologist named Arthur Jensen. Let's just take a minute and go back to that
to see the bell curve in a little bit more historical perspective. Arthur Jensen was
an educational psychologist at UC Berkeley. I knew him as well. And in 1969 or 1968,
the Harvard Educational Review asked him to do a review article on the early childhood education
programs that were designed to raise the IQs of minority students. This was before the federally
funded Head Start program. Head Start had not really gotten underway at the time Jensen undertook
his review of what were a number of demonstration programs. And these demonstration programs were
for young children who were around kindergarten age, and they were specially designed to be
cognitively stimulating, to provide lunches, do all the things that people thought would
minimize this average gap on intelligence tests. There was a strong belief among virtually all
psychologists that the cause of the gap was unequal opportunity due to racism, all negative
things in the society. And if you could compensate for this, the gap would go away.
So early childhood education back then was called literally compensatory education.
Jensen looked at these programs. He was an empirical guy. He understood psychometrics
and he wrote what was over a hundred-page article detailing these programs and the flaws in their
research design. Some of the programs reported IQ gains of on average five points, but a few
reported 10, 20, and even 30 point gains. One was called the Miracle in Milwaukee.
That investigator ultimately went to jail for fabricating data. But the point is that
Jensen wrote an article that said, look, the opening sentence of his article is classic.
The opening sentence is, I may not quote it exactly right, but it's,
we have tried compensatory education and it has failed. And he showed that these gains
were essentially nothing. You couldn't really document empirically any gains at all from these
really earnest efforts to increase IQ, but he went a step further, a fateful step further.
He said, not only have these efforts failed, but because they have had essentially no impact,
we have to reexamine our assumption that these differences are caused by environmental things
that we can address with education. We need to consider a genetic influence, whether there's a
genetic influence on this group difference. So you said that this is one of the more
controversial works ever in science? I think it's the most infamous paper in all of psychology,
I would go on to say. Because in 1969, the genetic data was very skimpy on this question,
skimpy and controversial. It's always been controversial, but back then it was even more skimpy and
controversial. It's kind of a long story that I go into in a little bit more detail in the book
Neuroscience of Intelligence. But to say he was vilified is an understatement. I mean,
he couldn't talk at the American Psychological Association without bomb threats clearing the
lecture hall. Campus security watched him all the time. They opened his mail.
He had to retreat to a different address. This was one of the earliest kinds. This was before
the internet and kind of internet social media mobs. But it was that intense. I have written
that overnight after the publication of this article, all intelligence research became radioactive.
Nobody wanted to talk about it. Nobody was doing more research.
And then the bell curve came along and the Jensen controversy was dying down. I have stories that
Jensen told me about his interaction with the Nixon White House on this issue. This was like a
really big deal. There were some unbelievable stories, but he told me this himself, so I kind of believe these
stories. Nonetheless, 25 years later, after all the silence, basically with nobody wanting to
do this kind of research, with so much pressure, so much attack against this kind of research,
here are sort of the bold, stupid, crazy people who decided to dive right back in.
I wonder how much discussion there was. Do we include this chapter or not?
Murray has said they discussed it and they felt they should include it and they were
very careful in the way they wrote it, which did them no good. So as a matter of fact,
when the bell curve came out, it was so controversial. I got a call from a television
show called Nightline, which was with a broadcaster called Ted Koppel, who had this evening show,
I think it was on late at night, that talked about news. It was a straight-up news thing.
And a producer called and asked if I would be on it to talk about the bell curve. And,
she asked me what I thought about the bell curve as a book. I said, look, it's a very good book.
It talks about the role of intelligence in society. And she said, no, no, what do you think about
the chapter on race? That's what we want you to talk about. I remember this conversation.
I said, well, she said, what would you say if you were on TV? And I said, well,
what I would say is that it's not at all clear if there's any genetic component to intelligence,
any differences. But if there were a strong genetic component, that would be a good thing.
And complete silence on the other end of the phone. And she said, well, what do you mean?
And I said, well, the more genetic any differences are, the more it's biological. And if
it's biological, we can figure out how to fix it. I see. That's interesting. She said,
would you say that on television? Yes, I said. And so that was the end of that.
So that's more like, biology is within the reach of science, and the environment is public
policy, social, and all those kinds of things. From your perspective, whichever one you think is
more amenable to solutions in the short term is the one that excites you. But you're saying that
it's good. The truth of genetic differences between groups, no matter what, is a painful,
potentially harmful, potentially dangerous thing. So let me ask you this question:
whether it's the Bell Curve or any research on race differences, can that be used to increase the
amount of racism in the world? Can that be used to increase the amount of hate in the world?
Do you think about this kind of stuff? I've thought about this a lot, not as a scientist,
but as a person. And my sense is there are such enormous reservoirs of hate and racism
that have nothing to do with scientific knowledge or the data that speak against that,
that, no, I don't want to give racist groups veto power over what scientists study.
If you think that the differences, and by the way, virtually no one disagrees that there are
differences in scores, it's all about what causes them and how to fix it. So if you think this is
a cultural problem, then you must ask the question, do you want to change anything
about the culture? Or are you okay with the culture because you don't feel it's appropriate to change
a person's culture? So are you okay with that and the fact that that may lead to disadvantages in
school achievement? It's a question. If you think it's environmental, what are the environmental
parameters that can be fixed? I'll tell you one, lead from gasoline in the atmosphere,
lead in paint, lead in water. That's an environmental toxin that society has the
means to eliminate, and they should. Yeah, just sort of trying to find some
insight and conclusion to this very difficult topic. Has there been research on environment
versus genetics, nature versus nurture, on this question of race differences?
There is not, no one wants to do this research. First of all, it's hard research to do. Second
of all, it's a minefield. No one wants to spend their career on it. Tenured people don't want to
do it, let alone students. The way I talk about it, well, before I tell you the way I talk about it,
I want to say one more thing about Jensen. He was once asked by a journalist straight out,
are you a racist? His answer was very interesting. His answer was, I've thought about that a lot,
and I've concluded it doesn't matter. Now, I know what he meant by this.
The guts to say that, wow.
He was a very unusual person. I think he had a touch of Asperger's syndrome to tell you the
truth, because I saw him in many circumstances. He would be canceled on Twitter immediately
with that sentence. Yeah, but what he meant was, he had a hypothesis.
And with respect to group differences, he called it the default hypothesis. He said,
whatever factors affect individual intelligence are likely the same factors that affect group
differences. It was the default, but it was a hypothesis. It should be tested. And if it turned
out, empirical test didn't support the hypothesis. He was happy to move on to something else. He was
absolutely committed to that scientific ideal, that it's an empirical question.
We should look at it and let's see what happens.
The scientific method cannot be racist, from his perspective. If the scientists
follow the scientific method, it doesn't matter what they believe.
And if they are biased and they consciously or unconsciously bias the data, other people will
come along to replicate it. They will fail and the process over time will work.
So let me push back on this idea because psychology to me is full of gray areas.
And what I've observed about psychology, even replication crisis aside, is that
something about the media, something about journalism, something about the virality of
ideas in the public sphere, leads them to take things from studies and, willfully or out of
ignorance, to misinterpret the findings and tell narratives around them. I personally believe, for me,
I'm not saying that broadly about science, but for me, it's my responsibility to anticipate
the ways in which findings will be misinterpreted. So I've had, I thought about this a lot because I
published papers on semi-autonomous vehicles, and those are, you know, cars, cars people can die in.
There are people who have written me letters, well, emails, nobody writes letters, I wish they did,
saying that I have blood on my hands because of things I would say, positive or negative. There are consequences
in the same way when you're a researcher of intelligence, I'm sure you might get emails
or at least people might believe that the findings of your study are going to be used by a large number
of people to increase the amount of hate in the world. I think there's some responsibility on
scientists, but for me, I think there's a great responsibility to anticipate the ways things will
be misinterpreted. And there you have to first of all decide whether you want to say a thing at all,
do the study at all, publish the study at all, and then choose the words with which you explain it.
It's, I find this on Twitter a lot actually, which is when I write a tweet, I'm usually just doing
so innocently. I'll write it, you know, it takes me like five seconds to write it or whatever,
30 seconds to write it. And then I'll think, all right, and I'll, like, close my eyes and try to see
how will the world interpret this? Like what are the ways in which this will be misinterpreted?
And I'll sometimes adjust that tweet to see, like, yeah, so in my mind it's clear, but that's
because it's my mind from which this tweet came. But you have to think in a fresh mind that sees
this, and it's spread across a large number of other minds, how will the interpretation morph?
I mean, for a tweet that's a silly thing, it doesn't matter. But for a scientific paper and
study and finding, I think it matters. So I don't know, well, I don't know what your thoughts about
on that. Because maybe for Jensen, the data is there. What do you want me to do? The scientific
process has been carried out. If you think the data was polluted by bias, do other studies that
reveal the bias. But the data is there. And, like, I'm not a poet. I'm not a literary
writer. What do you want me to do? I'm just presenting you the data. What do you think on that spectrum?
What's the role of a scientist? The reason I do podcasts, the reason I write books for the public
is to explain what I think the data mean, and what I think the data don't mean. I don't do very
much on Twitter other than to retweet references to papers. I don't think it's my role to explain
these because they're complicated. They're nuanced. But when you decide not to do a scientific study
or not to publish a result because you're afraid the result could be harmful or insensitive,
that's not an unreasonable thought. And people will make different conclusions and decisions
about that. I wrote about this. I wrote, I'm the editor of a journal called Intelligence,
which publishes scientific papers. Sometimes we publish papers on group differences.
Those papers sometimes are controversial. These papers are written for a scientific audience.
They're not written for the Twitter audience. I don't promote them very much on Twitter.
But in a scientific paper, you have to now choose your words carefully
also because those papers are picked up by non-scientists, by writers of various kinds,
and you have to be available to discuss what you're saying and what you're not saying.
Sometimes you are successful at having a good conversation like we are today that doesn't
start out pejorative. Other times I've been asked to participate in debates where my role
would be to justify race science. Well, you can see how that would start out. That was a BBC request
that I received. I have so much, it's a love-hate relationship,
mostly hate with these shallow journalism organizations. So they would want to use you
in a kind of debate setting to communicate that there are race differences between groups,
and make that into a debate, and put you in the role of... Justifying racism.
That's what they're asking me to do. As opposed to, like, educating about this field of
the science of intelligence. I want to say one more thing before we get off the normal
distribution. You also asked me, what is the science after the bell curve? The short answer is
there's not much new work, but whatever work there is supports the idea that there still
are group differences. It's arguable whether those differences have diminished at all or not.
And there is still a major problem in underperformance for school achievement
for many disadvantaged and minority students. And there so far is no way to fix it.
What do we do with this information? Is this now the task? We'll talk about the future
on the neuroscience and the biology side, but in terms of this information as a society,
in the public policy, in the political space, in the social space, what do we do with this
information? I've thought a lot about this. The first step is to have people interested in policy
understand what the data actually show to pay attention to intelligence data. You can read
policy papers about education and using your word processor, you can search for the word
intelligence. You can search a 20,000 word document in a second and find out the word
intelligence does not appear anywhere. In most discussions about what to do about achievement
gaps, I'm not talking about test gaps, I'm talking about actual achievement gaps in schools, which
everyone agrees is a problem. The word intelligence doesn't appear among educators.
That's fascinating. As a matter of fact, in California,
there has been tremendous controversy about recent attempts to revise the curriculum for math
in high schools. We had a Stanford professor of education who was running this review assert
there's no such thing as talent, a mathematical talent. She wanted to get rid of the advanced
classes in math because not everyone could do that. Of course, this has been very controversial.
They've retreated somewhat, but the idea that a university professor was in charge of this who
believes that there's no talent, that it doesn't exist, this is rather shocking, let alone the
complete absence of intelligence data. By the way, let me tell you something about what the
intelligence data show. Let's take race out of it. Even though the origins of these studies
were a long time ago, I'm blocking on the name of the report, the Coleman Report was a famous
report about education. They measured all kinds of variables about schools, about teachers, and
they looked at academic achievement as an outcome. They found the most predictive variables of
education outcome were the variables the student brought with him or her into the school,
essentially their ability. When you combine the school and the teacher variables together,
the quality of the school, the funding of the school, the quality of the teachers, their
education, you put all the teacher and school variables together, it barely accounted for
10% of the variance. This has been replicated now. The best research we have shows that school
variables and teacher variables together account for about 10% of student academic achievement.
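To make concrete what a claim like that means, here is a toy sketch; the data are simulated and the weights are invented purely so the school share comes out near 10%, so this is an illustration of the statistic, not the Coleman Report's model or data:

```python
# Toy illustration of "school and teacher variables account for ~10% of the variance".
# Everything here is simulated; the weights are chosen to produce roughly that share.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

ability = rng.normal(0, 1, n)   # what the student brings into the school
school = rng.normal(0, 1, n)    # combined school/teacher quality index
noise = rng.normal(0, 1, n)

# Weights picked so the school index contributes roughly 10% of the variance.
achievement = 0.80 * ability + 0.33 * school + 0.50 * noise

def r_squared(x, y):
    """Share of variance in y explained by a least-squares fit on x."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return 1 - residuals.var() / y.var()

print(f"school-only R^2:  {r_squared(school, achievement):.2f}")   # ~0.10
print(f"ability-only R^2: {r_squared(ability, achievement):.2f}")  # much larger
```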
Now, you want to have some policy on improving academic achievement. How much money do you want
to put into teacher education? How much money do you want to put into the quality of the school
administration? You can ask the Gates Foundation because they spent a tremendous amount of money
doing that. At the end of it, because they're measurement people, they want to know the data,
they found it had no impact at all. They've pulled out of that program. Let me ask you,
and this is me talking, it's just the two of us, but I'm going to say some funny and
ridiculous things. You're surely not approving of it, but there's a movie called Clerks.
I've seen it. There's a funny scene in there where a lovely couple are talking about the
number of previous sexual partners they had. The woman says that, I believe, she had just
a handful, like two or three or something like that, sexual partners, but then she also mentioned
that she, what's that called, fellatio? What's the scientific term? She gave a blow job
to 37 guys, I believe it is. That has to do with the truth. Sometimes knowing the truth
can get in the way of a successful relationship, of love, of some of the human flourishing.
It seems to me that's at the core here, that facing some kind of truth that's not
able to be changed is limiting as opposed to
empowering. That's the concern. If you sort of test for intelligence and lay the data out,
it feels like you will give up on certain people. You will sort of start binning people as like,
well, this person is like, let's focus on the average people, or let's focus on the very
intelligent people. That's the concern. There's a kind of intuition that if we just don't measure
and we don't use that data, then we will treat everybody equally and give everybody equal opportunity.
If we have the data in front of us, we're likely to misdistribute the amount of
attention we allocate, the resources we allocate, to people. That's probably the concern.
It's a realistic concern, but I think it's a misplaced concern if you want to fix the problem.
If you want to fix the problem, you have to know what the problem is.
Now, let me tell you this. Let's go back to the bell curve, not the bell curve,
but the normal distribution. 16% of the population, on average, has an IQ under 85.
If you have an IQ under 85, it's very hard to find gainful employment
at a salary that sustains you at least minimally in modern life. Okay? Not impossible, but it's
very difficult. 16% of the population of the United States is about 51 or 52 million people
with IQs under 85. This is not a small issue. 14 million children have IQs under 85.
Is this something we want to ignore? Does this have any, what is the Venn diagram between,
you know, when you have people with IQs under 85 and you have achievement in school
or achievement in life? There's a lot of overlap there. This is why, to go back to the IQ pill,
if there were a way to shift that curve toward the higher end, that would have a big impact.
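For reference, a back-of-the-envelope check of those figures, assuming IQ scores follow a normal distribution with mean 100 and standard deviation 15 (the usual scaling), and using a rough round number for the US population rather than an exact census value:

```python
# Where the "16% under IQ 85" and "about 51 or 52 million people" figures come from,
# assuming IQ ~ Normal(mean=100, sd=15); an IQ of 85 is one standard deviation below the mean.
from math import erf, sqrt

def normal_cdf(x, mean=100.0, sd=15.0):
    """P(score <= x) for a normal distribution."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

share_below_85 = normal_cdf(85)
print(f"share below 85: {share_below_85:.3f}")        # ~0.159, i.e. about 16%

us_population = 330_000_000                            # rough figure for illustration
millions = share_below_85 * us_population / 1e6
print(f"roughly {millions:.0f} million people")        # ~52 million
```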
If I could, maybe before we talk about the impact on life and so on, let me raise
some of the criticisms of the bell curve. So Stephen Jay Gould wrote that the bell curve
rests on four incorrect assumptions. It would be just interesting to get your thoughts on the four
assumptions, which are intelligence must be reducible to a single number, intelligence must
be capable of rank ordering people in a linear order, intelligence must be primarily genetically
based and intelligence must be essentially immutable. Maybe not as criticisms, but as
thoughts about intelligence. We could spend a lot of time on him.
Stephen Jay Gould, yeah. He wrote that in what about 1985, 1984. His views were overtly political,
not scientific. He was a scientist, but his views on this were overtly political,
and I would encourage people listening to this if they really want to understand his criticisms.
They should just Google what he had to say and Google the scientific reviews of his book,
The Mismeasure of Man, and they will take these statements apart. They were wrong.
Not only were they wrong, but when he asserted in his first book that there was essentially no biological
basis to IQ, by the time the second edition came around, there were MRI studies
showing that brain size, brain volume, was correlated with IQ scores, which he declined to
put in his book. So I'm learning a lot today. I didn't know the extent of his work. I was just
using a few little snippets of criticism. That's interesting. So there's a battle here. He wrote
a book, Mismeasure of Man, that's missing a lot of the scientific grounding. His book is highly
popular in colleges today. You can find it in any college bookstore under assigned reading.
It's highly popular. The Mismeasure of Man? Yes, highly influential.
Can you speak to the Mismeasure of Man? I'm undereducated about this. So is the
book basically criticizing the ideas in the Bell Curve? Yeah, yeah, that's where those four things came
from. And it is really a book that was really taken apart point by point by a number of people
who actually understood the data. And he didn't care. He didn't care. He didn't modify anything.
It's political. Listen, because this is such a sensitive topic, like I said, I believe
the impact of the work as it is misinterpreted has to be considered, because it's not just
going to be scientific discourse. It's going to be political discourse. There are going to be
debates. There are going to be politically motivated people who will use messages in
each direction, make something like the Bell Curve the enemy, or the support
for one's racist beliefs. And so I think you have to consider that, but it's difficult because
you know, Nietzsche was used by Hitler to justify a lot of his beliefs. And it's not
exactly on Nietzsche to anticipate Hitler or how his ideas will be misinterpreted
and used for evil. But there's a balance there. So I understand. This is really interesting. I
didn't know. Is there any criticism of the book you find compelling or interesting or
challenging to you from a scientific perspective? There were factual criticisms about the nature
of the statistics that were used, the statistical analyses, these are more technical criticisms.
And they were addressed by Murray in a couple of articles where he took all the criticisms
and spoke to them and people listening to this podcast can certainly find all those online.
It's very interesting. But Murray went on to write some additional books, two in the last
couple of years, one about human diversity, where he goes through the data, refuting the idea that
race is only a social construct with no biological meaning. He discusses the data. It's a very good
discussion. You don't have to agree with it. But he presents data in a cogent way. And he talks
about the critics of that. And he talks about their data in a cogent, not personal way. It's a
very informative discussion. The book is called Human Diversity. He talks about race, and he
talks about gender, same thing about sex differences. And more recently, he's written what might be
his final say on this, a book called Facing Reality, where he talks about this again.
So, you know, he can certainly defend himself. He doesn't need me to do that. But I would urge
people who have heard about him and the bell curve, and who think they know what's in it,
you are likely incorrect, and you need to read it for yourself.
But scientifically, it's a serious subject. It's a difficult subject. Ethically, it's a
difficult subject. Everything you said here, calmly and thoughtfully, is difficult. It's
difficult for me to even consider that the g-factor exists. I don't mean, like, that somehow
the g-factor is inherently racist or sexist or whatever. It's just, it's difficult in the way that,
considering the fact that we die one day, it's difficult. That we are limited by our biology
is difficult. And it's, at least from an American perspective, you like to believe
that everything is possible in this world. Well, that leads us to what I think we should
do with this information. And what I think we should do with this information is unusual.
Because I think what we need to do is fund more neuroscience research on the molecular biology
of learning and memory. Because one definition of intelligence is based on how much you can learn
and how much you can remember. And if you accept that definition of intelligence,
then there are molecular studies going on now, and Nobel Prizes are being won, on the molecular biology,
or molecular neurobiology, of learning and memory. Now, the step those researchers,
those scientists need to take when it comes to intelligence is to focus on the concept of
individual differences. Intelligence research has individual differences at its heart,
because it assumes that people differ on this variable. And those differences are meaningful
and need understanding. Cognitive psychologists who have morphed into molecular biologists
studying learning and memory hate the concept of individual differences historically. Some now
are coming around to it. I once sat next to a Nobel Prize winner for his work on memory,
and I asked him about individual differences. And he said, don't go there. It'll set us back 50 years.
But I said, don't you think they're the key, though, to understanding why some people can remember
more than others? He said, you don't want to go there. I think the 21st century will be
remembered by the technology and the science that goes to individual differences. Because we now have
the data, we now have the tools, to start to measure, to start to estimate, much, much better,
not just through tests and IQ-test-type things, sort of outside-the-body
kinds of things, but by measuring all kinds of stuff about the body. So yeah, truly going to the molecular
biology, to the neurobiology, to the neuroscience. Let me ask you about this life. How does intelligence
correlate with, or lead to, or have anything to do with career success? You've mentioned these kinds
of things. And is there any data? You had an excellent conversation with Jordan Peterson,
for example. Is there any data on what intelligence means for success in life?
Life, success in life, there is a tremendous amount of validity data that looked at intelligence
test scores and various measures of life success. Now, of course, life success is a pretty broad
topic. And not everybody agrees on what success means. But there's general agreement on certain
aspects of success that can be measured. Including life expectancy, like you said.
Life expectancy, now there's life success. Life expectancy, I mean, that is such an
interesting finding. But IQ scores are also correlated to things like income. Now, okay,
so who thinks income means you're successful? That's not the point. The point is that income is one
empirical measure in this culture that says something about your level of success. Now,
you can define success in ways that have nothing to do with income. You can define success based
on your evolutionary, natural selection success. And even that, by the way,
is correlated to IQ in some studies. So however you want to define success, IQ is important.
It's not the only determinant. People get hung up on, well, what about personality? What about so
called emotional intelligence? Yes, all those things matter. The thing that matters empirically,
the single thing that matters the most, is your general ability, your general mental, intellectual
ability, your reasoning ability. And the more complex your vocation, the more complex your job,
the more G matters. A lot of occupations don't require complex thinking,
and in occupations like that, G doesn't matter as much. Within an occupation,
G might not matter so much. So that if you look at all the professors at MIT,
and had a way to rank-order them, there's a ceiling effect, is what I'm saying.
When you get past a certain threshold, then there's impact on wealth, for example, or career
success, however that's defined in each individual discipline. But after a certain point, it doesn't
matter. Actually, it does matter in certain things. So for example, there is a very classic study
that was started at Johns Hopkins when I was a graduate student there. I actually worked on this
study at the very beginning, the study of mathematically and scientifically precocious
youth. And they gave junior high school students, age 11 and 12, the standard SAT math exam.
And they found that a number of students scored very high on this exam, not a large number,
but when they cast the net all over Baltimore, they found a number of
students who scored as high on the SAT math when they were 12 years old as incoming Hopkins
freshmen. And they said, gee, now this is interesting. What shall we do now? And on a case
by case basis, they got some of those kids into their local community college math programs.
Many of those kids went on to be very successful. And now there's a 50 year follow up of those kids.
And it turns out, these kids were in the top 1%. Okay, so everybody in this study is in the top
1%. If you take that group, that rarefied group and divide them into quartiles,
so that you have the top 25% of the top 1% and the bottom 25% of the top 1%, you can find
on measurable variables of success, the top quartile does better than the bottom quartile.
In the top 1%, they have more patents, they have more publications, they have more tenure
at universities. And this is based on their, you're dividing them based on their score at age 12.
I wonder how much interesting data is in the variability, in the differences. But
boy, that's very interesting, but it's also, I don't know, somehow painful. I don't know why
it's so painful, that the g-factor is so determinant even in the nuanced top percent.
This is interesting that you find that painful. Do you find it painful that people with charisma
are very successful, can be very successful in life, even though they have no other attributes other
than that they're famous and people like them? Do you find that painful? Yes, if that charisma is untrainable.
So, one of the things, again, this is like I learned psychology from the Johnny Depp trial, but
one of the things the psychologist, the personality psychologist, was saying, and you can maybe speak
to this because you had an interest in this for a time, is that personality,
technically speaking, is the thing that doesn't change over a lifetime. It's the thing you're,
I don't know if she was actually implying that you're born with it. Well, it's a trait.
It's a trait. It's a trait that's relatively stable over time. I think that's generally correct.
So, to the degree your personality is stable over time, yes, that too is painful.
Because what's not painful is this: if I'm fat, out of shape, I can exercise
and become healthier in that way. If my diet is a giant mess and that's resulting in some kind of
conditions that my body is experiencing, I can fix that by having a better diet.
That's sort of my actions, my willed actions, can make a change. But if charisma is part of the
personality, then that part of charisma, the part that is part of the personality, is stable.
Yeah. Yeah, that's painful too. Because it's like, oh, shit, I'm stuck with this. I'm stuck with this.
Well, I mean, and this pretty much generalizes to every aspect of your being. This is who you are.
You've got to deal with it. And what a realistic appreciation
for this undermines, of course, is the fairly recent idea, prevalent in this country, that if you work hard,
you can be anything you want to be, which has morphed from the original idea that if you work
hard, you can be successful. Those are two different things. And now we have, if you work hard, you
can be anything you want to be. This is completely unrealistic. I'm sorry, it just is. Now you can
work hard and be successful. There's no question. But you know what? I could work very hard and I
am not going to be a successful theoretical physicist. I'm just not. That said, I mean,
because we had this conversation already, but it's good to repeat:
the fact that you're not going to be a theoretical physicist is not a judgment on your basic
humanity, returning again to the old "all men are created equal," which means men and women are created equal. So again,
some of the differences we're talking about in, quote unquote, success, wealth,
number of, whether you win a Nobel Prize or not, that doesn't put a measure on your basic
humanity and basic value and even the goodness of you as a human being. Because your basic
role and value in society is largely within your control, unlike some of these measures that we're
talking about. It's good. It's good to remember this. One question about the Flynn effect.
What is it? Are humans getting smarter over the years, over the decades, over the centuries?
The Flynn effect: James Flynn, who passed away about a year ago, published a set of analyses
going back a couple of decades, when he first noticed this, that IQ scores, when you looked
over the years, seemed to be drifting up. Now, this was not unknown to the people who make the tests,
because they renorm the tests periodically. And they have to renorm the tests periodically
because what 10 items correct meant relative to other people 50 years ago is not the same as what
10 items correct means relative to other people today. People are getting more things correct. Now, the scores have been
drifting up about three points per decade.
This is not a personal effect. This is a cohort effect. It's not for an individual.
But, for the world as a whole. So what's the explanation?
This has presented intelligence researchers with a great mystery. Two questions. First,
is it an effect on the 50% of the variance that's the G factor, or on the other 50%? And there's
evidence that it is a G factor effect. And second, what on earth causes this? And doesn't this mean
intelligence and the G factor cannot be genetic, because the timescale of natural selection is much,
much longer than a couple of decades? And so it's been used to try to undermine the idea that
there can be a genetic influence on intelligence. But certainly the Flynn effect can
affect the non-genetic aspects of intelligence, because genes account for maybe 50% of the variance.
Maybe higher, it could be as high as 80% for adults, but let's just say 50% for discussion.
So the Flynn effect, it's still a mystery.
It's still a mystery. That's interesting.
It's still a mystery, although the evidence is coming out. I told you before, I edit a
journal called Intelligence, and we're doing a special issue in honor of James Flynn.
So I'm starting to see papers now on really the latest research on this.
I think most people who specialize in this area of trying to understand the Flynn effect
are coming to the view based on data that it has to do with advances in nutrition and health care.
And there's also evidence that the effect is slowing down and possibly reversing.
Oh, boy. So how would nutrition, nutrition would still be connected to the G factor.
So nutrition as it relates to the G factor, so the biology that leads to the intelligence.
Yes.
That would be the claim. The hypothesis being tested by the research.
Yes. And there's some evidence from infants that nutrition has made a difference.
So it's not an unreasonable connection, but does it negate the idea that there's a genetic influence,
not logically at all. But it is very interesting so that if you take an IQ test today,
but you take the score and use the tables that were available in 1940,
you're going to wind up with a much higher IQ number. So are we really smarter than a couple
of generations ago? No, but we might be able to solve problems a little better
and make use of our G because of things like Sesame Street and other curricula in school.
More people are going to school. So there are a lot of factors here to disentangle.
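The renorming arithmetic behind that is simple; a small sketch, taking the roughly three-points-per-decade drift mentioned above at face value (the dates and the rate are the only inputs):

```python
# Rough illustration of why the same performance looks higher against old norms.
# The ~3 points per decade figure is from the conversation; the rest is arithmetic.
points_per_decade = 3
decades = (2020 - 1940) / 10          # 8 decades between the two sets of norms

drift = points_per_decade * decades   # ~24 points of drift
score_today = 100                     # average performance on today's norms
score_on_1940_norms = score_today + drift

print(f"total drift: {drift:.0f} points")
print(f"an average test-taker today would score about {score_on_1940_norms:.0f} on 1940 norms")
```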
It's fascinating that there are no clear answers yet as to whether, as a population, we're getting smarter.
When you just zoom out, that's what it looks like: as a population, we're getting smarter.
It's interesting to see what the effects of that are. I mean, this raises the question.
We've mentioned it many times, but haven't clearly addressed it, which is nature versus
nurture question. So how much of intelligence is nature? How much of it is nurture?
How much of it is determined by genetics versus environment?
All of it.
All of it is genetics.
No, all of it is nature and nurture.
How much of variance can you apportion to either?
Most of the people who work in this field say that the framing of that,
if the question is framed that way, it can't be answered, because nature and
nurture are not two independent influences. They interact with each other, and understanding
those interactions is so complex that many behavioral geneticists say it is today impossible
and always will be impossible to disentangle that no matter what kind of advances there are
in DNA technology and genomic informatics.
But still, to push back on that: that same intuition from behavioral geneticists
would lead me to believe that there cannot possibly be a stable G factor, because it's
super complex.
Many of them would assert that as a logical outcome, but because I believe there is a stable
G factor from lots of sources of data, not just one study, but lots of sources of data over decades,
I am more amenable to the idea that whatever interactions between genes and environment
exist, they can be explicated, they can be studied, and that information can be used
as a basis for molecular biology of intelligence.
Yes, and let me ask this exact question: doesn't the stability of the G factor give you
at least a hint that there is a biological basis for intelligence?
Yes, I think it's clear that the fact that an IQ score is correlated to things like
thickness of your cortex, that it's correlated to glucose metabolic rate in your brain,
that identical twins reared apart are highly similar in their IQ scores.
These are all important observations that indicate that there's
a biological basis and does anyone believe intelligence has nothing to do with the brain?
I mean, it's so obvious.
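For context on how observations like these get turned into numbers, here is a minimal sketch of two standard textbook approximations used in twin designs; both rest on strong assumptions (for example, comparable environments), and the correlations plugged in below are invented placeholders, not figures from any particular study:

```python
# Two standard twin-design approximations for heritability (the share of variance
# attributable to genes). Placeholder correlations only; not data from a real study.

def h2_from_mz_reared_apart(r_mza: float) -> float:
    """For identical (MZ) twins reared apart, their IQ correlation itself
    approximates heritability, since they share genes but not rearing environment."""
    return r_mza

def h2_falconer(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: twice the gap between identical (MZ) and fraternal (DZ)
    twin correlations, assuming their shared environments are comparable."""
    return 2 * (r_mz - r_dz)

print(h2_from_mz_reared_apart(0.75))       # -> 0.75
print(h2_falconer(r_mz=0.85, r_dz=0.60))   # -> 0.5
```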
Well, indirectly it definitely has to do with it, but the question is:
is it environment interacting with the brain, or is it the actual raw hardware of the brain?
Well, some would say that the raw hardware of the brain as it develops from conception
through adulthood, or at least through the childhood, that that so-called hardware that
you are assuming is mostly genetic, in fact, is not as deterministic as you might think,
that it is probabilistic, and what affects the probabilities are things like the intrauterine environment
and other factors like that, including chance.
That chance affects the way the neurons are connecting during gestation.
It's not, hey, it's pre-programmed, so there is pushback on the concept that
genes provide a blueprint, that it's a lot more fluid.
Well, but also, yeah, so there's a lot that happens in the first few months of development.
In nine months inside the mother's body and in the few months afterwards,
there's a lot of fascinating stuff, including chance and luck, like you said, how things
connect up. The question is afterwards, with the plasticity of the brain, how much adjustment
there is relative to the environment, how much that affects the G-factor. But the
whole conclusion of the studies that we've been talking about is that that seems to have less
and less of an effect, pretty quickly. Yes, and I do think there is more of a genetic influence,
in my view, and I'm not an expert on this, I mean, genetics is a highly technical and
complex subject. I am not a geneticist, not a behavioral geneticist, but my reading of this,
my interpretation of this, is that there is a genetic blueprint more or less, and that has
a profound influence on your subsequent intellectual development, including the G-factor.
That's not to say things can't happen. I mean, if you think of it as genes providing a potential,
fine, and then various variables impact that potential, and every parent of a newborn,
implicitly or explicitly, wants to maximize that potential. This is why you buy educational toys,
this is why you pay attention to organic baby food, this is why you do all these things
because you want your baby to be as healthy and as smart as possible, and every parent will say that.
Is there a case to be made? Can you steelman the case that genetics is a very tiny component
of all of this, and the environment is essential? I don't think the data support that genetics
is a tiny component. I think the data support the idea that the genetics is a very important,
and I don't say component, I say influence. Very important influence, and the environment
is a lot less than people believe. Most people believe environment plays a big role. I'm not
so sure. I guess what I'm asking you is, can you see where what you just said might be wrong?
Can you imagine a world, and what kind of evidence you would need to see,
to say, you know what, the intuition, the studies so far, are reversed in direction?
So one of the cool things we have now, more and more, is we're getting more and more data,
and the rate of the data is escalating because of the digital world. So when you start to look at
a very large scale of data, both from the biology side and the social side, we might be discovering
some very counterintuitive things about society. We might see edge cases that reveal that if
we actually scale those edge cases, and they become like the norm, we'll have a complete shift,
like you'll see the g-factor be able to be modified throughout life, in the teens and later in
life. So, is there any case you can make for your current intuitions being wrong?
Yes, and it's a good question because I think everyone should always be asked,
what evidence would change your mind? It's certainly not only a fair question, it is really
the key question for anybody working on any aspect of science. I think that if environment
was very important, we would have seen it clearly by now. It would have been obvious
that school interventions, compensatory education, early childhood education, all these things that
have been earnestly tried and well-funded, well-designed studies would show some effect,
and they don't. They don't. What if the school, the way we've tried school, compensatory school,
sucks, and we need to do it differently? That's what everybody said at the beginning. That's what
everybody said to Jensen. They said, well, maybe we need to start earlier. Maybe we need not do
pre-kindergarten, but pre-pre-kindergarten. It's always an infinite regress of, well, maybe we didn't get it
right. But after decades of trying, 50 years, 50 or 60 years of trying, surely something
would have worked to the point where you could actually see a result and not need a probability
level of 0.05 on some comparison of means. That's the kind of evidence that would change my mind.
Population-level interventions, like schooling, where you would see, this actually has an effect.
Yes. When you take adopted kids and they grow up in another family and you find out when those
adopted kids are adults, their IQ scores don't correlate with the IQ scores of their adoptive
parents, but they do correlate with the IQ scores of their biological parents, whom they've
never met. These are powerful observations. It would be convincing to you if the reverse
was true. Yes. There is some data on adoption that indicates the adopted children are moving
a little bit more toward their adoptive parents. To me, the overwhelming, I have this concept
called the weight of evidence where I don't interpret any one study too much. The weight of
evidence tells me genes are important, but what does that mean? What does it mean that genes are
important? Knowing that gene expression, genes don't express themselves in a vacuum. They express
themselves in an environment. The environment has to have something to do with it, especially if the
best genetic estimates of the amount of variance are around 50 or even if it's as high as 80%,
it still leaves 20% of non-genetic. Now, maybe that is all luck. Maybe that's all chance. I could
believe that. I could easily believe that. I do think after 50 years of trying various interventions
and nothing works, including memory training, including listening to Mozart, including playing
computer games, none of that has shown any impact on intelligence test scores.
Is there data on the intelligence, the IQ of parents as it relates to the children?
Yes. There is some genetic evidence of an interaction between the parents' IQ and the
environment. High IQ parents provide an enriched environment, which then can
impact the child in addition to the genes. It's that environment. There are all these
interactions. Think about the number of books in a household. This was a variable that's
correlated with IQ. Why? Especially if the kid never reads any of the books. It's because
more intelligent people have more books in their house. If you're more intelligent
and there's a genetic component to that, the child will get those genes or some of those genes,
as well as the environment. But it's not the number of books in the house that
actually directly impacts the child. The two scenarios on this are you find that,
and this was used to get rid of the SAT test, that SAT scores are highly correlated with the
socioeconomic status of the parents. All you're really measuring is how rich the parents are.
Okay. Well, why are the parents rich?
Yes.
And so the opposite kind of syllogism is that people who are very bright make more money.
They can afford homes in better neighborhoods so their kids get better schools. Now, the kids
grow up bright. Where in that chain of events does that come from? Well, unless you have a
genetically informative research design where you look at siblings that have the same biological
parents and so on, you can't really disentangle all that. Most studies of socioeconomic status
and intelligence do not have a genetically informed design. So any conclusions they make about
socioeconomic status being the cause of the IQ are a stretch. And where you
do find genetically informative designs, you find most of the variance in your outcome measures
are due to the genetic component. And sometimes the SES adds a little, but the weight of evidence is
it doesn't add very much variance to predict what's going on beyond the genetic variance.
So when you actually look at it in some... And there aren't that many studies that have
genetically informed designs, but when you do see those, the genes seem to have an advantage.
Sorry for the strange questions, but is there a connection between
fertility or the number of kids that you have and g-factor? So the kind of conventional wisdom is
people of maybe higher economic status or something like that are having fewer children.
I just loosely hear these kinds of things. Is there data that you're aware of in one direction
or another on this? Well, strange questions always get strange answers.
Yes. Do you have a strange answer for that strange question?
The answer is there were some studies that indicated the more children in a family,
the firstborn children would be more intelligent than the fourth or fifth or sixth.
It's not clear that those studies hold up over time. And of course, what you see also
is that families where there are multiple children, four, five, six, seven, really big families,
the socioeconomic status of those families usually in the modern age is not that high.
Maybe it used to be the aristocracy used to have a lot of kids. I'm not sure exactly.
But there have been reports of correlations between IQ and fertility.
But I'm not sure that the data are very strong, that the firstborn child is always the smartest.
It seems like there's some data to that, but I'm not current on that.
How would that be explained? That would be nurture.
Well, it could be nurture. It could be the intrauterine environment. I mean...
Boy, the biology is complicated.
And this is why this... Like many areas of science, you said earlier that there are a lot
of gray areas and no definitive answers. This is not uncommon in science that the closer you look
at a problem, the more questions you get, not the fewer questions, because the universe is complicated.
And the idea that we have people on this planet who can study the first nanoseconds of the Big Bang,
that's pretty amazing. And I've always said that if they can study the first nanoseconds
of the Big Bang, we can certainly figure out something about the intelligence that allows that.
I'm not sure what's more complicated, the human mind or the physics of the universe.
It's unclear to me. I think we over-emphasize it.
Well, that's a very humbling statement.
Maybe it's a very human-centric, egotistical statement that our mind is somehow super complicated.
But biology is tricky. You want to unravel consciousness? What is that?
I've always believed that consciousness and intelligence are the two real fundamental
problems of the human brain. And therefore, I think they must be related.
The two problems, like, walk together, holding hands, kind of idea.
You may not know this, but I did some of the early research on anesthetic drugs with brain imaging,
trying to answer the question, what part of the brain is the last to turn off when someone loses
consciousness and is that the first part of the brain to turn on when consciousness is regained?
And I was working with an anesthesiologist named Mike Alkire. He was really brilliant at this.
These were really the first studies of brain imaging using positron emission tomography
long before fMRI. And you would inject a radioactive sugar that labeled the brain and
the harder the brain was working, the more sugar it would take up. And then you could make a picture
of glucose use in the brain. And he was amazing. He managed to do this in normal volunteers he
brought in and anesthetized as if they were going into surgery. And he managed all the human
subjects requirements on this research. And he was brilliant at this. And what we did is we had
these normal volunteers come in on three occasions. On one occasion, he gave them enough anesthetic
drug so they were a little drowsy, and he would say, can you hear me? And the person would say, yeah.
On another occasion, they came in and he fully anesthetized them. And then we would
scan people under a no-anesthetic condition. So the same person. And we were looking to see if
we could see the part of the brain turn off. He subsequently tried to do this with fMRI which
has a faster time resolution, and you could do it in real time as the person went under
and then regained consciousness, where you couldn't do that with PET. And the results were
absolutely fascinating. We did this with different anesthetic drugs. And different drugs impacted
different parts of the brain. So we were naturally looking for the common one. And it seemed to have
something to do with the thalamus. And consciousness, this was actual data on consciousness.
Real consciousness, actual consciousness. What part of the brain turns on? What part of the brain
turns off? It's not so clear. But maybe it has something to do with the thalamus. The sequence
of events seemed to have the thalamus in it. Now here's the question. Are some people more conscious
than others? Are there individual differences in consciousness? And I don't mean it in the
psychedelic sense. I don't mean it in the political consciousness sense. I just mean it in
everyday life. Do some people go through everyday life more conscious than others? And are those
the people we might actually label more intelligent? Now the other thing I was looking for is whether
the parts of the brain we were seeing in the anesthesia studies were the same parts of the
brain we were seeing in the intelligence studies. Now this was very complicated, expensive research.
We didn't really have funding to do this. We were trying to do it on the fly. I'm not sure
anybody has pursued this. I'm retired now. He's gone on to other things. But I think it's an area
of research that would be fascinating to see the parts. There are a lot more imaging studies now
of consciousness. I'm just not up on them. But basically the question is to use imaging,
so newer imaging studies, to see, in a high-resolution spatial and temporal way,
which parts of the brain light up when you're doing intelligence tasks, and which parts of the
brain light up when you're doing consciousness tasks, and see the interplay between them.
Try to infer. That's the challenge of neuroscience. Without understanding deeply,
looking from the outside, try to infer something about how the whole thing works.
Well, imagine this. Here's a simple question. Does it take more anesthetic drug
to have a person lose consciousness if their IQ is 140 than a person with an IQ of 70?
That's an interesting way to study it. If the answer to that is a stable yes,
that's very interesting. So I tried to find out and I went to some
anesthesiology textbooks about how you dose and they dose by weight.
And what I also learned, this is a little bit off subject, anesthesiologists are never sure
how deep you are. Yeah. And they usually tell by poking you with a needle, and if you don't
jump, they tell the surgeon to go ahead. I'm not sure that's literally true, but it's...
Well, it might be very difficult to know precisely how deep you are. It has to do with the same kind
of measurements that you're doing with the consciousness. It's difficult to know.
So I don't lose my train of thought. I couldn't find in the textbooks anything about dosing by
intelligence. I asked my friend, the anesthesiologist, he said, no, he doesn't know.
I said, can we do a chart review and look at people using their years of education as a proxy
for IQ? Because if someone's gone to graduate school, that tells you something. You can make
some inference as opposed to someone who didn't graduate high school. Can we do a chart review?
And he says, no, they never really put down the exact dose. So no. So to this day,
the simple question, does it take more anesthetic drug to put someone under if they have a high IQ,
or less? Or less. It could go either way. Because, by the way, our early PET scan studies of
intelligence found the unexpected result of an inverse correlation between glucose metabolic
rate and intelligence. It wasn't that the more a brain area lit up, the better. How much it lit up was negatively
correlated to how well they did on the test, which led to the brain efficiency hypothesis,
which is still being studied today. And there's more and more evidence that the efficiency of brain
information processing is more related to intelligence than just more activity.
Yeah, it would be interesting. Again, it's a totally speculative hypothesis about the relationship
between intelligence and consciousness. It's not obvious that those two, if there's a correlation,
they could be inversely correlated. Wouldn't that be funny? The consciousness factor,
the C factor plus the G factor equals one. It's a nice tradeoff. You get a tradeoff,
how deeply you experience the world versus how deeply you're able to reason through the world.
What a great hypothesis. Certainly somebody listening to this can do this study.
Even if it's the aliens analyzing humans a few centuries from now, let me ask you from an AI
perspective. I don't know how much you've thought about machines, but there's the famous Turing
test, a test of intelligence for machines, which is a beautiful, almost cute formulation of
intelligence that Alan Turing proposed. Basically, conversation being the medium: if you can fool a human into
thinking that a machine is a human, that passes the test. I suppose you could do a similar thing
for humans. If I can fool you that I'm intelligent, then that's a good test of
intelligence. You're talking to two people and the test is saying who has a higher IQ.
It's an interesting test because maybe charisma can be very useful there,
and you're only allowed to use conversation, which is the formulation of the Turing test.
Anyway, all that to say is what are good tests of intelligence for machines?
What do you think it takes to achieve human level intelligence for machines?
Well, I have thought a little bit about this, but every time I think about these things,
I rapidly reach the limits of my knowledge and imagination. When Alexa first came out,
and I think there was a competing one, well, there was Siri with Apple and Google had Alexa.
No, no, Amazon had Alexa. Amazon had Alexa. Google has Google Home.
Something. I proposed to one of my colleagues that he buy one of these, one of each,
and then ask it questions from the IQ test. Nice. But it became apparent that they all
searched the internet, so they all can find answers to questions like how far is it between
Washington and Miami, and repeat after me. Now, I don't know, if you said to Alexa,
I want you to repeat these numbers backwards to me, I don't know what would happen. I've never
done it. So one answer to your question is... You're going to try it right now? Let's try it.
No, no, no, no. Yes, Siri.
So it would actually probably go to Google search and it would be all confusing kind of
stuff. It would fail. Well, then I guess there's a test that it would fail.
Well, but that has to do more with the language of communication versus the content. So if you
gave an IQ test to a person who doesn't speak English and the test was administered in English,
that's not really the test. Well, let's think about the computers that beat the Jeopardy champions.
Yeah, so that's because I happen to know how those are programmed. It's very hard-coded,
and there's definitely a lack of intelligence there. There are things like IQ tests for machines. There's a guy, an
artificial intelligence researcher, Francois Chollet. He's at Google. He's one of the
seminal people in machine learning. He also has a fun side thing: he developed an IQ test for machines.
How? I haven't heard that. I'd just like to know about that.
I'll actually email you this because it'd be very interesting for you. It doesn't get much
attention because people don't know what to do with it, but it deserves a lot of attention,
which is, it basically does a pattern type of test, like a standard one:
you're given three things and you have to do a fourth one, that kind of thing. You have to
understand the pattern. And for that, it really simplifies to... So the interesting thing is
he's trying not to achieve high IQ. He's trying to achieve a pretty low bar for IQ. Things that
are kind of trivial for humans and they're actually really tough for machines, which is seeing,
playing with these concepts of symmetry, of counting. If I give you one object, two objects,
three objects, you'll know the last one is four objects. You can count them. You can cluster
objects together, both visually and conceptually. We can do all these things with our mind,
things we take for granted, like the objectness of things. We can figure out what spatially is an
object and isn't. And we can play with those ideas. And machines really struggle with that. So he
really cleanly formulated these IQ tests. I wonder what that would equate to for humans with IQ,
but it'd be a very low IQ. But that's exactly the kind of formulation like, okay, we want to be able
to solve this. How do we solve this? And he does this as a challenge and nobody's been able to...
It's similar to the Alexa Prize, which Amazon is hosting, a conversational challenge.
Nobody's been able to do well on his test. But that's interesting. Those kinds of tests are interesting
because we take for granted all the ability of the human mind to play with concepts and to
formulate concepts out of novel things, things we've never seen before. We're able to use that.
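A toy example in the spirit of such a test; this is only an illustration of the give-a-few-pairs-and-infer-the-rule format with a tiny hand-built hypothesis space, not Chollet's actual benchmark or a real solver:

```python
# Toy ARC-style pattern task: infer the transformation from a few input/output
# grid pairs, then apply it to a novel grid. The grids and the rule are made up.
import numpy as np

# Demonstration pairs; the hidden rule here is "mirror the grid left-to-right".
train_pairs = [
    (np.array([[1, 0], [2, 0]]), np.array([[0, 1], [0, 2]])),
    (np.array([[3, 3, 0], [0, 4, 0]]), np.array([[0, 3, 3], [0, 4, 0]])),
]

# A tiny hypothesis space of candidate transformations.
candidates = {
    "identity": lambda g: g,
    "mirror_lr": lambda g: np.fliplr(g),
    "mirror_ud": lambda g: np.flipud(g),
    "rotate_180": lambda g: np.rot90(g, 2),
}

def fits_all(transform, pairs):
    return all(np.array_equal(transform(x), y) for x, y in pairs)

rule = next(name for name, f in candidates.items() if fits_all(f, train_pairs))
print("inferred rule:", rule)              # mirror_lr

test_input = np.array([[5, 0, 0], [0, 6, 0]])
print(candidates[rule](test_input))        # apply the inferred rule to a novel grid
```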
I've talked to a few people that design IQ tests online. They write IQ tests. And I was trying to
get some questions from them. And they spoke to the fact that we can't really share questions with
you because first of all, it's really hard work to come up with questions. It's really,
really hard work. It takes a lot of research, but it's novelty generating. You're constantly
coming up with really new things. And part of the point is that they're not supposed to be public.
They're supposed to be new to you when you look at them. It's interesting that the novelty is
fundamental to the hardness of the problem, at least a part of what makes the problem hard
is you've never seen it before. That's called fluid intelligence, as opposed to what's called
crystallized intelligence, which is your knowledge of facts. You know things, but can you use those
things to solve a problem? Those are two different things.
Do you think we'll be able to, because we spoke, I don't want to miss the opportunity to talk about this.
We spoke about the neurobiology, the molecular biology of intelligence. Do you think one day
we'll be able to modify the biology or the genetics of a person to modify their intelligence,
to increase their intelligence? We started this conversation by talking about a pill you could
take. Do you think such a pill would exist?
Metaphorically, I do. And I am supremely confident that it's possible because I am
supremely ignorant of the complexities of neurobiology.
Ignorance is bliss.
Well, I have written that understanding the complexities, this cascade of events that
happens at the synaptic level, is the nightmare of neurobiologists, and that these nightmares are what
fuel some people to try to solve them. So, some people... you have to be undaunted. I mean, yeah, this is not
easy. Look, we're still trying to figure out cancer. It was only recently that they figured
out why aspirin works. These are not easy problems, but I also have the perspective
of the history of science is the history of solving problems that are extraordinarily complex.
And seem impossible at the time.
And seem impossible at the time.
And so one of the things, when you look at companies like Neuralink, you have brain-computer
interfaces, you start to delve into the human mind and start to talk about machines
measuring, but also sending signals to, the human mind. You start to wonder what impact
that has on the g-factor, modifying in small ways or in large ways the functioning,
the mechanical, electrical, chemical functioning of the brain.
I look at everything about the brain. There are different levels of explanation. On one hand,
you have a behavioral level, but then you have brain circuitry. And then you have neurons.
And then you have dendrites. And then you have synapses. And then you have the neurotransmitters
and the presynaptic and the postsynaptic terminals. And then you have all the things
that influence neurotransmitters. And then you have the individual differences among people.
Yeah, it's complicated. But 51 million people in the United States
have IQs under 85 and struggle with everyday life. Shouldn't that motivate
people to take a look at this? Yeah, but I just want to linger one more time that
we have to remember that the science of intelligence,
the measure of intelligence is only a part of the human condition. The thing that makes life
beautiful and the creation of beautiful things in this world is perhaps loosely correlated,
but it's not dependent entirely on intelligence. Absolutely. I certainly agree with that.
And so for anyone sort of listening, I'm still not convinced that
sort of more intelligence is always better if you want to create beauty in this world.
I don't know. Well, I didn't say more intelligence is always better if you want to create beauty. I
just said all things being equal, more is better than less. That's all I mean. Yeah, but that's something I just want to say, because to me, one of the things that makes life great is the opportunity to create beautiful things. And so I just want to sort of empower people to do that no matter what some IQ test says. At the population level, we do need to look at IQ tests
to help people and to also inspire us to take on some of these extremely difficult scientific
questions. Do you have advice for young people in high school, in college, whether they're
thinking about a career or they're thinking about a life they can be proud of? Is there advice you can give? Whether they want to pursue psychology or biology or engineering,
or they want to be artists and musicians and poets? I can't advise anybody on that level
of what their passion is. But I can say if you're interested in psychology, if you're interested
in science and the science around the big questions of consciousness and intelligence
and psychiatric illness, and we haven't really talked about brain illnesses and what we might learn from them. You know, if you are trying to develop a drug to treat Alzheimer's disease, you are trying to
develop a drug to impact learning and memory, which are core to intelligence. So it could well
be that the so-called IQ pill will come from a pharmaceutical company trying to develop a drug
for Alzheimer's disease. Because that's exactly what you're trying to do, right? Yeah, just like
you said. What will that drug do in a college student who doesn't have Alzheimer's disease?
So I would encourage people who are interested in psychology, who are interested in science
to pursue a scientific career and address the big questions. And the most important thing I
can tell you, if you're going to be in kind of a research environment, is you've got to follow
the data where the data take you. You can't decide in advance where you want the data to go.
And if the data take you to places that you don't have the technical expertise to follow,
like, you know, I would like to understand more about molecular biology, but I'm not going to
become a molecular biologist now, but I know people who are. And my job is to get them interested in taking their expertise in this direction. And that's not so easy, but...
And if the data takes you to a place that's controversial, that's counterintuitive in this world? I would say it's probably a good idea to still push forward boldly, but to communicate the interpretation of the results with skill, with compassion, with a greater breadth of understanding of humanity, not just the science but the impact of the results.
One famous psychologist wrote about this issue, that somehow a balance has to be found between
pursuing the science and communicating it with respect to people's sensitivities, the legitimate
sensitivities. Somehow. He didn't say how, just somehow. And this is the thing: every part of that sentence, the "somehow" and the "balance," is left up to the interpretation of the reader. Let me ask you,
you said big questions, the biggest or one of the biggest. We already talked about consciousness
and intelligence, one of the most fascinating, one of the biggest questions. But let's talk
about the why. Why are we here? What's the meaning of life? Oh, I'm not going to tell you.
You know, you're not going to tell me? I'm going to have to wait for your next book.
The meaning of life, you know, we do the best we can to get through the day.
And then there's just a finite number of days. Are you afraid of the finiteness of it?
I think about it more and more as I get older. Yeah, I do. And it's one of these human things,
that it is finite. We all know it. Most of us deny it and don't want to think about it.
Sometimes you think about it in terms of estate planning. You try to do the rational thing.
Sometimes it makes you work harder because you know your time is more and more limited and you
want to get things done. I don't know where I am on that. It is just one of those things that's
always in the back of my mind. And I don't think that's uncommon.
Well, it's just like G factor in intelligence. It's a hard truth that's there. And sometimes you
kind of walk past it and you don't want to look at it, but it's still there. Yeah. Yes, you can't
escape it. And the thing about the G factor in intelligence is that everybody knows it's true on a personal, daily basis. Even if you think back to when you were in school, you know who the smart kids were. When you are on the phone talking to a customer service representative who, in response to your detailed question, is reading a script back to you, you get furious at this. And have you ever called this person a moron, or wanted to call this person
a moron? You're not listening to me. Everybody has had the experience of dealing with people
who they think are not at their level. It's just common because that's the way human beings are.
That's the way life is. But we also have a poor estimation of our own intelligence, and our judgment of the character of other people is not as good
as a battery of tests. That's where bias comes in. That's where our history, our emotions,
all of that comes in. So people on the internet, there's such a thing as the internet. And people
on the internet will call each other dumb all the time. And the worry here is that
we give up on people. We put them in a bin just because of one interaction or some small number
of interactions as if that's it. They're hopeless. That's just in their genetics. But I think no matter
what the science here says, once again, that does not mean we should not have compassion
for our fellow man. That's exactly what the science does say. It's not the opposite of what
the science says. Everything I know about psychology, everything I've learned about
intelligence, everything points to the inexorable conclusion that you have to treat people
as individuals respectfully and with compassion. Because through no fault of their own,
some people are not as capable as others. If you want to turn a blind eye to it, if you want to come up with theories about why that might be true, fine. I would like to fix some of it as best I can.
And everybody is deserving of love. Richard, this is a good way to end it, I think.
I'm just getting warmed up here. I know. I know you can go for another many hours. But
to respect your extremely valuable time, this is an amazing conversation. Thank you for the Teaching Company lectures you've given, for The Neuroscience of Intelligence, just for the work you're doing. It's a difficult topic, a topic that's controversial and sensitive to people, and you push forward boldly and in that nuanced way. Just thank you for
everything you do. And thank you for asking the big questions of intelligence, of consciousness.
Well, thank you for asking me. I mean, there's nothing like good conversation on these topics.
Thanks for listening to this conversation with Richard Hire. To support this podcast,
please check out our sponsors in the description. And now let me leave you with some words from
Albert Einstein. It is not that I'm so smart, but I stay with the questions much longer.
Thank you for listening and hope to see you next time.