By the time we get to 2045,
we'll be able to multiply our intelligence
many millions fold.
And it's just very hard to imagine what that will be like.
The following is a conversation with Ray Kurzweil,
author, inventor, and futurist,
who has an optimistic view of our future
as a human civilization,
predicting that exponentially improving technologies
will take us to a point of a singularity,
beyond which superintelligent, artificial intelligence
will transform our world in nearly unimaginable ways.
18 years ago, in the book The Singularity Is Near,
he predicted that the onset of the singularity
will happen in the year 2045.
He still holds to this prediction and estimate.
In fact, he's working on a new book on this topic
that will hopefully be out next year.
This is the Lex Fridman podcast.
To support it, please check out our sponsors
in the description.
And now, dear friends, here's Ray Kurzweil.
In your 2005 book titled The Singularity Is Near,
you predicted that the singularity will happen in 2045.
So now, 18 years later, do you still estimate
that the singularity will happen in 2045?
And maybe first, what is the singularity,
the technological singularity, and when will it happen?
Singularity is where computers really change our view
of what's important and change who we are.
But we're getting close to some salient things
that will change who we are.
A key thing is 2029, when computers will pass
the Turing test.
And there's also some controversy
whether the Turing test is valid, I believe it is.
Most people do believe that,
but there's some controversy about that.
But Stanford got very alarmed at my prediction about 2029.
I made this prediction in 1999 in my book,
The Age of Spiritual Machines.
And then you repeated the prediction in 2005.
In 2005.
Yeah.
So they held an international conference,
you might have been aware of it,
of AI experts in 1999,
to assess this view.
So people gave different predictions
and they took a poll.
It was really the first time that AI experts worldwide
were polled on this prediction.
And the average poll was 100 years.
20% believed it would never happen.
And that was the view in 1999.
80% believed it would happen,
but not within their lifetimes.
There's been so many advances in AI
that the poll of AI experts has come down over the years.
So a year ago, something called Metaculus,
which you may be aware of,
assessed different types of experts on the future.
They again assessed what AI experts then felt.
And they were saying 2042.
For the Turing test.
For the Turing test.
Yeah, so it's coming down.
And I was still saying 2029.
A few weeks ago, they again did another poll
and it was 2030.
So AI experts now basically agree with me.
I haven't changed at all.
I've stayed with 2029.
And AI experts now agree with me,
but they didn't agree at first.
So Alan Turing formulated the Turing test and...
Right, now what he said was very little about it.
I mean, the 1950 paper where he had articulated
the Turing test,
he just had a few lines that talked about the Turing test.
And it really wasn't very clear how to administer it.
And he said if they did it in like 15 minutes,
that would be sufficient,
which I don't really think is the case.
These large language models now,
some people are convinced by it already.
I mean, you can talk to it and it will have a conversation with you.
You can actually talk to it for hours.
So it requires a little more depth.
There's some problems with large language models,
which we can talk about.
But some people are convinced by the Turing test.
Now, if somebody passes the Turing test,
what are the implications of that?
Does that mean that they're sentient,
that they're conscious or not?
It's not necessarily clear what the implications are.
Anyway, I believe 2029, that's six, seven years from now,
we'll have something that passes the Turing test
and a valid Turing test,
meaning it goes for hours, not just a few minutes.
Can you speak to that a little bit?
What is your formulation of the Turing test?
You've proposed a very difficult version
of the Turing test, so what does that look like?
Basically, it's just to assess it over several hours
and also have a human judge that's fairly sophisticated
on what computers can do and can't do.
If you take somebody who's not that sophisticated
or even an average engineer,
they may not really assess various aspects of it.
So you really want the human to challenge the system?
Exactly, exactly.
On its ability to do things
that require common sense reasoning, perhaps.
That's actually a key problem with large language models.
They don't do well on these kinds of tests
that would involve assessing chains of reasoning.
But you can lose track of that.
If you talk to them, they actually can talk to you pretty well
and you can be convinced by it.
But the test is whether it would really convince you
that it's a human, whatever that takes.
Maybe it would take days or weeks,
but it would really convince you that it's human.
Large language models can appear that way.
You can read conversations and they appear pretty good.
There are some problems with it.
It doesn't do math very well.
You can ask, how many legs do 10 elephants have?
And they'll tell you, well, okay,
each elephant has four legs and it's 10 elephants,
so it's 40 legs.
And you go, okay, that's pretty good.
How many legs do 11 elephants have?
And they don't seem to understand the question.
Do all humans understand that question?
No, that's the key thing.
I mean, how advanced a human do you want it to be?
But we do expect a human to be able
to do multi-chain reasoning,
to be able to take a few facts and put them together.
Not perfectly.
And we see that in a lot of polls
that people don't do that perfectly at all.
But so it's not very well-defined,
but it's something where it really would convince you
that it's a human.
Is your intuition that large language models
will not be solely the kind of system
that passes the Turing test in 2029?
Do we need something else?
No, I think it will be a large language model,
but they have to go beyond what they're doing now.
I think we're getting there.
And another key issue is if somebody actually passes
the Turing test validly, I would believe they're conscious.
And then not everybody would say that.
It's okay, we can pass the Turing test,
but we don't really believe that it's conscious.
That's a whole nother issue.
But if it really passes the Turing test,
I would believe that it's conscious.
But I don't believe that of large language models today.
If it appears to be conscious,
that's as good as being conscious, at least for you,
in some sense.
I mean, consciousness is not something that's scientific.
I mean, I believe you're conscious,
but it's really just a belief.
And we believe that about other humans
that at least appear to be conscious.
When you go outside of shared human assumptions,
like, are animals conscious?
some people believe they're not conscious,
some people believe they are conscious,
and would a machine that acts just like a human be conscious?
I mean, I believe it would be,
but that's really a philosophical belief.
It's not, you can't prove it.
I can't take an entity and prove that it's conscious.
There's nothing that you can do that would indicate that.
It's like saying a piece of art is beautiful.
You can say it, multiple people can experience
a piece of art is beautiful, but you can't prove it.
But it's also an extremely important issue.
I mean, imagine a world where nobody's conscious;
the world may as well not exist.
And so some people, like, say, Marvin Minsky,
said, well, consciousness is not logical,
it's not scientific, and therefore we should dismiss it.
And any talk about consciousness is just not to be believed.
But when he actually engaged with somebody who was conscious,
he actually acted as if they were conscious.
He didn't ignore that.
He acted as if consciousness does matter.
Exactly. Whereas he said it didn't matter.
Well, that's Marvin Minsky.
Yeah.
He's full of contradictions.
But that's true of a lot of people as well.
But to you, consciousness matters.
But to me, it's very important,
but I would say it's not a scientific issue.
It's a philosophical issue.
And people have different views.
And some people believe that anything
that makes a decision is conscious.
So your light switch is conscious.
It's the level of consciousness that's low.
It's not very interesting, but that's a consciousness.
And anything, so a computer that makes
a more interesting decision, still not a human being.
But it's also conscious and at a higher level
than your light switch.
So that's one view.
There's many different views of what consciousness is.
So if a system passes the Turing test,
it's not scientific.
But in issues of philosophy,
things like ethics start to enter the picture.
Do you think there would be,
we would start contending as a human being
and contending as a human species
about the ethics of turning off such a machine?
Yeah. I mean, that's definitely come up.
It hasn't come up in reality yet.
But I'm talking about 2029.
It's not that many years from now.
And so what are our obligations to it?
It has a different, I mean, a computer that's conscious
and has a little bit different connotations
than a human.
We have a continuous consciousness.
We're in an entity that does not last forever.
Now, actually, a significant portion of humans who have ever lived
still exist and are therefore still conscious.
But anybody who is over a certain age
doesn't exist anymore.
That wouldn't be true of a computer program.
You could completely turn it off
and a copy of it could be stored and you could recreate it.
And so it has a different type of validity.
You could actually take it back in time.
You could eliminate its memory and have it go over again.
I mean, it has a different kind of connotation
than humans do.
Well, perhaps you can do the same thing with humans.
It's just that we don't know how to do that yet.
It's possible that we figure out all of these things
on the machine first.
But that doesn't mean the machine isn't conscious.
I mean, if you look at the way people react,
say, C-3PO or other machines that appear conscious in movies,
the movies don't actually establish that they're conscious,
but we see that they are a machine
and people will believe that they are conscious
and they'll actually worry about it
if they get into trouble and so on.
So, 2029 is going to be the first year
when a major thing happens.
Right.
And that will shake our civilization
to start to consider the role of AI in this world.
Yes and no.
I mean, this one guy at Google claimed
that the machine was conscious.
That's just one person.
Right.
When it starts to happen to scale.
Well, that's exactly right,
because most people have not taken that position.
I don't take that position.
I mean, I've used different things like this
and they don't appear to me to be conscious.
As we eliminate various problems
of these large language models,
more and more people will accept that they're conscious.
So, when we get to 2029,
I think a large fraction of people
will believe that they're conscious.
So, it's not going to happen all at once.
I believe it will actually happen gradually,
and it's already started to happen.
And so, that takes us one step closer to the singularity.
Another step then is in the 2030s,
when we can actually connect our neocortex,
which is where we do our thinking to computers.
And I mean, this actually gains a lot
from being connected to computers
that amplify its abilities.
I mean, if this did not have any connection,
it would be pretty stupid.
It could not answer any of your questions.
If you're just listening to this, by the way,
we're just holding up the all-powerful smartphone.
So, we're going to do that directly from our brains.
I mean, these are pretty good.
These already have amplified our intelligence.
I'm already much smarter than I would otherwise be
if I didn't have this.
Because I remember when I first wrote
The Age of Intelligent Machines,
there was no way to get information from computers.
I actually would go to a library, find a book,
find the page that had the information I wanted,
and I'd go to the copier,
and my most significant information tool
was a roll of quarters so I could feed the copier.
So, we're already greatly advanced that we have these things.
There's a few problems with it.
First of all, I constantly put it down,
and I don't remember where I put it.
I've actually never lost it,
but you have to find it,
and then you have to turn it on.
So, there's a certain amount of steps.
It would actually be quite useful
if someone would just listen to your conversation
and say, oh, that's so-and-so actress
and tell you what you're talking about.
So, going from active to passive
where it just permeates your whole life
exactly the way your brain does when you're awake.
Your brain is always there.
Right.
That's something that could actually just about be done today
where you would listen to your conversation,
understand what you're saying,
understand what you're missing
and give you that information.
But another step is to actually go inside your brain.
And there are some prototypes
where you can connect your brain.
They actually don't have the amount of bandwidth
that we need.
They can work, but they work fairly slowly.
So, if it actually would connect to your neocortex...
And the neocortex, which I describe
in How to Create a Mind,
actually has different levels.
And as you go up the levels,
it's kind of like a pyramid.
The top level is fairly small.
And that's the level where you want to connect
these brain extenders.
So, I believe that will happen in the 2030s.
So, just the way this is greatly amplified
by being connected to the cloud,
we can connect our own brain to the cloud
and just do what we can do by using this machine.
Do you think it would look like the brain-computer interface
of like Neuralink?
Well, Neuralink is an attempt to do that.
It doesn't have the bandwidth that we need yet.
Right.
But I think...
I mean, they're going to get permission for this
because there are a lot of people who absolutely need it
because they can't communicate.
I know a couple of people like that who have ideas
and they cannot...
They cannot move their muscles and so on.
They can't communicate.
So, for them, this would be very valuable.
But we could all use it.
Basically, it'd be...
turn us into something that would be like having a phone,
but it would be in our minds.
It would be kind of instantaneous.
And maybe communication between two people
would not require this low bandwidth mechanism of language.
Yes.
A spoken word.
Exactly.
We don't know what that would be.
Although, we do know that computers can share information
like language instantly.
They can share many, many books in a second
so we could do that as well.
If you look at what our brain does,
it actually can manipulate different parameters.
So, we talk about these large language models.
I mean, I had written that
it requires a certain amount of information
in order to be effective
and that we would not see AI really being effective
until it got to that level.
And we had large language models that were like 10 billion bytes.
It didn't work very well.
They finally got to 100 billion bytes
and now they work fairly well
and now we're going to a trillion bytes.
If you say LaMDA has 100 billion bytes,
what does that mean?
Well, what if you had something that had one byte,
one parameter?
Maybe you want to tell whether or not something is an elephant
or not.
And so, you put in something that would detect its trunk.
If it has a trunk, it's an elephant.
If it doesn't have a trunk, it's not an elephant.
And that would work fairly well.
There's a few problems with it
and it really wouldn't be able to tell what a trunk is.
And maybe other things other than elephants have trunks.
You might get really confused.
Yeah, exactly.
I'm not sure which animals have trunks,
but how do you define a trunk?
But yeah, that's one parameter.
You can do okay.
So, these things have 100 billion parameters.
So, they're able to deal with very complex issues.
All kinds of trunks.
Human beings actually have a little bit more than that,
but they're getting to the point where they can emulate humans.
If we were able to connect this to our neocortex,
we would basically add more of these abilities
to make distinctions.
We could ultimately be much smarter
and also be attached to information that we feel is reliable.
So, that's where we're headed.
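As a minimal sketch of the "one parameter" elephant example above (a hypothetical illustration, not anything described in the conversation), a single-feature rule might look like this in Python:

```python
# Hypothetical toy code for the "one parameter" idea: a classifier whose only
# knob is a threshold on a single hand-picked feature, "trunk-ness".

TRUNK_THRESHOLD = 0.5  # the single parameter

def is_elephant(trunk_score: float) -> bool:
    # One-parameter rule: if it has a trunk, call it an elephant.
    # It "does okay", but it fails on tapirs and anteaters (trunk-like snouts)
    # and on elephants whose trunk is hidden; large models instead combine
    # billions of learned parameters like this one.
    return trunk_score >= TRUNK_THRESHOLD

print(is_elephant(0.9))  # True: strong trunk signal
print(is_elephant(0.1))  # False: no trunk detected
```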
So, you think that there will be a merger in the 2030s,
an increasing amount of merging between the human brain
and the AI brain.
Exactly.
And the AI brain is really an emulation of human beings.
I mean, that's why we're creating them
because human beings act the same way.
And this is basically to amplify them.
I mean, this amplifies our brain.
It's a little bit clumsy to interact with,
but it definitely is way beyond what we had 15 years ago.
But the implementation becomes different
just like a bird versus the airplane.
Even though the AI brain is an emulation,
it starts adding features we might not otherwise have.
Like ability to consume a huge amount of information quickly.
Like look up thousands of Wikipedia articles in one take.
Exactly.
We can get, for example, to issues like simulated biology
where it can simulate many different things at once.
We already had one example of simulated biology,
which is the Moderna vaccine.
And that's going to be now the way in which we create medications.
But they were able to simulate what each example of an mRNA
would do to a human being.
And they were able to simulate that quite reliably.
And they actually simulated billions of different mRNA sequences.
And they found the ones that were the best
and they created the vaccine.
And talk about doing it quickly,
they did that in two days.
Now, how long would a human being take to simulate
billions of different mRNA sequences?
I don't know if we could do it at all,
but it would take many years.
They did it in two days.
And one of the reasons that people didn't like the vaccines
is because they thought it was done too quickly.
But that actually included the time it took to test it out,
which was 10 months.
So they figured, okay, it took 10 months to create this.
Actually, it took us two days.
And we also will be able to ultimately do the tests
in a few days as well.
Oh, because we can simulate how the body will respond to it.
That's a little bit more complicated
because the body has a lot of different elements
and we have to simulate all of that.
But that's coming as well.
So ultimately, we could create it in a few days
and then test it in a few days and it would be done.
And we can do that with every type of medical insufficiency
that we have.
So curing all diseases,
improving certain functions of the body,
supplements, drugs, for recreation, for health,
for performance, for productivity, all that kind of stuff.
Well, that's where we're headed.
Because I mean, right now we have a very inefficient way
of creating these new medications.
But we've already shown it.
And the Moderna vaccine is actually the best
of the vaccines we've had.
And it literally took two days to create.
And we'll get to the point where we can test it out also quickly.
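A minimal sketch of the screening idea described above, assuming a generic generate-and-score loop with a placeholder scoring function (this is a hypothetical illustration, not Moderna's actual pipeline):

```python
import random

# Hypothetical sketch of in-silico screening: generate many candidate mRNA
# sequences, score each with a simulator, and keep the best. The scoring
# function is a stand-in, not a real biological model.

BASES = "ACGU"

def random_sequence(length: int = 30) -> str:
    return "".join(random.choice(BASES) for _ in range(length))

def simulated_score(seq: str) -> float:
    # Placeholder score (here, just GC content) where a real pipeline would
    # use a biophysical or learned model of how the sequence behaves.
    return sum(base in "GC" for base in seq) / len(seq)

def screen(num_candidates: int = 100_000, keep: int = 5) -> list:
    candidates = [random_sequence() for _ in range(num_candidates)]
    return sorted(candidates, key=simulated_score, reverse=True)[:keep]

if __name__ == "__main__":
    for seq in screen():
        print(seq, round(simulated_score(seq), 3))
```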
Are you impressed by AlphaFold and the solution
to the protein folding, which essentially is simulating,
modeling this primitive building block of life,
which is a protein, in its 3D shape?
It's pretty remarkable that they can actually predict
what the 3D shape of these things are.
But they did it with the same type of neural net
that won at the game of Go, for example.
So it's all the same, awesome approaches?
They took that same thing and just changed the rules to chess.
And within a couple of days, it played chess at a master level,
greater than any human being.
And the same thing then worked for AlphaFold,
which no human had done.
I mean, the best humans could maybe figure out
about 18, 20% of what the shapes would be.
And after a few iterations, it ultimately got to just about 100%.
Do you still think the singularity will happen in 2045?
And what does that look like?
Once we can amplify our brain with computers directly,
which will happen in the 2030s, that's going to keep growing.
It's another whole theme, which is the exponential growth
of computing power.
Yeah, so looking at price performance of computation
from 1939 to 2021.
Right, so that starts with the very first computer
actually created by the German engineer Konrad Zuse during World War II.
And you might have thought that that might be significant,
but actually the Germans didn't think computers were significant
and they completely rejected it.
The second one is also a Zuse machine.
And by the way, we're looking at a plot with the X-axis
being the year from 1935 to 2025.
And on the y-axis, on a log scale, is computations per second
per constant dollar, so dollars normalized for inflation.
And it's growing linearly on the log scale,
which means it's growing exponentially.
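A brief note on why a straight line on a log-scale plot implies exponential growth (standard algebra, not something stated in the conversation): if price-performance grows by a constant factor r each year, then

\[
y(t) = y_0 \, r^{t} \quad\Rightarrow\quad \log y(t) = \log y_0 + t \log r,
\]

which is linear in t with slope log r, so steady exponential growth appears as a straight line when the y-axis is logarithmic.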
The third one was the British computer,
which the Allies did take very seriously
and it cracked the German code
and enabled the British to win the Battle of Britain,
which otherwise absolutely would not have happened
if they hadn't cracked the code using that computer.
But that's an exponential graph,
so a straight line on that graph is exponential growth.
And you see 80 years of exponential growth.
And I would say about every five years,
and this happened shortly before the pandemic,
people say it's coming to an end. They call it Moore's Law,
which is not correct, because it's not all Intel.
In fact, it started decades before Intel was even created,
and it didn't start with transistors formed into a grid.
So it's not just transistor count or transistor size,
it's a bunch of different components.
It started with relays, then went to vacuum tubes,
then went to individual transistors,
and then to integrated circuits.
And integrated circuits actually start
like in the middle of this graph.
And it has nothing to do with Intel.
Intel actually was a key part of this,
but a few years ago, they stopped making the fastest chips.
But if you take the fastest chip of any technology,
in that year, you get this kind of graph.
And it's definitely continuing for 80 years.
So you don't think Moore's Law broadly defined is dead.
It's been declared dead multiple times throughout this process.
I don't like the term Moore's Law because it has nothing to do
with Moore or with Intel.
But yes, the exponential growth of computing is continuing,
and has never stopped.
From various sources.
I mean, it went through World War II,
it went through global recessions.
It's just continuing.
And if you continue that out along with software gains,
which is all another issue, and they really multiply,
whatever you get from software gains,
you multiply by the computer gains,
you get faster and faster speed.
This is actually a chart of the largest computer models that have been created.
And that actually expands by a factor of two roughly every six months.
So we're looking at a plot from 2010 to 2022.
On the x-axis is the publication date of the model,
and perhaps sometimes the actual paper associated with it.
And on the y-axis is training compute in FLOPs.
And so basically this is looking at the increase in the,
not transistors, but the computational power of neural networks.
Yeah, it's the computational power that created these models.
And that's doubled every six months.
Which is even faster than the doubling of transistors.
Yeah.
And actually, since it grows faster than the cost of computation comes down,
it has actually become a greater and greater investment to create these models.
But at any rate, by the time you get to 2045,
we'll be able to multiply our intelligence many millions fold.
And it's just very hard to imagine what that will be like.
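As a rough back-of-the-envelope check on the "millions fold" figure (my own arithmetic under an assumed doubling time, not a calculation from the conversation): if effective computing capacity doubles roughly every year, then over the roughly 20 years from the mid-2020s to 2045,

\[
2^{20} \approx 1{,}048{,}576 \approx 10^{6},
\]

about a million-fold multiplier; a shorter doubling time, like the six-month doubling of training compute mentioned above, would push that into the many millions.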
And that's the singularity where we can't even imagine.
Right.
That's why we call it the singularity.
In physics, a singularity is where something gets sucked into a black hole
and you can't tell what's going on in there,
because no information can get out of it.
There's various problems with that, but that's the idea.
It's too much beyond what we can imagine.
Do you think it's possible we don't notice
that what the singularity actually feels like
is we just live through it
with exponentially increasing cognitive capabilities.
And we almost, because everything's moving so quickly,
aren't really able to introspect that our life has changed.
Yeah.
But I mean, we will have that much greater capacity to understand things
so we should be able to look back.
Looking at history, understand history.
But we will need people basically like you and me
to actually think about these things.
Think about it.
But we might be distracted by all the other sources of entertainment and fun
because the exponential power of intellect is growing,
but also the...
There'll be a lot of fun.
The amount of ways you can have, you know...
I mean, we already have a lot of fun with computer games and so on
that are really quite remarkable.
What do you think about the digital world, the metaverse, virtual reality?
Will that have a component in this
or will most of our advancement be in physical reality?
Well, that's a little bit like Second Life,
although Second Life actually didn't work very well
because it couldn't handle very many people.
And I don't think the metaverse has come into being yet.
I think there will be something like that.
It won't necessarily be from that one company.
I mean, there's going to be competitors.
But yes, we're going to live increasingly online,
particularly if our brains are online.
I mean, how could we not be online?
Do you think it's possible that, given this merger with AI,
most of our meaningful interactions will be in this virtual world?
Most of our life, we fall in love, we make friends,
we come up with ideas, we do collaborations, we have fun...
I know of a situation with somebody who's marrying somebody that they never met.
I think they just met her briefly before the wedding,
but she actually fell in love with this other person,
never having met them.
And I think the love is real.
That's a beautiful story.
But do you think that story is one that might be experienced
not just by hundreds of thousands of people,
but by hundreds of millions of people?
I mean, it really gives you appreciation
for these virtual ways of communicating.
And if anybody can do it, then it's really not such a freak story.
So I think more and more people will do that.
But that's turning our back on our entire history of evolution.
The old days, we used to fall in love by holding hands
and sitting by the fire, that kind of stuff.
Actually, I have five patents on where you can hold hands,
even if you're separated.
Great.
So the touch, the sense, it's all just senses.
It's all just...
Yeah, I mean, touch is...
It's not just that you're touching someone or not,
there's a whole way of doing it and it's very subtle,
but ultimately we can emulate all of that.
Are you excited by that future?
Do you worry about that future?
I have certain worries about the future, but not...
Not that.
...virtual touch.
Well, I agree with you.
You described six stages in the evolution of information processing
in the universe as you started to describe.
Can you maybe talk through some of those stages
from the physics and chemistry to DNA and brains
and then to the very end, to the very beautiful end of this process?
Well, it actually gets more rapid.
So physics and chemistry, that's how we started.
So from the very beginning of the universe...
We had lots of electrons and various things traveling around,
and that took actually many billions of years,
kind of jumping ahead here to kind of some of the last stages
where we have things like love and creativity.
It's really quite remarkable that that happens.
But finally, physics and chemistry created biology and DNA.
And now you had actually one type of molecule
that described the cutting edge of this process.
And we go from physics and chemistry to biology.
And finally, biology created brains.
I mean, not everything that's created by biology has a brain,
but eventually brains came along.
And all of this is happening faster and faster.
Yeah.
It created increasingly complex organisms.
Another key thing is actually not just brains, but our thumb.
Because there's a lot of animals with brains even bigger than humans.
Elephants have a bigger brain.
Whales have a bigger brain.
But they've not created technology because they don't have a thumb.
So that's one of the really key elements in the evolution of humans.
This physical manipulator device that's useful for puzzle solving
in the physical world, in reality.
So I mean, I could think, I could look at a tree and go,
oh, I could actually trim that branch down and eliminate the leaves
and carve a tip on it and create technology.
And you can't do that if you don't have a thumb.
Yeah.
So thumbs created technology.
And technology also had a memory.
And now those memories are competing with the scale and scope of human beings.
And ultimately we'll go beyond it.
And then we're going to merge human technology with human intelligence
and understand how human intelligence works, which I think we already do.
And we're putting that into our human technology.
So create the technology inspired by our own intelligence
and then that technology supersedes us in terms of its capabilities.
And we ride along.
Or do you ultimately see it as fun?
We ride along, but a lot of people don't see that.
They say, well, you've got humans and you've got machines
and there's no way we can ultimately compete with humans.
And you can already see that.
Lee Sedol, who's like the best Go player in the world,
says he's not going to play Go anymore.
Yeah.
Because playing Go for human, that was like the ultimate in intelligence
because no one else could do that.
But now a machine can actually go way beyond him.
And so he says, well, there's no point playing it anymore.
That may be more true for games than it is for life.
I think there's a lot of benefit to working together with AI in regular life.
So if you were to put a probability on it,
is it more likely that we merge with AI or AI replaces us?
A lot of people just think computers come along and they compete with them.
We can't really compete and that's the end of it.
As opposed to them increasing our abilities.
And if you look at most technology, it increases our abilities.
I mean, look at the history of work.
Look at what people did a hundred years ago.
Does any of that exist anymore?
I mean, if you were to predict that all of these jobs would go away
and would be done by machines, people would say,
well, no one's going to have jobs and it's going to be massive unemployment.
But I show in this book that's coming out,
the amount of people that are working,
even as a percentage of the population has gone way up.
We're looking at the x-axis year from 1774 to 2024
and on the y-axis, personal income per capita in constant dollars
and it's growing super linearly.
Yeah, 2021 constant dollars and it's gone way up.
That's not what you would predict,
given that we would predict that all these jobs would go away.
But the reason it's gone up is because
we've basically enhanced our own capabilities by using these machines
as opposed to them just competing with us.
That's a key way in which we're going to be able to become far smarter
than we are now by increasing the number of different parameters
we can consider in making a decision.
I was very fortunate.
I am very fortunate to be able to get a glimpse preview of your upcoming book.
The Singularity Is Nearer.
One of the themes outside of just discussing the increasing
exponential growth of technology,
one of the themes is that things are getting better in all aspects of life.
You talked just about this.
One of the things you're saying is with jobs.
Let me just ask about that.
There is a big concern that automation,
especially powerful AI, will get rid of jobs.
People will lose jobs.
As you were saying, the sense is that throughout the history of the 20th century,
automation did not do that ultimately.
The question is, will this time be different?
Right. That is the question.
Will this time be different?
It really has to do with how quickly we can merge with this type of intelligence.
If something like LaMDA or GPT-3 is out there
and it's overcome some of its key problems,
but we really haven't enhanced human intelligence, that might be a negative scenario.
That's why we create technologies to enhance ourselves.
And I believe we will be enhanced. We're not just going to sit here with the 300 million modules we have in our neocortex.
We're going to be able to go beyond that.
That's useful, but we can multiply that by 10, by 100, by a thousand, by a million.
You might think, what's the point of doing that?
It's like asking somebody that's never heard music,
what's the value of music?
You can't appreciate it until you've experienced it.
There's some worry that there will be a wealth disparity.
Only the rich people will have access to this kind of thing,
and then, because of this ability to merge,
the rich will get richer exponentially faster.
That's just like cell phones.
There's like 4 billion cell phones in the world today.
In fact, when cell phones first came out,
you had to be fairly wealthy; they weren't inexpensive.
You had to have some wealth in order to afford them.
There were these big, bulky phones.
They didn't work very well. They did almost nothing.
You can only afford these things if you're wealthy
at a point where they really don't work very well.
Achieving scale and making it inexpensive
is part of making the thing work well.
Exactly.
These are not totally cheap, but they're pretty cheap.
You can get them for a few hundred dollars.
Especially given the kind of things it provides for you.
There's a lot of people in the third world that have very little,
but they have a smartphone.
Absolutely.
And the same will be true with AI.
I see homeless people have their own cell phones.
Yeah, so your sense is any kind of advanced technology
will take the same trajectory.
Right.
It ultimately becomes cheap and will be affordable.
I probably would not be the first person to put something in my brain
to connect to computers because I think it will have limitations.
But once it's really perfected,
and at that point it will be pretty inexpensive,
it will be pretty affordable.
So in which other ways, as you outline your book,
is life getting better?
Well, I have 50 charts in there where everything is getting better.
I think there's a kind of cynicism about,
even if you look at extreme poverty, for example.
For example, this is actually a poll taken on extreme poverty.
And people were asked, has poverty gotten better or worse?
And the options are: increased by 50%, increased by 25%, remained the same,
decreased by 25%, decreased by 50%.
If you're watching this or listening to this, try to vote for yourself.
70% thought it had gotten worse.
And that's the general impression.
88% thought it had either gotten worse or remained the same.
Only 1% thought it decreased by 50%.
And that is the answer.
It actually decreased by 50%.
So only 1% of people got the right, optimistic answer about how poverty has changed.
Right. And this is the reality.
And it's true of almost everything you look at.
You don't want to go back 100 years or 50 years.
Things were quite miserable then, but we tend not to remember that.
So literacy rate increasing over the past few centuries across all the different nations,
nearly to 100% across many of the nations in the world.
It's gone way up. Average years of education have gone way up.
Life expectancy is also increasing.
Life expectancy was 48 in 1900.
And it's over 80 now.
And it's going to continue to go up, particularly as we get into more advanced stages of simulated biology.
For life expectancy, these trends are the same for at birth, age 1, age 5, age 10.
So it's not just the infant mortality.
And I have 50 more graphs in the book about all kinds of things.
Even spread of democracy, which might bring up some sort of controversial issues, it still has gone way up.
Well, that one has gone way up, but it's a bumpy road, right?
Exactly. And some countries that represent democracy might go backwards.
But we basically had no democracies before the creation of the United States,
which was a little over two centuries ago, which on the scale of human history isn't that long.
Do you think superintelligence systems will help with democracy?
So what is democracy?
Democracy is giving a voice to the populace and having their ideas, having their beliefs, having their views represented.
Well, I hope so.
I mean, we've seen social networks can spread conspiracy theories, which have been quite negative, for example, being against things that would help your health.
So those kinds of ideas spread on social media because, as you notice, they increase engagement; dramatic division increases engagement.
Do you worry about AI systems that will learn to maximize that division?
I mean, I do have some concerns about this.
And I have a chapter in the book about the perils of advanced AI.
Spreading misinformation on social networks is one of them, but there are many others.
What's the one that worries you the most that we should think about to try to avoid?
Well, it's hard to choose.
We do have the nuclear power that evolved when I was a child.
I remember we would actually do these drills for a nuclear attack. We'd get under our desks and put our hands behind our heads to protect ourselves from a nuclear war.
It seems to work. We're still around.
You're protected.
But that's still a concern. And there are key dangerous situations that can take place in biology.
Someone could create a virus that's very dangerous. I mean, we have viruses that are hard to spread, and they can be very dangerous.
And we have viruses that are easy to spread, but they're not so dangerous.
Somebody could create something that would be very easy to spread and very dangerous and be very hard to stop.
It could be something that would spread without people noticing because people could get it.
They'd have no symptoms and then everybody would get it.
And then symptoms would occur maybe a month later.
And that actually doesn't occur normally because if we were to have a problem with that, we wouldn't exist.
So the fact that humans exist means that we don't have viruses that can spread easily and kill us because otherwise we wouldn't exist.
And viruses don't want to do that. They want to spread and keep the host alive somewhat.
So you can describe various dangers with biology, also nanotechnology, which we actually haven't experienced yet, but there are people that are creating nanotechnology and described that in the book.
Now you're excited by the possibilities of nanotechnology, of nanobots, of being able to do things inside our body, inside our mind that's going to help.
What's exciting, what's terrifying about nanobots?
What's exciting is that that's a way to communicate with our neocortex, because the structures in the neocortex are pretty small and you need a small entity that can actually get in there and establish a communication channel.
And that's going to really be necessary to connect our brains to AI within ourselves because otherwise it would be hard for us to compete with it.
In a high bandwidth way.
Yeah. And that's key actually because a lot of the things like Neuralink are really not high bandwidth yet.
So nanobots is the way you achieve high bandwidth. How much intelligence would those nanobots have?
Yeah, they don't need a lot. Each one just needs enough to basically establish a communication channel.
So it's primarily about communication between external computing devices and our biological thinking machine.
What worries you about nanobots? Is it similar to the viruses?
Well, I mean, this is the gray goo challenge.
Yes.
If you had a nanobot that wanted to create some kind of entity and replicate itself, and was able to operate in a natural environment, it could turn everything into that entity and basically destroy all biological life.
So you mentioned nuclear weapons.
Yeah.
I'd love to hear your opinion about the 21st century and whether you think we might destroy ourselves. And maybe your opinion, if it has changed by looking at what's going on in Ukraine, that we could have a hot war with nuclear powers involved and the tensions building and a seeming forgetting
of how terrifying and destructive nuclear weapons are.
Do you think humans might destroy ourselves in the 21st century and if we do how? And how do we avoid it?
I don't think that's going to happen, despite the terrors of that war. It is a possibility, but I don't think so.
It's unlikely in your mind.
Yeah. Even with the tensions we've had with this one nuclear power plant that's been taken over, it's very tense.
But I don't actually see a lot of people worrying that that's going to happen. I think we'll avoid that.
We had two nuclear bombs go off in '45. So now we're 77 years later.
Yeah, we're doing pretty good.
We've never had another one go off through anger.
People forget. People forget the lessons of history.
Well, yeah. I mean, I am worried about it. I mean, that is definitely a challenge.
But you believe that we'll make it out and ultimately superintelligent AI will help us make it out as opposed to destroy us.
I think so. But we do have to be mindful of these dangers and there are other dangers besides nuclear weapons.
So to get back to merging with AI, we'd be able to upload our mind in a computer in a way where we might even transcend the constraints of our bodies.
So copy our mind into a computer and leave the body behind.
Let me describe one thing I've already done with my father.
That's a great story.
So we created a technology; this is public, it came out, I think, six years ago, where you could ask any question, and the released product, which I think is still on the market,
would read 200,000 books and then find the one sentence in those 200,000 books that best answered your question.
It's actually quite interesting. You can ask all kinds of questions and you get the best answer out of 200,000 books.
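A minimal sketch of that general kind of system, assuming a simple TF-IDF retrieval approach and a few made-up placeholder sentences (the actual product's method isn't specified in the conversation):

```python
# Hypothetical sketch: find the single sentence in a corpus that best answers
# a question, using TF-IDF cosine similarity as a stand-in for whatever
# retrieval method the real product used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Brahms is the most interesting composer because of his command of harmony.",
    "Music education develops discipline and creativity in children.",
    "The choral group rehearsed every Tuesday evening.",
]

def best_answer(question, corpus):
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(corpus + [question])
    # The last row is the question; compare it against every corpus sentence.
    scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
    return corpus[scores.argmax()]

print(best_answer("Who is the most interesting composer?", sentences))
```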
But I was also able to take it and not go through 200,000 books, but go through a book that I put together, which is basically everything my father had written.
So everything he had written, I had gathered and we created a book.
Everything that Fredric Kurzweil had written. Now, I didn't think this actually would work that well, because the stuff he had written was about things like how to lay things out.
I mean, he directed choral groups and music groups, and he would be laying out how people should sit, where they should sit, how to fund this, and all kinds of things that really didn't seem that interesting.
And yet, when you ask a question, it would go through it and it would actually give you a very good answer.
So I said, well, you know, who's the most interesting composer? And he said, well, definitely Brahms.
He would go on about how Brahms was fabulous and talk about the importance of music education.
So you could have an essential question and answer conversation.
I can have a conversation with him, which was actually more interesting than talking to him, because if you talked to him, he'd be concerned about how they were going to lay out the seating for a choral group.
He'd be concerned about the day-to-day versus the big questions.
Exactly, yeah.
And you did ask about the meaning of life and he answered love.
Yeah.
Do you miss him?
Yes, I do.
You know, you get used to missing somebody after 52 years and I didn't really have intelligent conversations with him until later in life.
In the last few years, he was sick, which meant he was home a lot and I was actually able to talk to him about different things like music and other things.
So I miss that very much.
What did you learn about life from your father?
What part of him is with you now?
He was devoted to music, and when he would create music, it would put him in a different world.
Otherwise, he was very shy and if people got together, he tended not to interact with people just because of his shyness.
But when he created music, he was like a different person.
Do you have that in you, that kind of light that shines?
I got involved with technology at like age five.
You fell in love with it in the same way he did with music?
Yeah.
I remember this actually happened with my grandmother.
She had a manual typewriter and she wrote a book, One Life is Not Enough, which is actually a good title for a book I might write.
And it was about a school she had created.
Well, actually, her mother created it.
So my mother's mother's mother created the school in 1868 and it was the first school in Europe that provided higher education for girls.
It went through 14th grade.
If you were a girl and you were lucky enough to get an education at all, it would go through like ninth grade.
And many people didn't have any education as a girl.
This went through 14th grade.
Her mother created it, she took it over and the book was about the history of the school and her involvement with it.
When she presented it to me, I was not so interested in the story of the school.
But I was totally amazed with this manual typewriter.
Here was something you could put a blank piece of paper into,
and you could type on it and turn it into something that looked like it came from a book.
It was just amazing to me.
And I could see actually how it worked.
And I was also interested in magic.
But in magic, if somebody actually knows how it works, the magic goes away.
The magic doesn't stay there if you actually understand how it works.
But this was technology. I didn't have that word when I was five or six.
And the magic was still there for you?
The magic was still there even if you knew how it worked.
So I became totally interested in this and then went around and collected little pieces of mechanical objects.
From bicycles, from broken radios, I would go through the neighborhood.
This was an era where you would allow five or six year olds to run through the neighborhood and do this.
We don't do that anymore.
But I didn't know how to put them together.
I said, if I could just figure out how to put these things together, I could solve any problem.
And I actually remember talking to these older girls.
I think they were 10.
And telling them, if I could just figure this out, we could fly, we could do anything.
And they said, well, you have quite an imagination.
And then when I was in third grade, so I was like eight, I created a virtual reality theater where people could come on stage and they could move their arms.
And all of it was controlled through one control box. It was all done with mechanical technology.
And it was a big hit in my third grade class.
And then I went on to do things in junior high school science fairs and high school science fairs.
I won the Westinghouse Science Talent Search.
So I mean, I became committed to technology when I was five or six years old.
You've talked about how you use lucid dreaming to think, to come up with ideas as a source of creativity.
Because you maybe talk through that, maybe the process of how to, you've invented a lot of things.
You've came up and thought through some very interesting ideas.
What advice would you give or can you speak to the process of thinking of how to think?
How to think creatively?
Well, I mean, sometimes I will think through in a dream and try to interpret that.
But I think the key issue that I would tell younger people is to put yourself in the position that what you're trying to create already exists.
And then you're explaining how it works.
Exactly.
That's really interesting.
You paint a world that you would like to exist.
You think it exists and reverse it.
And then you actually imagine you're giving a speech about how you created this.
Well, you'd have to then work backwards as to how you would create it in order to make it work.
That's brilliant. And that requires some imagination to some first principles thinking.
You have to visualize that world.
That's really interesting.
And generally, when I talk about things we're trying to invent, I would use the present tense as if it already exists.
Not just to give myself that confidence, but everybody else is working on it.
We just have to kind of do all the steps in order to make it actual.
How much of a good idea is about timing?
How much is it about your genius versus that its time has come?
Timing's very important.
I mean, that's really why I got into futurism.
I wasn't inherently a futurist; that wasn't really my goal.
It was really to figure out when things are feasible.
We see that now with large scale models.
Large-scale models like GPT-3 started two years ago.
Four years ago, it wasn't feasible.
They did create GPT-2, which didn't work.
So it required a certain amount of timing having to do with this exponential growth of computing power.
So futurism, in some sense, is a study of timing.
Trying to understand how the world will evolve and when will the capacity for certain ideas emerge.
It became a thing in itself then, to try to time things in the future.
But really, its original purpose was to time my products.
I mean, I did OCR in the 1970s because OCR doesn't require a lot of computation.
Optical character recognition.
Yeah, so we were able to do that in the 70s.
And I waited until the 80s to address speech recognition since it requires more computation.
So you were thinking through timing when you were developing those things, whether their time had come.
And that's how you developed that brain power to start to think in a futurist sense.
How will the world look in 2045, and then work backwards to how it gets there?
But that has to become a thing in itself because looking at what things will be like in the future reflects such dramatic changes in how humans will live.
That was worth communicating also.
So you developed that muscle of predicting the future and then applied broadly.
And started to discuss how it changes the world of technology, how it changes the world of human life on Earth.
In Danielle, one of your books, you write about someone who has the courage to question assumptions that limit human imagination to solve problems.
And you also give advice on how each of us can have this kind of courage.
Well, it's good that you picked that quote because I think that does symbolize what Danielle is about.
Courage. So how can each of us have that courage to question assumptions?
I mean, we see that when people can go beyond the current realm and create something that's new.
I mean, take Uber, for example, before that existed, you never thought that that would be feasible and it did require changes in the way people work.
Is there practical advice, as you give in the book, about what each of us can do to be a Danielle?
Well, she looks at the situation and tries to imagine how she can overcome various obstacles and then she goes for it.
And she's a very good communicator so she can communicate these ideas to other people.
And there's practical advice of learning to program and recording your life and things of this nature.
Become a physicist. So you list a bunch of different suggestions of how to throw yourself into this world.
Yeah. I mean, it's kind of an idea how young people can actually change the world by learning all of these different skills.
And at the core of that is the belief that you can change the world, that your mind, your body can change the world.
That's right.
And not letting anyone else tell you otherwise.
That's very good, exactly.
Going back to the story you told about your dad and having a conversation with him, we're talking about uploading your mind to a computer.
Do you think we'll have a future with something you call afterlife? We'll have avatars that mimic increasingly better and better our behavior, our appearance, all that kind of stuff.
Even when they are perhaps no longer with us.
Yes. I mean, we need some information about them.
I mean, think about my father. I have what he wrote. Now, he didn't have a word processor, so he didn't actually write that much.
And our memories of him aren't perfect. So how do you even know if you've created something that's satisfactory?
Now, you could do a Fredric Kurzweil Turing test. It seems like Fredric Kurzweil to me.
But the people who remember him, like me, don't have a perfect memory.
Is there such a thing as a perfect memory? Maybe the whole point is for him to make you feel a certain way.
Yeah. Well, I think that would be the goal.
And that's the connection we have with loved ones. It's not really based on very strict definition of truth.
It's more about the experiences we share, and they get morphed through memory.
But ultimately, they make us smile.
I think we definitely can do that. And that would be very worthwhile.
So do you think we'll have a world of replicants of copies?
There'll be a bunch of Ray Kurzweils.
Like I could hang out with one. I can download it for five bucks and have a best friend, Ray.
And you, the original copy, wouldn't even know about it.
First of all, do you think that world is feasible and do you think there's ethical challenges there?
How would you feel about me hanging out with Ray Kurzweil and you not knowing about it?
It doesn't strike me as a problem.
Would you, would that cause a problem for you?
No, I would really very much enjoy it.
No, not just hanging out with me, but if somebody were hanging out with a replicant of you.
Well, at first it sounds exciting, but then what if they start doing better than me and take over my friend group?
Because they may be an imperfect copy, or they may be more social, these kinds of things.
And then I become like the old version that's not nearly as exciting.
Maybe they're a copy of the best version of me on a good day.
But if you hang out with a replicant of me and that turned out to be successful, I'd feel proud of that person because it was based on me.
So it's, but it is a kind of death of this version of you.
Well, not necessarily. I mean, you can still be alive, right?
But, and you would be, okay, so it's like having kids and you're proud that they've done even more than you were able to do.
Yeah, exactly.
It does bring up new issues, but it seems like an opportunity.
Well, that replicant should probably have the same rights as you do.
Well, that gets into a whole issue because when a replicant occurs, they're not necessarily going to have your rights.
And if a replicant is created of somebody who's already dead, do they have all the obligations that the original person had?
Do they have all the agreements that they had?
So I think you're going to have to have laws that say yes.
There has to be, if you want to create a replicant, they have to have all the same rights as human rights.
Well, you don't know. Somebody can create a replicant and say, well, it's a replicant, but I didn't bother getting their rights.
But that would be illegal. I mean, like if you do that, you have to do that in the black market.
If you want to get an official replicant.
Okay, but it's not so easy. Suppose you create multiple replicants.
The original rights maybe were for one person and not for a whole group of people.
Sure. So there has to be at least one. And then all the other ones kind of share the rights.
Yeah, I just don't think that's very difficult for us humans to conceive of, the idea of a copy.
You could create a replicant that has certain rights. I mean, I've talked to people about this, including my wife, who would like to get back her father.
And she doesn't worry about who has rights to what.
She would have somebody that she could visit with and might give her some satisfaction.
And they wouldn't, she wouldn't care about any of these other rights.
What does your wife think about multiple Rays, as well? Have you had that discussion?
I think ultimately that's an important question: loved ones, how they feel about it. There's something about love.
That's the key thing, right? If the loved ones reject it, it's not going to work very well.
So the loved ones really are the key determinant whether or not this works or not.
But there's also ethical rules. We have to contend with the idea and we have to contend with that idea with AI.
But what's going to motivate it is, I mean, I talked to people who really miss people who are gone and they would love to get something back, even if it isn't perfect.
And that's what's going to motivate this.
And that person lives on in some form.
And the more data we have, the more we're able to reconstruct that person and allow them to live on.
And eventually as we go forward, we're going to have more and more of this data because we're going to have nanobots that are inside our neocortex and we're going to collect a lot of data.
In fact, anything that's data is always collected.
There is something a little bit sad, or maybe it's hopeful, which is becoming more and more common these days: when a person passes away, you still have their Twitter account, you know, and you have the last tweet they tweeted.
And you can recreate them now with large language models and so on. I mean, you can create somebody that's just like them and can actually continue to communicate.
I think that's really exciting because I think in some sense, like if I were to die today, in some sense, I would continue on if I continue tweeting.
I tweet, therefore I am.
Yeah, well, I mean, that's one of the advantages of a replicant. They can recreate the communications of that person.
Do you think, and do you hope, humans will become a multi-planetary species?
You've talked about the phases, the six epochs, and one of them is reaching out into the stars in part.
Yes, but the kind of attempts we're making now to go to all these planetary objects doesn't excite me that much because it's not really advancing anything.
It's not efficient enough.
Yeah, we're also sending out human beings, which is a very inefficient way to explore these other objects.
What I'm really talking about in the sixth epoch, the universe wakes up, it's where we can spread our super intelligence throughout the universe, and that doesn't mean sending very soft, squishy creatures like humans.
Yeah, the universe wakes up.
I mean, we would send intelligent masses of nanobots which can then go out and colonize these other parts of the universe.
Do you think there's intelligent alien civilizations out there that our bots might meet?
My hunch is no. Most people say yes, absolutely. The universe is too big.
And they'll cite the Drake equation, and I think in Singularity is near, I have two analyses of the Drake equation, both with very reasonable assumptions.
And one gives you thousands of advanced civilizations in each galaxy, and another one gives you one civilization, and we know of one.
A lot of the analyses are forgetting the exponential growth of computation. We've gone from where the fastest way I could send a message to somebody was with a pony, which was what, like a century and a half ago,
to the advanced civilization we have today. And if you accept what I've said, go forward a few decades and you have an absolutely fantastic amount of civilization compared to a pony, and that's in a couple hundred years.
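To make the two-analyses point concrete, here is a minimal sketch of the Drake equation. The parameter values are hypothetical placeholders chosen only to show how reasonable-sounding assumptions can yield either thousands of civilizations per galaxy or roughly one; they are not the figures used in The Singularity is Near.

```python
# A minimal sketch of the Drake equation. The parameter values are illustrative
# assumptions only, not the ones used in The Singularity is Near.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* * f_p * n_e * f_l * f_i * f_c * L: communicating civilizations per galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Generous assumptions: life, intelligence, and detectability are common, civilizations last long.
generous = drake(r_star=7, f_p=0.5, n_e=2, f_l=0.5, f_i=0.5, f_c=0.5, lifetime=10_000)

# Conservative assumptions: each step is rarer and detectable lifetimes are shorter.
conservative = drake(r_star=7, f_p=0.5, n_e=0.2, f_l=0.1, f_i=0.05, f_c=0.1, lifetime=3_000)

print(f"generous assumptions:     ~{generous:,.0f} civilizations per galaxy")      # thousands
print(f"conservative assumptions: ~{conservative:,.1f} civilizations per galaxy")  # roughly one
```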
Yeah, the speed and the scale of information transfer is just, is growing exponentially, in a blink of an eye.
Now think about these other civilizations. They're going to be spread out across cosmic time.
So if something is like ahead of us or behind us, it could be ahead of us or behind us by maybe millions of years, which isn't that much.
The universe is billions of years old, 14 billion or something. So even a thousand years matters: if two or three hundred years is enough to go from a pony to a fantastic amount of civilization, we would see that.
So of other civilizations that have occurred, some might be behind us, but some might be ahead of us. If they're ahead of us, they're ahead of us by thousands, millions of years, and they would be so far beyond us, they would be doing galaxy-wide engineering.
But we don't see anything doing galaxy-wide engineering.
So either they don't exist or this very universe is a construction of an alien species. We're living inside a video game.
Well, that's another explanation that, yes, you've got some teenage kids in other civilizations.
Do you find compelling the simulation hypothesis as a thought experiment that we're living in a simulation?
The universe is computational. So we are an example in a computational world. Therefore, it is a simulation.
It doesn't necessarily mean an experiment by some high school kid in another world, but it nonetheless is taking place in a computational world.
And everything that's going on is basically a form of computation.
So you really have to define what you mean by this whole world being a simulation.
Well, then it's the teenager that makes the video game. We humans, with our current limited cognitive capability, have strived to understand ourselves, and we have created religions.
We think of God. Whatever that is, do you think God exists? And if so, who is God?
I alluded to this before. We started out with lots of particles going around and there's nothing that represents love and creativity.
And somehow we've gotten into a world where love actually exists and that has to do actually with consciousness because you can't have love without consciousness.
So to me, that's God, the fact that we have something where love where you can be devoted to someone else and really feel the love that's God.
And if you look at the Old Testament, it was actually created by several different authors. And I think they've identified three of them.
One of them dealt with God as a person that you can make deals with, and he gets angry and he wreaks vengeance on various people.
But two of them actually talk about God as a symbol of love and peace and harmony and so forth.
That's how they describe God. So that's my view of God, not as a person in the sky that you can make deals with.
It's whatever the magic that goes from basic elements to things like consciousness and love.
One of the things I find extremely beautiful and powerful is cellular automata, which you also touch on.
Do you think whatever the heck happens in cellular automata where interesting, complicated objects emerge, God is in there too?
The emergence of love in this seemingly primitive universe?
Well, that's the goal of creating a replicant is that they would love you and you would love them.
There wouldn't be much point of doing it if that didn't happen.
But I guess what I'm saying about cellular automata is that they're primitive building blocks, and they somehow create beautiful things.
Is there some deep truth to that about how our universe works?
That from simple rules, beautiful, complex objects can emerge?
Is that the thing that made us as we went through all the six phases of reality?
That's a good way to look at it.
It gives some point to the whole value of having a universe.
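The "primitive building blocks" point can be made concrete in a few lines of code. The sketch below runs an elementary cellular automaton; Rule 30, the grid width, and the step count are arbitrary illustrative choices, not anything specified in the conversation.

```python
# A short sketch of an elementary cellular automaton. Rule 30, the width, and the
# number of steps are arbitrary illustrative choices; many rules show the same
# emergence of intricate structure from a trivial local update.

WIDTH, STEPS, RULE = 79, 40, 30

def step(cells, rule):
    """Apply an elementary cellular-automaton rule to one row (periodic boundary)."""
    new = []
    for i in range(len(cells)):
        left = cells[i - 1]
        center = cells[i]
        right = cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (center << 1) | right  # 3-bit index, 0..7
        new.append((rule >> neighborhood) & 1)               # rule bit selects next state
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single live cell
for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    row = step(row, RULE)
```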
Do you think about your own mortality? Are you afraid of it?
Yes, but I keep going back to my idea of being able to extend human life quickly enough, in advance of our getting there, longevity escape velocity, which we're not quite at yet.
But I think we're actually pretty close, particularly with, for example, doing simulated biology.
I think we can probably get there, say, by the end of this decade.
And that's my goal.
Do you hope to achieve longevity escape velocity? Do you hope to achieve immortality?
Well, immortality is hard to say. I can't really come on your program saying, I've done it.
I've achieved immortality because it's never forever.
A long time, a long time of living well.
But we'd like to actually advance human life expectancy, advance my life expectancy, by more than a year every year.
And I think we can get there by the end of this decade.
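One way to read "more than a year, every year" is as a simple arithmetic condition: if remaining life expectancy grows faster than time passes, the projected end of life keeps receding. The toy numbers below are made up purely for illustration; they are not medical claims or figures from the conversation.

```python
# Toy illustration of the longevity escape velocity idea. All numbers are
# made-up assumptions for the sketch, not medical projections.

age = 75.0            # hypothetical current age
remaining = 12.0      # hypothetical remaining life expectancy, in years
gain_per_year = 1.2   # assumed years of life expectancy gained per calendar year

for year in range(2025, 2040):
    print(f"{year}: age {age:.0f}, projected end of life at age {age + remaining:.1f}")
    age += 1
    remaining += gain_per_year - 1.0  # net change: one year lived, gain_per_year added
```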
How do you think we'd do it?
In your book Transcend: Nine Steps to Living Well Forever, you describe just that.
There's practical things like health, exercise, all those things.
I mean, we live in a body that doesn't last forever.
There's no reason why it can't, though.
And we're discovering things, I think, that will extend it.
But you do have to deal with, I mean, I've got various issues.
I went to Mexico 40 years ago and developed salmonella.
It created pancreatitis, which gave me a strange form of diabetes.
It's not type one diabetes, because that's an autoimmune disorder that destroys your pancreas.
I don't have that.
But it's also not type two diabetes, because with type two diabetes your pancreas works fine,
but your cells don't absorb the insulin well.
I don't have that either.
The pancreatitis I had partially damaged my pancreas, but it was a one time thing.
It didn't continue.
And I've learned now how to control it.
But so that's just something that I had to do in order to continue to exist.
So for your particular biological system, you had to figure out a few hacks,
and the idea is that science would be able to do that much better, actually.
Yeah.
So, I mean, I do spend a lot of time just tinkering with my own body to keep it going.
So, I do think I'll last till the end of this decade, and I think we'll achieve longevity escape velocity.
I think that we'll start with people who are very diligent about this.
Eventually, it'll become sort of routine that people will be able to do it.
So, if you're talking about kids today, or even people in their 20s or 30s,
it's really not a very serious problem.
I have had some discussions with relatives who were like almost 100,
and saying, well, we're working on it as quickly as possible,
but I don't know if that's going to work.
Is there a case, this is a difficult question,
but is there a case to be made against living forever
that a finite life, that mortality is a feature, not a bug,
that living a shorter...
So, dying makes ice cream taste delicious,
makes life intensely beautiful more than...
Most people believe that,
except when they're presented with the death of anybody they care about or love,
they find that extremely depressing.
And I know people who feel that way 20, 30, 40 years later,
they still want them back.
So, I mean, death is not something to celebrate,
but we've lived in a world where people just accept this.
Life is short. You see it all the time on TV.
It's short. You have to take advantage of it.
And nobody accepts the fact that you could actually go beyond normal lifetimes.
But any time we talk about death or a death of a person,
even one death is a terrible tragedy.
If you have somebody that lives to 200 years old,
we still love them, and they love us in return.
And there's no limitation to that.
In fact, these kinds of trends are going to provide
greater and greater opportunity for everybody,
even if we have more people.
So, let me ask about an alien species or a superintelligent AI,
500 years from now, that will look back.
And remember Ray Kurzweil, version zero,
before the replicants spread.
How do you hope they'll remember you in a Hitchhiker's Guide to the Galaxy
summary of Ray Kurzweil? What do you hope your legacy is?
Well, I mean, I do hope to be around.
Some version of you, yes.
Do you think you'll be the same person around?
I mean, am I the same person I was when I was 20 or 10?
You would be the same person in that same way,
but yes, we're different.
All you have of that person is your memories,
which are probably distorted in some way.
Maybe you just remember the good parts, depending on your psyche.
You might focus on the bad parts,
might focus on the good parts.
Right, but I mean, I still have a relationship to the way I was when I was younger.
How will you and the other superintelligent AIs, 500 years from now, remember the you of today?
What do you hope to be remembered by this version of you,
before the singularity?
Well, I think it's expressed well in my books,
trying to create some new realities that people will accept.
I mean, that's something that gives me great pleasure
and greater insight into what makes humans valuable.
I'm not the only person who's tempted to comment on that.
And the optimism that permeates your work, optimism about the future,
ultimately that optimism paves the way for building a better future.
I agree with that.
So you asked your dad about the meaning of life and he said,
love, let me ask you the same question.
What's the meaning of life?
Why are we here, on this beautiful journey that we're on in Phase 4,
reaching for Phase 5 of this evolution of information processing? Why?
I think I'd give the same answers as my father.
Because if there was no love and we didn't care about anybody,
there'd be no point existing.
Love is the meaning of life.
The AI version of your dad had a good point.
Well, I think that's a beautiful way to end it, right?
Thank you for your work. Thank you for being who you are.
Thank you for dreaming about a beautiful future and creating it along the way.
And thank you so much for spending a really valuable time with me today.
This was awesome.
Well, it was my pleasure and you have some great insights both into me and into humanity as well.
So I appreciate that.
Thanks for listening to this conversation with Ray Kurzweil.
To support this podcast, please check out our sponsors in the description.
And now, let me leave you with some words from Isaac Asimov.
It is change, continuous change, inevitable change that is the dominant factor in society today.
No sensible decision could be made any longer without taking into account not only the world as it is,
but the world as it will be.
This in turn means that our statesmen, our businessmen, our every man
must take on a science-fictional way of thinking.
Thank you for listening and hope to see you next time.