The following is a conversation with Ben Goertzel,
one of the most interesting minds
in the artificial intelligence community.
He's the founder of SingularityNET,
designer of the OpenCog AI framework,
formerly a director of research
at the Machine Intelligence Research Institute,
and chief scientist of Hanson Robotics,
the company that created the Sophia robot.
He has been a central figure in the AGI community
for many years, including his organizing of
and contributing to the Conference
on Artificial General Intelligence,
the 2020 version of which is actually happening this week,
Wednesday, Thursday and Friday.
It's virtual and free.
I encourage you to check out the talks
including by Joscha Bach from episode 101 of this podcast.
Quick summary of the ads, two sponsors,
The Jordan Harbinger Show and Masterclass.
Please consider supporting this podcast
by going to jordanharbinger.com slash lex
and signing up at masterclass.com slash lex.
Click the links, buy all the stuff.
It's the best way to support this podcast
and the journey I'm on in my research and startup.
This is the artificial intelligence podcast.
If you enjoy it, subscribe on YouTube,
review it with five stars on Apple Podcasts,
support it on Patreon or connect with me on Twitter
at Lex Fridman, spelled without the E, just F-R-I-D-M-A-N.
As usual, I'll do a few minutes of ads now
and never any ads in the middle
that can break the flow of the conversation.
This episode is supported by The Jordan Harbinger Show.
Go to jordanharbinger.com slash lex.
It's how he knows I sent you.
On that page, there's links to subscribe to it,
on Apple podcast, Spotify and everywhere else.
I've been binging on his podcast.
Jordan is great.
He gets the best out of his guests,
dives deep, calls them out when it's needed
and makes the whole thing fun to listen to.
He's interviewed Kobe Bryant, Mark Cuban,
Neil deGrasse Tyson, Garry Kasparov, and many more.
This conversation with Kobe is a reminder
how much focus and hard work is required for greatness
in sport, business and life.
I highly recommend the episode if you want to be inspired.
Again, go to jordanharbinger.com slash lex.
It's how Jordan knows I sent you.
This show is also sponsored by Masterclass.
Sign up at masterclass.com slash lex
to get a discount and to support this podcast.
When I first heard about Masterclass,
I thought it was too good to be true.
For 180 bucks a year,
you get an all-access pass to watch courses from,
to list some of my favorites,
Chris Hadfield on Space Exploration,
Neil deGrasse Tyson on Scientific Thinking
and Communication, Will Wright,
creator of the greatest city building game ever,
SimCity and The Sims, on game design,
Carlos Santana on guitar, Garry Kasparov,
the greatest chess player ever, on chess,
Daniel Negreanu on poker, and many more.
Chris Hadfield explaining how rockets work
and the experience of being launched into space alone
is worth the money.
Once again, sign up at masterclass.com slash lex
to get a discount and to support this podcast.
And now here's my conversation
with Ben Goertzel.
What books, authors, ideas had a lot of impact on you
in your life in the early days?
You know, what got me into AI and science fiction
and such in the first place wasn't a book,
but the original Star Trek TV show,
which my dad watched with me like in its first run,
it would have been 1968, 69 or something.
And that was incredible because every show
visited a different alien civilization
with a different culture and weird mechanisms.
But that got me into science fiction
and there wasn't that much science fiction
to watch on TV at that stage.
So that got me into reading the whole literature
of science fiction from the beginning
of the previous century until that time.
And I mean, there was so many science fiction writers
who were inspirational to me.
I'd say if I had to pick two,
it would have been Stanislaw Lem, the Polish writer.
Yeah, Solaris.
And then he had a bunch of more obscure writings
on superhuman AIs that were engineered.
Solaris was sort of a superhuman,
naturally occurring intelligence.
Then Philip K. Dick, who ultimately my fandom
for Philip K. Dick is one of the things
that brought me together with David Hansen,
my collaborator on robotics projects.
So Stanislaw Lem was very much an intellectual, right?
So he had a very broad view of intelligence
going beyond the human and into what I would call
open-ended superintelligence.
The Solaris superintelligent ocean was intelligent
in some ways more generally intelligent than people,
but in a complex and confusing way
so that human beings could never quite connect to it.
But it was still probably very, very smart.
And then the Golem IV supercomputer
in one of Lem's books, this was engineered by people,
but eventually it became very intelligent
in a different direction than humans
and decided that humans were kind of trivial
and not that interesting.
So it put some impenetrable shield around itself,
shut itself off from humanity
and then issued some philosophical screed
about the pathetic and hopeless nature of humanity
and all human thought and then disappeared.
Now, Philip K. Dick, he was a bit different.
He was human focused, right?
His main thing was human compassion
and the human heart and soul are going to be the constant
that will keep us going through whatever aliens we discover
or telepathy machines or super AIs or whatever it might be.
So he didn't believe in reality,
like the reality that we see may be a simulation
or a dream or something else we can't even comprehend,
but he believed in love and compassion
as something persistent through the various simulated realities.
So those two science fiction writers
had a huge impact on me.
Then a little older than that,
I got into Dostoevsky and Friedrich Nietzsche and Rimbaud
and a bunch of more literary-type writing.
Can we talk about some of those things?
So on the Solaris side, Stanislaw Lem,
this kind of idea of there being intelligences out there
that are different than our own,
do you think their intelligences may be all around us
that we're not able to even detect?
So this kind of idea of maybe you can comment also
on Stephen Wolfram thinking that there's computations
all around us and we're just not smart enough
to kind of detect their intelligence
or appreciate their intelligence.
Yeah, so my friend Hugo de Garis,
who I've been talking to about these things
for many decades since the early 90s,
he had an idea he called SIPI,
the search for intra-particulate intelligence.
So the concept there was as AIs get smarter
and smarter and smarter,
assuming the laws of physics as we know them now
are still what these superintelligences
perceive and are bound by.
As they get smarter and smarter,
they're gonna shrink themselves littler and littler
because special relativity makes it
slow to communicate
between two spatially distant points.
So they're gonna get smaller and smaller,
but then ultimately what does that mean?
The minds of the super, super, super intelligences,
they're gonna be packed into the interaction
of elementary particles or quarks
or the partons inside quarks or whatever it is.
So what we perceive as random fluctuations
on the quantum or subquantum level
may actually be the thoughts
of the micro, micro, micro, miniaturized super intelligences.
Because there's no way we can tell random from structured
when the structure has an algorithmic information content
more complex than our brains, right?
We can't tell the difference.
So what we think is random
could be the thought processes
of some really tiny super minds.
And if so, there's not a damn thing we can do about it,
except try to upgrade our intelligences
and expand our minds
so that we can perceive more of what's around us.
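A technical aside: the formal core of this point is that Kolmogorov complexity is uncomputable, so no test can certify that a signal is truly random rather than the output of a program more complex than the observer. Compression is a crude but computable proxy for detectable structure; here is a minimal sketch in Python, purely illustrative and not anything from the conversation itself:

```python
import os
import zlib

def compressibility(data: bytes) -> float:
    """Compressed size over original size; values near 1.0 'look random'."""
    return len(zlib.compress(data, 9)) / len(data)

# Structured data compresses well; data whose structure we cannot detect does not.
print(compressibility(b"ab" * 5000))        # well below 1.0: obvious structure
print(compressibility(os.urandom(10000)))   # near 1.0: no structure we can find
```

The catch, which is the point being made here, is that a stream can pass every such test and still be the deterministic output of a mind more algorithmically complex than our own.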
But if those random fluctuations,
like even if we go to like quantum mechanics,
if that's actually super intelligent systems,
aren't we then part of the soup of super intelligence?
So we're, aren't we just like a finger
of the entirety of the body of the super intelligent system?
It could be, I mean, a finger is a strange metaphor.
I mean, we...
A finger is dumb is what I mean.
But a finger is also useful
and is controlled with intent by the brain,
whereas we may be much less than that, right?
I mean, yeah, we may be just some random epiphenomenon
that they don't care about too much.
Like think about the shape of the crowd
emanating from a sports stadium or something, right?
There's some emergent shape to the crowd.
It's there.
You could take a picture of it.
It's kind of cool.
It's irrelevant to the main point of the sports event
or where the people are going
or what's on the minds of the people
making that shape in the crowd, right?
So we may just be some semi-arbitrary,
higher level pattern popping out
of a lower level hyper intelligent self-organization.
And I mean, so be it, right?
I mean, that's one thing that-
Still a fun ride.
Yeah, I mean, the older I've gotten,
the more respect I've gained
for our fundamental ignorance.
I mean, mine and everybody else's.
I mean, I look at my two dogs,
two beautiful little toy poodles
and they watch me sitting at the computer typing.
They just think I'm sitting there wiggling my fingers,
maybe to exercise, or guarding the monitor on the desk.
They have no idea that I'm communicating
with other people halfway around the world,
let alone, you know, creating complex algorithms
running in RAM on some computer server
in St. Petersburg or something, right?
That although they're right there,
they're right there in the room with me.
So what things are there right around us
that were just too stupid or closed minded
to comprehend probably quite a lot?
Your very poodle could also be communicating
across multiple dimensions with other beings
and you're too unintelligent to understand
the kind of communication mechanism they're going through.
There have been various TV shows and science fiction novels
positing that cats, dolphins, mice and whatnot
are actually superintelligences here to observe us.
I would guess, as one or another of the quantum physics founders
said, those theories are not crazy enough to be true.
The reality is probably crazier than that.
Beautifully put.
So on the human side with Philip K. Dick
and in general, where do you fall on this idea
that love and just the basic spirit of human nature
persists throughout these multiple realities?
Are you on the side?
Like the thing that inspires you
about artificial intelligence is it the human side
of somehow persisting through all of the different systems
we engineer or is AI inspire you to create something
that's greater than human, that's beyond human,
that's almost non-human?
I would say my motivation to create AGI
comes from both of those directions actually.
So when I first became passionate about AGI
when I was, it would have been two or three years old
after watching robots on Star Trek,
I mean, then it was really a combination of intellectual
curiosity, like can a machine really think,
how would you do that?
And yeah, just ambition to create something much better
than all the clearly limited and fundamentally defective
humans I saw around me.
Then as I got older and got more enmeshed in the human world
and got married, had children,
and saw my parents begin to age, I started to realize,
well, not only will AGI let you go far beyond
the limitations of the human,
but it could also stop us from dying and suffering
and feeling pain and tormenting ourselves mentally.
So you can see AGI has amazing capability to do good
for humans, as humans, alongside with its capability
to go far, far beyond the human level.
So I mean, both aspects are there,
which makes it even more exciting and important.
So you mentioned Dostoevsky and Nietzsche,
where did you pick up from those guys?
I mean.
That would probably go beyond the scope
of a brief interview, certainly.
But both of those are amazing thinkers
who one will necessarily have a complex relationship with,
right, so I mean Dostoevsky on the minus side,
he's kind of a religious fanatic
and he sort of helped squash the Russian nihilist movement,
which was very interesting,
because what nihilism meant originally
in that period of the mid-late 1800s in Russia
was not taking anything fully 100% for granted.
It was really more like what we'd call Bayesianism now,
where you don't want to adopt anything
as a dogmatic certitude and always leave your mind open
and how Dostoevsky parodied nihilism
was a bit different, right?
What he parodied is people who believe absolutely nothing.
So they must assign an equal probability weight
to every proposition, which doesn't really work.
So on the one hand, I didn't really agree with Dostoevsky
on his sort of religious point of view.
On the other hand, if you look at his understanding
of human nature and sort of the human mind
and heart and soul, it's really unparalleled.
He had an amazing view of how human beings
construct a world for themselves
based on their own understanding
and their own mental predisposition.
And I think if you look at The Brothers Karamazov
in particular, the Russian literary theorist,
Mikhail Bakhtin, wrote about this
as a polyphonic mode of fiction,
which means it's not third person,
but it's not first person from any one person really.
There are many different characters in the novel
and each of them is sort of telling part of the story
from their own point of view.
So the reality of the whole story is an intersection,
like, synergetically, of the many different characters'
worldviews, and that really, it's a beautiful metaphor
and even a reflection, I think,
of how all of us socially create our reality.
Like each of us sees the world in a certain way.
Each of us, in a sense, is making the world as we see it
based on our own minds and understanding,
but it's polyphony, like in music,
where multiple instruments are coming together
to create the sound.
The ultimate reality that's created
comes out of each of our subjective understandings,
intersecting with each other.
And that was one of the many beautiful things
in Dostoevsky.
So maybe a little bit to mention,
you have a connection to Russia and the Soviet culture.
I mean, I'm not sure exactly what the nature of the connection is,
but there's at least the spirit
of your thinking in there.
Well, my ancestry is three quarters Eastern European Jewish.
So I mean, three of my great grandparents emigrated
to New York from Lithuania and sort of border regions
of Poland, which are in and out of Poland,
in around the time of World War I.
And they were socialists and communists as well as Jews,
mostly Menshevik, not Bolshevik.
And they sort of, they fled at just the right time to the US
for their own personal reasons.
And then almost all or maybe all of my extended family
that remained in Eastern Europe was killed
either by Hitler's or Stalin's minions at some point.
So the branch of the family that emigrated to the US
was pretty much the only one.
So how much of the spirit of the people is in your blood still?
Like, when you look in the mirror, do you see, what do you see?
Meat.
I see a bag of meat that I want to transcend
by uploading into some sort of superior reality.
But very, I mean, yeah, very clearly.
Well put.
I mean, I'm not religious in a traditional sense,
but clearly the Eastern European Jewish tradition
was what I was raised in.
I mean, there was, my grandfather Leo as well
was a physical chemist who worked with Linus Pauling
and a bunch of the other early greats in quantum mechanics.
I mean, he was into X-ray diffraction.
He was on the material science side,
experimentalist rather than a theorist.
His sister was also a physicist.
My father's father, Victor Goertzel, was a PhD in psychology
who had the unenviable job of giving psychotherapy
to the Japanese in internment camps in the US
in World War II, like to counsel them
why they shouldn't kill themselves,
even though they'd had all their stuff taken away
and been imprisoned for no good reason.
So I mean, there's a lot of Eastern European Jewish tradition
in my background.
One of my great uncles was, I guess,
a conductor of the San Francisco Orchestra,
so there was a bunch of music in there also,
and clearly this culture was all about learning
and understanding the world
and also not quite taking yourself too seriously
while you do it, right?
There's a lot of Yiddish humor in there.
So I do appreciate that culture,
although the whole idea that,
like the Jews are the chosen people of God,
never resonated with me too much.
The graph of the Goertzel family,
I mean, just the people I've encountered
just doing some research and just knowing your work
through the decades, it's kind of fascinating.
I'm just the number of PhDs.
Yeah, yeah, I mean, my dad is a sociology professor
who recently retired from Rutgers University,
but clearly that gave me a head start in life.
I mean, my grandfather gave me all his quantum mechanics books
when I was like seven or eight years old, you know?
I remember going through them
and it was all the old quantum mechanics,
like Rutherford atoms and stuff.
So I got to the part of wave functions,
which I didn't understand, although I was a very bright kid.
And I realized he didn't quite understand it either,
but at least, like he pointed me to some professor
he knew at UPenn nearby who understood these things, right?
So that's an unusual opportunity for a kid to have, right?
And my dad, he was programming FORTRAN
when I was 10 or 11 years old on like HP 3000,
the mainframes at Rutgers University.
So I got to do linear regression in FORTRAN on punch cards
when I was in middle school, right?
Because he was doing, I guess, analysis of demographic
and sociology data.
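For readers who never met punch-card statistics: a linear regression job of that era reduces to a few accumulated sums. Here is a minimal sketch in Python of the same least-squares computation; illustrative only, not Goertzel's actual FORTRAN:

```python
def linear_regression(xs, ys):
    """Ordinary least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Example: noisy data around y = 2x + 1
print(linear_regression([1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8]))
```

The same handful of sums, punched onto cards and fed through a FORTRAN compiler, is what a 1970s batch job would have computed.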
So yes, certainly that gave me a head start
and a push towards science beyond what would have been
the case with many, many different situations.
When did you first fall in love with AI?
Is it the programming side of FORTRAN?
Is it maybe the sociology psychology
that you picked up from your dad?
I fell in love with AI when I was probably three years old
when I saw a robot on Star Trek.
It was turning around in a circle going error, error,
error, error, error, because Spock and Kirk had tricked it
into a mechanical breakdown by presenting it
with a logical paradox.
And I was just like, well, this makes no sense.
This AI is very, very smart.
It's been traveling all around the universe,
but these people could trick it
with a simple logical paradox.
Like what if, you know, if the human brain
can get beyond that paradox, why can't this AI?
So I felt the screenwriters of Star Trek
had misunderstood the nature of intelligence.
And I complained to my dad about it,
and he wasn't gonna say anything one way or the other.
But, you know, before I was born,
when my dad was at Antioch College
in the middle of the US,
he led a protest movement called SLAM,
Student League Against Mortality.
They were protesting against death,
wandering across the campus.
So he was into some futuristic things even back then,
but whether AI could confront logical paradoxes
or not, he didn't know.
But, you know, 10 years after that
or something, I discovered Douglas Hofstadter's book,
Gödel, Escher, Bach, and that was sort of
to the same point of AI and paradox and logic, right?
Because he goes over and over
Gödel's incompleteness theorem.
And can an AI really fully model itself reflexively,
or does that lead you into some paradox?
Can the human mind truly model itself reflexively,
or does that lead you into some paradox?
So I think that book, Gödel, Escher, Bach,
which I read when it first came out,
I would have been 12 years old or something.
I remember it was like a 16-hour day,
I read it cover to cover, and then re-read it.
Oh, really?
I re-read it after that,
because there were a lot of weird things
with little formal systems in there
that were hard for me at the time.
But that was the first book I read
that gave me a feeling for AI
as like a practical academic or engineering discipline
that people were working in.
Because before I read Gödel, Escher, Bach,
I was into AI from the point of view
of a science fiction fan.
And I had the idea, well, it may be a long time
before we can achieve immortality and superhuman AGI.
So I should figure out how to build a spacecraft
traveling close to the speed of light,
go far away, then come back to the Earth in a million years
when technology is more advanced
and we can build these things.
Reading Gödel, Escher, Bach,
well, it didn't all ring true to me, though a lot of it did,
but I could see like there are smart people right now
at various universities around me
who are actually trying to work on building
what I would now call AGI,
although Hofstadter didn't call it that.
So really it was when I read that book,
which would have been probably middle school,
that then I started to think,
well, this is something that I could practically work on.
Yeah, as opposed to flying away and waiting it out.
You can actually be one of the people
that actually builds this system.
Yeah, exactly.
And if you think about, I mean,
I was interested in what we'd now call nanotechnology
and in human immortality and time travel,
all the same cool things as every other
like science fiction loving kid.
But AI seemed like, if Hofstadter was right,
you just figure out the right program, sit there and type it.
Like you don't need to rearrange stars
into weird configurations
or get government approval to cut people up
and fiddle with their DNA or something, right?
It's just programming.
And then of course that AI can achieve anything else.
There's another book from back then,
which was by Gerald Feinberg,
who was a physicist at Princeton.
And that was The Prometheus Project.
And this book was written in the late 1960s,
though I encountered it in the mid-70s.
But what this book said is in the next few decades,
humanity is gonna create superhuman thinking machines,
molecular nanotechnology and human immortality.
And then the challenge we'll have is what to do with it?
Do we use it to expand human consciousness
in a positive direction?
Or do we use it just to further vapid consumerism?
And what he proposed was that the UN
should do a survey on this.
And the UN should send people out to every little village
in remotest Africa or South America
and explain to everyone what technology
was gonna bring the next few decades
and the choice that we had about how to use it
and let everyone on the whole planet vote
about whether we should develop, you know,
super AI nanotechnology and immortality
for expanded consciousness or for rampant consumerism.
And needless to say, that didn't quite happen.
And I think this guy died in the mid-80s,
so he didn't even see his ideas start
to become more mainstream.
But it's interesting, many of the themes I'm engaged with now
from AGI and immortality,
even to trying to democratize technology
as I've been pushing forward with SingularityNET
in my work in the blockchain world,
many of these themes were there in, you know,
Feinberg's book in the late 60s even.
And of course, Valentin Turchin,
a Russian writer and a great Russian physicist,
who I got to know when we both lived in New York
in the late 90s and early aughts.
I mean, he had a book in the late 60s in Russia,
which was The Phenomenon of Science,
which laid out all these same things as well.
And Val died in, I don't remember,
2004, 2005 or something, of Parkinsonism.
So, yeah, it's easy for people to lose track now
of the fact that the futurist and the singularitarian
advanced technology ideas that are now almost mainstream
and are on TV all the time.
I mean, these are not that new, right?
They're sort of new in the history of the human species.
But I mean, these were all around in fairly mature form
in the middle of the last century,
were written about quite articulately
by fairly mainstream people who were professors
at top universities.
It's just until the enabling technologies got
to a certain point, then you couldn't make it real.
So, and even in the 70s, I was sort of seeing that
and living through it, right?
From Star Trek to Douglas Hofstadter,
things were getting very, very practical
from the late 60s to the late 70s.
And the first computer I bought,
you could only program with hexadecimal machine code
and you had to solder it together.
And then like a few years later, there's punch cards.
And a few years later, you could get like Atari 400
and Commodore VIC-20 and you could type on a keyboard
and program in higher level languages
alongside the assembly language.
So, these ideas have been building up a while
and I guess my generation got to feel them build up
which is different than people coming into the field now
for whom these things have just been part
of the ambiance of culture for their whole career
or even their whole life.
Well, it's fascinating to think about,
there being all of these ideas kind of swimming
almost with the noise all around the world,
all the different generations.
And then some kind of non-linear thing happens
where they percolate up
and capture the imagination of the mainstream.
And that seems to be what's happening with AI now.
I mean, Nietzsche who you mentioned
had the idea of the Superman, right?
But he didn't understand enough about technology
to think you could physically engineer a Superman
by piecing together molecules in a certain way.
He was a bit vague about how the Superman would appear,
but he was quite deep at thinking about
what the state of consciousness
and the mode of cognition of a Superman would be.
He was a very astute analyst of how the human mind
constructs the illusion of a self,
how it constructs the illusion of free will,
how it constructs values like good and evil
out of its own desire to maintain
and advance its own organism.
He understood a lot about how human minds work.
Then he understood a lot
about how post-human minds would work.
I mean, the Superman was supposed to be a mind
that would basically have complete root access
to its own brain and consciousness
and be able to architect its own value system
and inspect and fine tune all of its own biases.
So that's a lot of powerful thinking there,
which then fed in and sort of seeded all
of post-modern continental philosophy
and all sorts of things have been very valuable
in development of culture and indirectly even of technology.
But of course, without the technology there,
it was all some quite abstract thinking.
So now we're at a time in history
when a lot of these ideas can be made real,
which is amazing and scary, right?
It's kind of interesting to think,
what do you think Nietzsche would do if he was born
a century later or transported through time?
What do you think he would say about AI?
I mean-
Well, those are quite different.
If he's born a century later
or transported through time.
Well, he'd be on like TikTok and Instagram
and he would never write the great works he's written.
So let's transport him through time.
Maybe Also sprach Zarathustra
would be a music video, right?
I mean, who knows?
Yeah, but if he was transported through time,
do you think, that'd be interesting actually to go back.
You just made me realize that it's possible
to go back and read Nietzsche with an eye of,
is there some thinking about artificial beings?
I'm sure that he had inklings.
I mean, with Frankenstein before him,
I'm sure he had inklings of artificial beings
somewhere in the text.
It'd be interesting to try to read his work
to see if Superman was actually an AGI system.
Like if he had inklings of that kind of thinking.
He didn't.
He didn't.
No, I would say not.
I mean, he had a lot of inklings
of modern cognitive science, which are very interesting.
If you look in like the third part of the collection
that's been titled The Will to Power,
I mean, in book three there,
there's very deep analysis of thinking processes,
but he wasn't able to go that far.
I mean, he had inklings of these processes,
but he wasn't so much of a physical tinkerer type guy.
He was very abstract.
Do you think, what do you think about The Will to Power?
Do you think human, what do you think drives humans?
Is it?
Oh, an unholy mix of things.
I don't think there's one pure, simple,
and elegant objective function driving humans
by any means.
If we look at, I know,
it's hard to look at humans in an aggregate,
but do you think overall humans are good,
or do we have both good and evil within us
that depending on the circumstances,
depending on the whatever can percolate to the top?
Good and evil are very ambiguous, complicated,
and in some ways, silly concepts.
But we could dig into your question
from a couple of directions.
So I think if you look at evolution,
humanity is shaped both by individual selection
and what biologists will call group selection,
like tribe level selection, right?
So individual selection has driven us
in a selfish DNA sort of way,
so that each of us does to a certain approximation
what will help us propagate our DNA to future generations.
I mean, that's why I've got to have four kids so far
and probably that's not the last one.
On the other hand.
I like the ambition.
Tribal, like group selection means humans in a way
will do what will advance the persistence
of the DNA of their whole tribe or their social group.
And in biology, you have both of these, right?
And you can see, say, in an ant colony or beehive,
there's a lot of group selection
in the evolution of those social animals.
On the other hand, say a big cat
or some very solitary animal,
it's a lot more biased toward individual selection.
Humans are an interesting balance.
And I think this reflects itself
in what we would view as selfishness
versus altruism to some extent.
So we just have both of those objective functions
contributing to the makeup of our brains
and then as Nietzsche analyzed in his own way
and others have analyzed in different ways.
I mean, we abstract this as well.
We have both good and evil within us, right?
Cause a lot of what we view as evil
is really just selfishness.
A lot of what we view as good is altruism,
which means doing what's good for the tribe.
And on that level, we have both of those just baked into us.
And that's how it is.
Of course, there are psychopaths and sociopaths
and people who get gratified by the suffering of others.
And that's a different thing.
Yeah, those are exceptions.
But I think at core, we're not purely selfish.
We're not purely altruistic.
We are a mix and that's the nature of it.
And we also have a complex constellation of values
that are just very specific to our evolutionary history.
Like we love waterways and mountains
and the ideal place to put a house
is in a mountain overlooking the water, right?
And we care a lot about our kids
and we care a little less about our cousins
and even less about our fifth cousins.
I mean, there are many particularities to human values
which whether they're good or evil
depends on your perspective.
Like, say, I spent a lot of time in Ethiopia,
in Addis Ababa, where we have one of our AI development offices
for my SingularityNET project.
And when I walked through the streets in Addis,
there's people lying by the side of the road,
like just living there by the side of the road,
dying probably of curable diseases
without enough food or medicine.
And when I walk by them, you know, I feel terrible.
I give them money.
When I come back home to the developed world,
they're not on my mind that much.
I do donate some, but I mean,
I also spend some of the limited money I have
enjoying myself in frivolous ways
rather than donating it to those people who are right now,
like starving, dying and suffering on the roadside.
So does that make me evil?
I mean, it makes me somewhat selfish
and somewhat altruistic.
And we each balance that in our own way, right?
So that's, whether that will be true of all possible AGI's
is a subtler question, but that's how humans are.
So you have a sense, you kind of mentioned
that there's a selfish, I'm not gonna bring up
the whole Ayn Rand idea of selfishness
being the core virtue.
That's a whole interesting kind of tangent
that I think we'll just distract ourselves on.
I have to make one amusing comment.
Sure.
Or comment that has amused me anyway.
So the, yeah, I have extraordinary negative respect
for Ayn Rand.
Negative, what's a negative respect?
But when I worked with a company called Genescient,
which was evolving flies to have extraordinarily long lives,
in Southern California.
So we had flies that were evolved by artificial selection
to have five times the lifespan of normal fruit flies.
But the population of super long-lived flies
was physically sitting in a spare room
at an Ayn Rand elementary school in Southern California.
So that was just like, well, if I saw this in a movie,
I wouldn't believe it, right?
Well, yeah, the universe has a sense of humor
in that kind of way.
Humor fits in somehow into this whole absurd existence.
But you mentioned the balance between selfishness
and altruism as kind of being innate.
Do you think it's possible
that's kind of an emergent phenomenon,
those peculiarities of our value system?
How much of it is innate?
How much of it is something we collectively,
kind of like in a Dostoevsky novel,
bring to life together as a civilization?
I mean, the answer to nature versus nurture is usually both.
And of course, it's nature versus nurture
versus self-organization, as you mentioned.
So clearly, there are evolutionary roots
to individual and group selection
leading to a mix of selfishness and altruism.
On the other hand, different cultures manifest that
in different ways,
while we all have basically the same biology.
And if you look at sort of pre-civilized cultures,
you have tribes like the Yanomamo in Venezuela,
which their culture is focused on killing other tribes.
And you have other Stone Age tribes
that are mostly peaceable
and have big taboos against violence.
So you can certainly have a big difference
in how culture manifests these innate biological characteristics.
But still, there's probably limits
that are given by our biology.
I used to argue this with my great-grandparents,
who were Marxists, actually,
because they believed in the withering away of the state.
They believed that as you move from capitalism
to socialism to communism,
people would just become more social-minded
so that a state would be unnecessary.
And people would just give,
everyone would give everyone else what they needed.
Setting aside that that's not what the various Marxist experiments
on the planet seem to be heading toward in practice,
just as a theoretical point,
I was very dubious that human nature could go there.
At that time, when my great-grandparents were alive,
I was just like, you know, I'm a cynical teenager.
I think humans are just jerks.
The state is not going to wither away.
If you don't have some structure
keeping people from screwing each other over,
they're going to do it.
So now I actually don't quite see things that way.
I mean, I think my feeling now, subjectively,
is the culture aspect is more significant
than I thought it was when I was a teenager.
And I think you could have a human society
that was dialed dramatically further toward, you know,
self-awareness, other-awareness, compassion,
and sharing than our current society.
And of course, greater material abundance helps.
But to some extent, material abundance
is a subjective perception also
because many Stone Age cultures
perceive themselves as living in great material abundance.
They had all the food and water they wanted.
They lived in a beautiful place.
They had sex lives.
They had children.
I mean, they had abundance without any factories, right?
So I think humanity probably would be capable
of a fundamentally more positive
and joy-filled mode of social existence
than what we have now.
Clearly, Marx didn't quite have the right idea
about how to get there.
I mean, he missed a number of key aspects
of human society and its evolution.
And if we look at where we are in society now,
how to get there is a quite different question
because there are very powerful forces pushing people
in different directions
than a positive, joyous, compassionate existence, right?
So if we were trying to, you know,
Elon Musk dreams of colonizing Mars at the moment.
So maybe you'll have a chance to start a new civilization
with a new governmental system.
And certainly there's quite a bit of chaos.
We're sitting now, I don't know what the date is,
but this is June.
There's quite a bit of chaos
and all different forms going on in the United States
and all over the world.
So there's a hunger for new types of governments,
new types of leadership, new types of systems.
And so what are the forces at play
and how do we move forward?
Yeah, I mean, colonizing Mars, first of all,
it's a super cool thing to do.
We should be doing it.
So you love the idea.
Yeah, I mean, it's more important than making
chocolatier chocolates and sexier lingerie
and many of the things that we spend a lot more resources on
as a species, right?
So I mean, we certainly should do it.
I think the possible futures in which a Mars colony
makes a critical difference for humanity
are very few.
I mean, I think, I mean,
assuming we make a Mars colony,
people go live there in a couple of decades.
I mean, their supplies are going to come from Earth.
The money to make the colony came from Earth
and whatever powers are supplying the goods there
from Earth are going to in effect be in control
of that Mars colony.
Of course, there are outlier situations where, you know,
Earth gets nuked into oblivion
and somehow Mars has been made self-sustaining by that point
and then Mars is what allows humanity to persist.
But I think that those are very, very, very unlikely.
You don't think it could be a first step on a long journey?
Of course, it's a first step on a long journey,
which is awesome.
I'm guessing the colonization of the rest of the physical universe
will probably be done by AGIs
that are better designed to live in space
than by the meat machines that we are.
But I mean, who knows?
We may cryopreserve ourselves in some superior way
to what we know now
and, like, shoot ourselves out to Alpha Centauri and beyond.
I mean, that's all cool.
It's very interesting and it's much more valuable
than most things that humanity is spending its resources on.
On the other hand, with AGI,
we can get to a singularity before the Mars colony
becomes self-sustaining for sure,
possibly before it's even operational.
So your intuition is that that's the thing that,
if we really invest resources,
we can get to faster than a legitimate,
full, like, self-sustaining colonization of Mars.
Yeah, and it's very clear that we will to me
because there's so much economic value
in getting from narrow AI toward AGI,
whereas the Mars colony, there's less economic value
until you get quite far out into the future.
So I think that's very interesting.
I just think it's somewhat off to the side.
I mean, just as I think, say, art and music
are very, very interesting,
and I want to see resources go into amazing art and music
being created, and I'd rather see that
than a lot of the garbage that society spends their money on.
On the other hand, I don't think Mars colonization
or inventing amazing new genres of music
is one of the things that is most likely
to make a critical difference in the evolution
of human or non-human life in this part of the universe
over the next decade.
Do you think AGI is really...?
AGI is by far the most important thing
that's on the horizon,
and then technologies that have direct ability
to enable AGI or to accelerate AGI
are also very important.
For example, say quantum computing.
I don't think that's critical to achieve AGI,
but certainly you could see how the right quantum computing architecture
could massively accelerate AGI,
similar to other types of nanotechnology, right?
Now, the quest to cure aging and end disease
while not in the big picture as important as AGI,
of course, it's important to all of us as individual humans,
and if someone made a super longevity pill
and distributed it tomorrow,
that would be huge and a much larger impact
than a Mars colony is going to have for quite some time.
But perhaps not as much as an AGI system.
No, because if you can make a benevolent AGI,
then all the other problems are solved.
I mean, once it's as generally intelligent as humans,
it can rapidly become massively more generally intelligent
than humans, and then that AGI should be able to solve
science and engineering problems much better than human beings,
as long as it is in fact motivated to do so.
That's why I said a benevolent AGI.
There could be other kinds.
Maybe it's good to step back a little bit.
I mean, we've been using the term AGI.
People often cite you as the creator,
at least the popularizer of the term AGI,
artificial general intelligence.
Can you tell the origin story of the term?
Sure, sure.
So, yeah, I would say I launched the term AGI
upon the world for what it's worth
without ever fully being in love with the term.
Right.
What happened is I was editing a book,
and this process started around 2001 or two.
I think the book came out in 2005, finally.
I was editing a book, which I had provisionally
titled Real AI.
And I mean, the goal was to gather together
fairly serious academic-ish papers on the topic
of making thinking machines that could really think
in the sense like people can or even more broadly
than people can.
So then I was reaching out to other folks
that I had encountered here or there
who were interested in that,
which included some other folks who I knew
from the transhumanist and singularitarian world
like Peter Voss, who has a company, AGI
Incorporated, still in California,
and included Shane Legg, who had worked for me
at my company WebMind in New York in the late 90s,
who by now has become rich and famous.
He was one of the co-founders of Google DeepMind.
But at that time, Shane was...
I think he may have been...
have just started doing his PhD with Marcus Hutter,
who at that time hadn't yet published his book
Universal AI, which sort of gives a mathematical
foundation for artificial general intelligence.
So I reached out to Shane and Marcus and Peter Voss
and Pei Wang, who was another former employee of mine
who had been Douglas Hofstadter's PhD student,
who had his own approach to AGI.
And a bunch of Russian folks. I reached out to these guys,
and they contributed papers for the book.
But that was my provisional title, but I never loved it
because in the end, you know, I was doing some...
what we would now call narrow AI as well,
like applying machine learning to genomics data
or chat data for sentiment analysis.
I mean, that work is real.
And in a sense, it's really AI.
It's just a different kind of AI.
Ray Kurzweil wrote about narrow AI versus strong AI.
But that seemed weird to me because, first of all,
narrow and strong are not antonyms, right?
That's right.
I mean, but secondly, strong AI was used
in the cognitive science literature to mean the hypothesis
that digital computer AIs could have true consciousness
like human beings.
So there was already a meaning to strong AI,
which was completely different but related, right?
So we were tossing around on an email list
whether what title it should be.
And so we talked about narrow AI, broad AI,
wide AI, general AI.
And I think it was either Shane Legg or Peter Voss
on the private email discussion we had.
He said, well, why don't we go with AGI,
artificial general intelligence?
And Pei Wang wanted to do GAI,
general artificial intelligence,
because in Chinese it goes in that order.
But we figured gay wouldn't work
in US culture at that time, right?
So we went with AGI.
We used it for the title of that book.
And part of Peter and Shane's reasoning
was you have the G factor in psychology,
which is IQ, general intelligence, right?
So you have a meaning of GI, general intelligence,
in psychology.
So then you're looking like artificial GI.
So then...
Oh, that makes a lot of sense.
Yeah, we used that for the title of the book.
And so I think maybe both Shane and Peter
think they invented the term.
But then later, after the book was published,
this guy, Mark Gubrud, came up to me
and he's like, well, I published an essay
with the term AGI in 1997 or something.
And so I'm just waiting for some Russian to come out
and say they published that in 1953, right?
I mean, that term...
For sure.
That term is not dramatically innovative or anything.
It's one of these obvious in hindsight things,
which is also annoying in a way,
because, you know,
Joscha Bach, who you interviewed, is a close friend of mine.
He likes the term synthetic intelligence,
which I like much better,
but it hasn't actually caught on, right?
Because, I mean, artificial is a bit off to me,
because artificial is like a tool or something,
but not all AGIs are going to be tools.
I mean, they may be now,
but we're aiming toward making them agents rather than tools.
And in a way, I don't like the distinction
between artificial and natural,
because, I mean, we're part of nature also,
and machines are part of nature.
I mean, you can look at evolved versus engineered,
but that's a different distinction.
Then it should be engineered general intelligence, right?
And then general, well,
if you look at Marcus Hutter's book,
Universal AI, what he argues there is, you know,
within the domain of computation theory,
which is limited but interesting.
So if you assume computable environments
and computable reward functions,
then he articulates what would be a truly general intelligence,
a system called AIXI, which is quite beautiful.
AIXI.
AIXI, and that's the middle name
of my latest child, actually.
What's the first name?
First name is QORXI, which my wife came out with,
but that's an acronym
for Quantum Organized Rational Expanding Intelligence.
AIXI is the middle name.
His middle name is Xiphanes, actually,
which refers to the formal principle underlying AIXI.
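For reference, the agent Hutter defines selects each action by expectimax over all programs consistent with the interaction history so far, weighted by a Solomonoff-style simplicity prior. Schematically (a sketch of the standard formulation, not a substitute for the book):

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $q$ ranges over programs (candidate environments), and $\ell(q)$ is the length of $q$, so the $2^{-\ell(q)}$ weighting favors simpler explanations of the observation-reward history. The expression is uncomputable in general, which is why AIXI is an ideal rather than an algorithm.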
But in any case...
You're giving Elon Musk's new child a run for his money.
Well, I did it first.
He copied me with this new freakish name.
But now, if I have another baby, I'm going to have to outdo him.
Outdo him.
It's becoming an arms race of weird geeky baby names.
We'll see what the babies think about it, right?
Yeah.
But, I mean, my oldest son, Zarathustra, loves his name,
and my daughter, Scheherazade, loves her name.
So far, basically, if you give your kids weird names...
They live up to it.
Well, you're obliged to make the kids weird enough
that they like the names, right?
It directs their upbringing in a certain way.
But, yeah, anyway, I mean,
what Mark has shown in that book
is that a truly general intelligence
is theoretically possible,
but would take infinite computing power.
So then, the artificial is a little off.
The general is not really achievable within physics
as we know it, and I mean,
physics as we know it may be limited,
but that's what we have to work with now.
Intelligence...
Infinitely general, you mean.
Like, information processing perspective, yeah.
Yeah, intelligence is not very well-defined either, right?
I mean, what does it mean?
I mean, in AI now, it's fashionable to look at it
as maximizing an expected reward over the future,
but that sort of definition is pathological in various ways.
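The definition being critiqued here is usually given in Legg and Hutter's form: an agent's universal intelligence is its expected performance averaged over all computable environments, weighted toward the simpler ones. Roughly:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $E$ is the set of computable reward-bearing environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ is the expected total reward policy $\pi$ earns in $\mu$. It is this single scalar reward $V$ that the open-ended view discussed next deliberately drops.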
And my friend David Weinbaum, a.k.a. Weaver,
he had a beautiful PhD thesis on open-ended intelligence
trying to conceive intelligence in a...
Without a reward.
Yeah, he's just looking at it differently.
He's looking at complex self-organizing systems
and looking at an intelligence system
as being one that, you know,
revises and grows and improves itself
in conjunction with its environment
without necessarily there being one objective function
it's trying to maximize,
although over certain intervals of time,
it may act as if it's optimizing a certain objective function.
Very much Solaris from Stanislaw Lem's novels, right?
So, yeah, the point is artificial, general, and intelligence.
Don't work.
They're all bad.
On the other hand, everyone knows what AI is.
Yeah.
And AGI seems immediately comprehensible
to people with a technical background.
So, I think that the term has served a sociological function.
Now it's out there everywhere, which baffles me.
It's like KFC. I mean, that's it.
We're stuck with AGI probably for a very long time
until AGI systems take over and rename themselves.
Yeah.
And then we'll be biological.
We're stuck with GPUs too,
which mostly have nothing to do with graphics anymore, right?
I wonder what the AGI system will call us humans.
That was maybe...
Grandpa.
GPs.
Grandpa processing unit.
Biological grandpa processing unit.
Okay, so maybe also just a comment on AGI
representing, before even the term existed,
representing a kind of community.
You've talked about this in the past,
sort of AI is coming in waves,
but there's always been this community of people who dream
about creating general, human-level,
super-intelligent systems.
Can you maybe give your sense of the history of this community
as it exists today, as it existed before this deep learning revolution,
all throughout the winters and the summers of AI?
Sure.
First, I would say, as a side point,
the winters and summers of AI are greatly exaggerated by Americans.
And if you look at the publication record
of the artificial intelligence community since, say, the 1950s,
you would find a pretty steady growth and advance of ideas and papers.
What's thought of as an AI winter or summer was sort of
how much money is the U.S. military pumping into AI,
which was meaningful.
On the other hand, there was AI going on in Germany, UK,
and Japan, and Russia, all over the place,
while U.S. military got more and less enthused about AI.
So, I mean...
That happened to be, just for people who don't know,
the U.S. military happened to be the main source of funding for AI research.
So, another way to phrase that is it's up and down of funding
for artificial intelligence research.
And I would say the correlation between funding and intellectual advance
was not 100%, right?
Because, I mean, in Russia, as an example, or in Germany,
there was less dollar funding than in the U.S.,
but many foundational ideas were laid out,
and that was more theory than implementation, right?
And U.S. really excelled at sort of breaking through from theoretical papers
to working implementations,
which did go up and down somewhat with U.S. military funding.
But still, I mean, you can look...
In the 1980s, Ernst Dickmanns in Germany
had self-driving cars on the Autobahn, right?
And, I mean, this...
It was a little too early for the car industry
to catch on, as has happened now.
But, I mean, that whole advancement of self-driving car technology in Germany
was pretty much independent of AI military summers and winters in the U.S.
So, there's been more going on in AI globally
than not only most people on the planet realized,
but then most new AI PhDs realized,
because they've come up within a certain subfield of AI
and haven't had to look so much beyond that.
But I would say, when I got my PhD in 1989 in mathematics,
I was interested in AI already.
In Philadelphia, by the way.
Yeah, I started at NYU, then I transferred to Philadelphia,
to Temple University, good old North Philly.
North Philly, yeah.
Pearl of the U.S., right?
You never stopped at a red light, man,
because you were afraid if you stopped at a red light,
someone would carjack you, so you just drive through every red light.
Yeah.
Every day driving or bicycling to Temple from my house
was like a new adventure, right?
But, yeah, the reason I didn't do a PhD in AI
was what people were doing in the academic AI field then
was just astoundingly boring and seemed wrong-headed to me.
It was really like rule-based expert systems and production systems.
Actually, I loved mathematical logic.
I had nothing against logic as the cognitive engine for an AI.
But the idea that you could type in the knowledge
that AI would need to think seemed just completely stupid
and wrong-headed to me.
I mean, you can use logic if you want,
but somehow the system has got to be...
Automated.
Learning, right? It should be learning from experience.
And the AI field then was not interested in learning from experience.
I mean, some researchers certainly were.
I mean, I remember in mid-80s,
I discovered a book by John Andreae,
which was...
It was about a reinforcement learning system called PURR-PUSS,
P-U-R-R-P-U-S-S,
which was an acronym that I can't even remember
what it was for. PURR-PUSS, anyway.
But, I mean, that was a system that was supposed to be an AGI
and basically by some sort of fancy,
like Markov decision process learning,
it was supposed to learn everything just from the bits coming into it
and learn to maximize its reward and become intelligent, right?
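A modern reader can get the flavor of such a system from the simplest reward-driven learner in use today: tabular Q-learning. The sketch below is emphatically not Andreae's PURR-PUSS algorithm, just a minimal illustration of learning to maximize reward from a stream of observations; the `env` object with `reset`, `step`, and `actions` is a hypothetical interface:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning: estimate long-run reward for each (state, action)."""
    q = defaultdict(float)  # (state, action) -> estimated discounted return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore at random sometimes; otherwise exploit current estimates.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Temporal-difference update: nudge the estimate toward the
            # observed reward plus the discounted value of the best next action.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

The family resemblance to what Andreae was attempting decades earlier, on hardware that could not support it, is the point.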
So, that was there in academia back then,
but it was like isolated, scattered, weird people.
But all these isolated, scattered, weird people in that period,
I mean, they laid the intellectual grounds for what happened later.
I look at John Andreae at the University of Canterbury
with his PURR-PUSS reinforcement learning Markov system.
He was the PhD supervisor for John Cleary in New Zealand.
Now, John Cleary worked with me when I was at Waikato University in 1993 in New Zealand,
and he worked with Ian Witten there,
and they launched WEKA,
which was the first open-source machine learning toolkit,
which was launched in, I guess, 1993 or 1994 when I was at Waikato University.
Written in Java, unfortunately.
Written in Java, which was a cool language back then, right?
Yeah, I guess it's still, well, it's not cool anymore, but it's powerful.
I find, like most programmers now, I find Java unnecessarily bloated.
But back then, it was like Java or C++, basically.
Object-oriented, so it's nice.
Java was easier for students.
Amusingly, a lot of the work on WEKA when we were in New Zealand was funded by a U.S.
sorry, a New Zealand government grant to use machine learning
to predict the menstrual cycles of cows.
So, in the U.S., all the grant funding for AI was about how to kill people or spy on people.
In New Zealand, it's all about cows or kiwi fruits, right?
Yeah.
So, yeah, anyway, I mean, John Andreae had his probability theory-based reinforcement
learning proto-AGI.
John Cleary was trying to do much more ambitious probabilistic AGI systems.
Now, John Cleary helped do WEKA, which is the first open-source machine learning toolkit,
so it's a predecessor of TensorFlow and Torch and all these things.
Also, Shane Legg was at Waikato working with John Cleary and Ian Witten and this whole group
and then working with my own company, WebMind, an AI company I had in the late 90s
with a team there at Waikato University, which is how Shane got his head full of AGI,
which led him to go on and, with Demis Hassabis, found DeepMind.
So, what you can see through that lineage is in the 80s and 70s,
John Andreae was trying to build probabilistic reinforcement learning AGI systems.
The technology, the computers just weren't there to support it.
His ideas were very similar to what people are doing now.
But, you know, although he's long since passed away and didn't become that famous outside of Canterbury,
I mean, the lineage of ideas passed on from him to his students to their students.
You can go trace directly from there to me and to DeepMind, right?
So, there was a lot going on in AGI that did ultimately lay the groundwork for what we have today,
but there wasn't a community, right?
And so, when I started trying to pull together an AGI community,
it was in, I guess, the early aughts when I was living in Washington, D.C.
and making a living doing AI consulting for various U.S. government agencies.
And I organized the first AGI workshop in 2006.
I mean, it wasn't like it was literally in my basement or something.
I mean, it was in the conference room at a Marriott in Bethesda.
It's not that edgy or underground, unfortunately, but still...
How many people attended?
About 60 or something.
That's not bad.
I mean, D.C. has a lot of AI going on.
Probably until the last five or 10 years, much more than Silicon Valley,
although it's just quiet because of the nature of what happens in D.C.
where your business isn't driven by PR.
Mostly, when something starts to work really well,
it's taken black and becomes even more quiet, right?
But yeah, the thing is that really had the feeling of a group of starry-eyed mavericks
huddled in a basement plotting how to overthrow the narrow AI establishment.
And for the first time, in some cases, coming together with others
who shared their passion for AGI and their technical seriousness about working on it, right?
I mean, that's very, very different than what we have today.
I mean, now it's a little bit different.
We have the AGI conference every year, and there are several hundred people rather than 50.
Now it's more like this is the main gathering of people who want to achieve AGI
and who think that large-scale nonlinear regression is not the golden path to AGI.
So I mean it's...
AKA neural networks.
Yeah, yeah, yeah.
Well, certain architectures for learning using neural networks.
So yeah, the AGI conferences are sort of now the main concentration
of people not obsessed with deep neural nets and deep reinforcement learning
but still interested in AGI.
Not the only ones.
I mean, there's other little conferences and groupings interested in human-level AI
and cognitive architectures and so forth.
It's been a big shift.
Back then, you couldn't really...
It would have been very, very edgy then to give a university department seminar
that mentioned AGI or human-level AI.
It was more like you had to talk about something more short-term and immediately practical,
and then in the bar after the seminar, you could bullshit about AGI
in the same breath as time travel or the simulation hypothesis or something, right?
Whereas now AGI is not only in the academic seminar room,
like you have Vladimir Putin knows what AGI is.
He's like, Russia needs to become the leader in AGI, right?
So national leaders and CEOs of large corporations.
I mean, the CTO of Intel, Justin Rattner, this was years ago,
at a Singularity Summit conference, 2008 or something,
he's like, we believe Ray Kurzweil, the Singularity will happen in 2045,
and it will have Intel inside.
I mean, so it's gone from being something which is the pursuit of like crazed mavericks,
and science fiction fanatics to being, you know, a marketing term
for large corporations and the national leaders, right?
Which is an astounding transition.
But yeah, in the course of this transition, I think a bunch of sub-communities have formed
and the community around the AGI conference series is certainly one of them.
It hasn't grown as big as I might have liked it to.
On the other hand, you know, sometimes a modest-sized community
can be better for making intellectual progress also.
You get it with Society for Neuroscience conference.
You have 35 or 40,000 neuroscientists.
On the one hand, it's amazing.
On the other hand, you're not going to talk to the leaders of the field there
if you're an outsider.
In the same sense, AAAI,
the main kind of generic artificial intelligence conference, is too big.
It's too amorphous.
Like, it doesn't make...
Well, yeah, and NIPS has become a company advertising outlet now.
So yeah, I mean, to comment on the role of AGI in the research community,
I'd still, if you look at NeurIPS, if you look at CVPR, if you look at ICLR,
you know, AGI is still seen as the outcast.
I would still... I would say in these main machine learning,
in these main artificial intelligence conferences amongst the researchers,
I don't know if it's an accepted term yet.
What I've seen bravely, you mentioned Shane Legg, is DeepMind and then OpenAI
are the two places that are, I would say, unapologetically so far.
I think it's actually changing, unfortunately,
but so far they've been pushing the idea that the goal is to create an AGI.
Well, they have billions of dollars behind them.
So, I mean, in the public mind, that certainly carries some oomph, right?
But they also have really strong researchers, right?
They do. They're great teams. I mean, DeepMind in particular.
DeepMind has Marcus Hutter walking around.
I mean, there's all these folks who basically their full-time position
involves dreaming about creating AGI.
I mean, Google Brain has a lot of amazing AGI-oriented people also.
I mean, so I'd say from a public marketing view, DeepMind and OpenAI
are the two large well-funded organizations that have put the term and concept AGI
out there sort of as part of their public image.
But I mean, they're certainly not.
There are other groups that are doing research that seems just as AGI-ish to me.
I mean, including a bunch of groups in Google's main Mountain View office.
So, yeah, it's true.
AGI is somewhat away from the mainstream now.
But if you compare to where it was, you know, 15 years ago,
there's been an amazing mainstreaming.
You could say the same thing about super longevity research,
which is one of my application areas that I'm excited about.
I mean, I've been talking about this since the 90s,
but working on this since 2001.
And back then, really, to say you're trying to create therapies
to allow people to live hundreds or thousands of years,
you were way, way, way, way out of the industry academic mainstream.
But now, you know, Google had Project Calico, Craig Venter
had Human Longevity Incorporated.
And then once the suits come marching in, right?
I mean, once there's big money in it, then people are forced to take it seriously
because that's the way modern society works.
So, it's still not as mainstream as cancer research.
Just as AGI is not as mainstream as automated driving or something.
But the degree of mainstreaming that's happened in the last, you know,
10 to 15 years is astounding to those of us who've been at it for a while.
Yeah, but there's a marketing aspect to the term,
but in terms of actual full force research that's going on under the header of AGI,
it's currently, I would say, dominated, maybe you can disagree,
dominated by neural networks research that the nonlinear regression, as you mentioned.
Like, what's your sense with OpenCog, with your work in general?
Like logic-based systems and expert systems.
For me, it always seemed to capture a deep element of intelligence
that needs to be there.
Like you said, it needs to learn, it needs to be automated somehow,
but that seems to be missing from a lot of research currently.
So, what's your sense?
I guess one way to ask this question,
what's your sense of what kind of things will an AGI system need to have?
Yeah, that's a very interesting topic that I've thought about for a long time.
And I think there are many, many different approaches that can work
for getting to human level AGI.
So, I don't think there's like one golden algorithm,
one golden design that can work.
And I mean, flying machines are the much better analogy here, right?
Like, I mean, you have airplanes, you have helicopters, you have balloons,
you have stealth bombers that don't look like regular airplanes,
you've got blimps.
Birds too.
Birds, yeah, and bugs, right?
Yeah.
And I mean, there are certainly many kinds of flying machines.
And there are catapults that can just launch you.
And there are bicycle-powered flying machines, right?
Yeah.
So, now these are all analysable by a basic theory of aerodynamics, right?
So, one issue with AGI is we don't yet have the analog of the theory of aerodynamics.
And that's what Marcus Hutter was trying to make with AIXI
and his general theory of general intelligence.
But that theory, in its most clearly articulated parts, really only works
for either infinitely powerful machines or insanely, impractically powerful machines.
So, I mean, if you were going to take a theory-based approach to AGI,
what you would do is say, well, let's take what's called, say, AIXI-tl,
which is Hutter's AIXI machine that can work with merely insanely much processing power
rather than infinitely much processing power.
What does TL stand for?
Time and length.
Okay.
So, you're basically how...
Like constrain some of them.
Yeah.
How AIXI works, basically, is that for each action it wants to take,
before taking that action, it looks at all its history.
Yeah.
And then it looks at all possible programs that it could use to make a decision.
Yeah.
And it decides, like, which decision program would have let it make the best decisions
according to its reward function over its history.
And it uses that decision program to make the next decision, right?
It's not afraid of infinite resources.
It's searching through the space of all possible computer programs
in between each action and each next action.
Now, AIXI-tl searches through all possible computer programs
that have runtime less than t and length less than l,
which is still an impractically humongous space, right?
So, what you would like to do to make an AGI,
and what will probably be done 50 years from now to make an AGI,
is say, okay, well, we have some constraints.
We have these processing power constraints,
and we have space and time constraints on the program.
We have energy utilization constraints,
and we have this particular class of environments that we care about,
which may be, say, manipulating physical objects on the surface of the Earth,
communicating in human language, whatever our particular, not annihilating humanity,
whatever our particular requirements happen to be.
If you formalize those requirements in some formal specification language,
you should then be able to run an automated program specializer on AIXI-tl,
specialize it to the computing resource constraints
and the particular environment and goal,
and then it will spit out the version of AIXI-tl specialized
to your resource restrictions and your environment, which will be your AGI, right?
And that, I think, is how our super AGI will create new AGI systems, right?
But that's a very...
It just seems really inefficient.
That's a very Russian approach, by the way.
The whole field of program specialization came out of Russia.
Can you backtrack?
So, what is program specialization?
So, it's basically...
Well, take sorting, for example.
You can have a generic program for sorting lists.
But what if all your lists, you care about a length, 10,000 or less?
Got it.
You can run an automated program specializer on your sorting algorithm,
and it will come up with the algorithm that's optimal
for sorting lists of length 10,000 or less, right?
It's kind of like... isn't the process of evolution
a program specializer to the environment?
So, it kind of evolved human beings?
Well, exactly.
I mean, your Russian heritage is showing there.
So, with Alexander Vityaev and Peter Anokhin and so on,
I mean, there's a long history of thinking about evolution that way also, right?
So, my point is that what we're thinking of is a human-level general intelligence.
If you start from narrow AIs, like are being used in the commercial AI field now,
then you're thinking, okay, how do we make it more and more general?
On the other hand, if you start from AIXI or Schmidhuber's Gödel machine,
or these infinitely powerful but practically infeasible AIs,
then getting to a human-level AGI is a matter of specialization.
It's like, how do you take these maximally general learning processes,
how do you specialize them so that they can operate within the resource constraints that you have,
but will achieve the particular things that you care about?
Because we humans are not maximally general intelligences, right?
If I ask you to run a maze in 750 dimensions, you'll probably be very slow.
Whereas at two dimensions, you're probably way better, right?
I mean, because our hippocampus has a two-dimensional map in it, right?
And it does not have a 750-dimensional map in it.
I mean, we're a peculiar mix of generality and specialization, right?
We probably start quite general at birth.
Obviously, it's still narrow, but more general than we are at age 20 and 30 and 40 and 50 and 60.
I don't think that. I think it's more complex than that,
because in some sense, a young child is less biased
and the brain has yet to crystallize into appropriate structures
for processing aspects of the physical and social world.
On the other hand, a young child is very tied to their sensorium,
whereas we can deal with abstract mathematics, like 750 dimensions,
and the young child cannot, because they haven't grown what Piaget called the formal operational capabilities.
They haven't learned to abstract yet, right?
And the ability to abstract gives you a different kind of generality than what a baby has.
So there's both more specialization and more generalization
that comes with the development process, actually.
I mean, I guess just the trajectories of the specialization are most controllable at the young age,
I guess is one way to put it.
Do you have kids?
No.
They're not as controllable as you think.
So you think it's interesting.
I think, honestly, a human adult is much more generally intelligent than a human baby.
Babies are very stupid.
I mean, they're cute, which is why we put up with their repetitiveness and stupidity.
And they have what the Zen guys would call a beginner's mind, which is a beautiful thing,
but that doesn't necessarily correlate with a high level of intelligence.
On the plot of cuteness and stupidity, there's a process that allows us to put up with their stupidity
as they become more intelligent.
So by the time you're an ugly old man like me, you've got to get really, really smart to compensate.
To compensate, okay, cool.
But yeah, going back to your original question, so the way I look at human level AGI is,
yeah, how do you specialize, you know, unrealistically inefficient superhuman brute force learning processes
to the specific goals that humans need to achieve and the specific resources that we have.
And both of these, the goals and the resources and the environments, I mean, all this is important.
On the resources side, it's important that the hardware resources we're bringing to bear are very different than the human brain.
So the way I would want to implement AGI on a bunch of neurons in a vat that I could rewire arbitrarily
is quite different than the way I would want to create AGI on, say, a modern server farm of CPUs and GPUs,
which in turn may be quite different than the way I would want to implement AGI on, you know,
whatever quantum computer we'll have in 10 years,
supposing someone makes a robust quantum turing machine or something, right?
So I think, you know, there's been co-evolution of the patterns of organization in the human brain
and the physiological particulars of the human brain over time.
And when you look at neural networks, that is one powerful class of learning algorithms,
but it's also a class of learning algorithms that evolve to exploit the particulars of the human brain as a computational substrate.
If you're looking at the computational substrate of a modern server farm,
you won't necessarily want the same algorithms that you want on the human brain.
And, you know, from the right level of abstraction,
you could look at maybe the best algorithms on the brain and the best algorithms on a modern computer network
as implementing the same abstract learning and representation processes,
but, you know, finding that level of abstraction is its own AGI research project then, right?
So that's about the hardware side and the software side, which follows from that.
Then, regarding one of the requirements, I wrote a paper years ago on what I called the embodied communication prior,
which was quite similar in intent to Yoshua Bengio's recent paper on the consciousness prior,
except I didn't want to wrap up consciousness in it,
because to me, the qualia problem and subjective experience is a very interesting issue also, which we can chat about.
But I would rather keep that philosophical debate distinct from the debate of what kind of biases
do you want to put in the general intelligence to give it human-like general intelligence.
And I'm not sure Yoshua Bengio is really addressing that kind of consciousness.
He's just using the term.
I love Yoshua to pieces; he's by far my favorite of the lions of deep learning,
and he's such a good-hearted guy and a great thinker.
Yeah, for sure.
I am not sure he has plumbed the depths of the philosophy of consciousness.
No, he's using it as a sexy term.
Yeah, yeah, yeah.
So what I called it was the embodied communication prior.
Can you maybe explain it a little bit?
Yeah, yeah.
What I meant was, what are we humans evolved for?
You can say being human, but that's very abstract, right?
I mean, our minds control individual bodies, which are autonomous agents,
moving around in a world that's composed largely of solid objects, right?
And we've also evolved to communicate via language with other solid object agents
that are going around doing things collectively with us in a world of solid objects.
And these things are very obvious.
But if you compare them to the scope of all possible intelligences,
or even all possible intelligences that are physically realizable,
that actually constrains things a lot.
So if you start to look at, you know,
how would you realize some specialized or constrained version of universal general intelligence
in a system that has, you know, limited memory and limited speed of processing,
but whose general intelligence will be biased toward controlling a solid object agent,
which is mobile in a solid object world for manipulating solid objects
and communicating via language with other similar agents in that same world, right?
Then starting from that, you're starting to get a requirements analysis
for human level general intelligence.
And then that leads you into cognitive science.
And you can look at, say, what are the different types of memory that the human mind and brain has?
And this has matured over the last decades.
And I got into this a lot.
So after getting my PhD in math, I was an academic for eight years.
I was in departments of mathematics, computer science, and psychology.
When I was in the psychology department at the University of Western Australia,
I was focused on cognitive science of memory and perception.
Actually, I was teaching neural nets, though there were no deep neural nets then.
It was multi-layer perceptrons, right?
Psychology.
Cognitive science.
It was cross-disciplinary among engineering, math, psychology, philosophy, linguistics, computer science.
But yeah, we were teaching psychology students to try to model the data
from human cognition experiments using multi-layer perceptrons,
which was the early version of a deep neural network.
Very, very, yeah, recurrent backprop was very, very slow to train back then, right?
So this is the study of these constrained systems that are supposed to deal with physical objects.
So if you look at cognitive psychology, you can see there's multiple types of memory,
which are to some extent represented by different subsystems in the human brain.
So we have episodic memory, which takes into account our life history and everything that's happened to us.
We have declarative or semantic memory, which is like facts and beliefs abstracted from the particular situations that they occurred in.
There's sensory memory, which to some extent is sense modality specific,
and then to some extent is unified across sense modalities.
There's procedural memory, memory of how to do stuff, like how to swing the tennis racket, right?
There's motor memory, but it's also a little more abstract than motor memory.
It involves cerebellum and cortex working together.
Then there's memory linkage with emotion, which has to do with linkages of cortex and limbic system.
There's specifics of spatial and temporal modeling connected with memory,
which has to do with hippocampus and thalamus connecting to cortex.
And the basal ganglia, which influences goals.
So we have specific memory of what goals, sub-goals, and sub-sub-goals we wanted to pursue in which contexts in the past.
Human brain has substantially different subsystems for these different types of memory
and substantially differently tuned learning, like differently tuned modes of long-term potentiation,
where the types of neurons and neurotransmitters in the different parts of the brain
correspond to these different types of knowledge.
These different types of memory and learning in the human brain,
I mean, you can trace these all back to embodied communication for controlling agents in worlds of solid objects.
So if you look at building an AGI system, one way to do it, which starts more from cognitive science than neuroscience,
is to say, okay, what are the types of memory that are necessary for this kind of world?
Yeah, yeah, necessary for this sort of intelligence.
What types of learning work well with these different types of memory?
And then how do you connect all these things together, right?
And of course the human brain did it incrementally through evolution
because each of the sub-networks of the brain, and it's not really the lobes of the brain,
it's the sub-networks, each of which is widely distributed,
each of the sub-networks of the brain co-evolves with the other sub-networks of the brain,
both in terms of its patterns of organization and the particulars of the neurophysiology.
So they all grew up communicating and adapting to each other.
It's not like they were separate black boxes that were then glommed together, right?
Whereas as engineers, we would tend to say, let's make the declarative memory box here
and the procedural memory box here and the perception box here and wire them together.
And you can do that. It's interesting; I mean, that's how a car is built, right?
But on the other hand, that's clearly not how biological systems are made.
The parts co-evolve so as to adapt and work together.
That's, by the way, how every human-engineered system that flies,
which we were using as an analogy before, is built as well.
So do you find this at all appealing?
There's been a lot of really exciting work in cognitive architectures,
for example, throughout the last few decades, and I find it strange that it's ignored.
Do you find that work interesting?
Yeah, I mean, I had a lot to do with that community.
And you know, Paul Rosenbloom and John Laird,
who built the SOAR architecture, are friends of mine.
And I learned SOAR quite well, and ACT-R and these different cognitive architectures.
How I was looking at the AI world about 10 years ago,
before this whole commercial deep learning explosion, was:
on the one hand, you had these cognitive architecture guys
who were working closely with psychologists and cognitive scientists
who had thought a lot about how the different parts of a human-like mind
should work together.
On the other hand, you had these learning theory guys
who didn't care at all about the architecture,
but were just thinking about how do you recognize patterns in large amounts of data.
And in some sense, what you needed to do was take what the learning
theory guys were doing and put it together with what the cognitive
architecture guys were doing, and then you would have what you needed.
Now, you can't, unfortunately, when you look at the details,
you can't just do that without totally rebuilding what is happening
on both the cognitive architecture and the learning side.
So, I mean, they tried to do that in SOAR,
but what they ultimately did is like take a deep neural net or something for perception
and you include it as one of the black boxes.
It becomes one of the boxes.
The learning mechanism becomes one of the boxes as opposed to fundamental...
Yeah, that doesn't quite work.
You could look at some of the stuff DeepMind has done,
like the differential neural computer or something.
That sort of has a neural net for deep learning perception.
It has another neural net, which is like a memory matrix.
It stores, say, the map of the London subway or something.
So, probably Demis Hassabis was thinking about it as part of cortex and part of hippocampus,
because the hippocampus has a spatial map.
And when he was a neuroscientist, he was doing a bunch on cortex-hippocampus interconnection.
So, there, the DNC would be an example of folks from the deep neural net world
trying to take a step in the cognitive architecture direction
by having two neural modules that correspond roughly to two different parts of the human brain
that deal with different kinds of memory and learning.
But, on the other hand, it's super, super, super crude from the cognitive architecture view, right?
Just as what John Laird and SOAR did with neural nets was super, super crude from a learning point of view,
because the learning was, like, off to the side, not affecting the core representations, right?
I mean, you weren't learning the representation.
You were learning the data that feeds into the...
You were learning abstractions of perceptual data to feed into the representation that was not learned, right?
So, yeah, this was clear to me a while ago, and one of my hopes with the AGI community
was to sort of bring people from those two directions together.
That didn't happen much in terms of...
Not yet.
What I was going to say is it didn't happen in terms of bringing, like, the lions of cognitive architecture together
with the lions of deep learning.
It did work in the sense that a bunch of younger researchers have had their heads filled with both of those ideas.
This comes back to a saying my dad, who was a university professor, often quoted to me,
which was, science advances one funeral at a time, which I'm trying to avoid.
Like, I'm 53 years old, and I'm trying to invent amazing, weird-ass new things
that nobody ever thought about, which we'll talk about in a few minutes.
But there is that aspect, right?
Like, the people who've been at AI a long time and have made their career developing one aspect,
like a cognitive architecture or a deep learning approach.
It can be hard once you're old and have made your career doing one thing.
It can be hard to mentally shift gears.
I mean, I try quite hard to remain flexible-
Have you been successful somewhat in changing...
Maybe have you changed your mind on some aspects of what it takes to build an AGI?
Like, technical things?
The hard part is that the world doesn't want you to.
The world or your own brain?
Well, that one point is that your brain doesn't want to.
The other part is that the world doesn't want you to.
Like, the people who have followed your ideas get mad at you if you change your mind.
And, you know, the media wants to pigeonhole you as an avatar of a certain idea.
Yeah, I've changed my mind on a bunch of things.
I mean, when I started my career, I really thought quantum computing would be necessary for AGI.
And I doubt it's necessary now, although I think it will be a super major enhancement.
But, I mean, I'm also...
I'm now in the middle of embarking on a complete rethink and rewrite from scratch
of our OpenCog AGI system together with Alexey Potapov and his team in St. Petersburg,
which is working with me in SingularityNet.
So, now we're trying to, like, go back to basics.
Take everything we learned from working with the current OpenCog system.
Take everything everybody else has learned from working with their proto-AGI systems
and design the best framework for the next stage.
And I do think there's a lot to be learned from the recent successes with deep neural nets
and deep reinforcement systems.
I mean, people made these essentially trivial systems work much better than I thought they would.
And there's a lot to be learned from that.
And I want to incorporate that knowledge appropriately in our OpenCog 2.0 system.
On the other hand, I also think current deep neural net architectures as such
will never get you anywhere near AGI.
So, I think you want to avoid the pathology of throwing the baby out with the bathwater
and saying, well, these things are garbage because foolish journalists overblow them
as being the path to AGI and a few researchers overblow them as well.
There's a lot of interesting stuff to be learned there, even though those are not the golden path.
So, maybe this is a good chance to step back.
You mentioned OpenCog 2.0.
Go back to OpenCog 0.0, which exists now.
Yeah, maybe talk through the history of OpenCog and you're thinking about these ideas.
I would say OpenCog 2.0 is a term we're throwing around sort of tongue-in-cheek
because the existing OpenCog system that we're working on now is not remotely close
to what we'd consider a 1.0, right?
I mean, it's been around, what, 13 years or something,
but it's still an early-stage research system, right?
Actually, we're going back to the beginning in terms of theory and implementation
because we feel like that's the right thing to do,
but I'm sure what we end up with is going to have a huge amount in common with the current system.
I mean, we all still like the general approach.
So, first of all, what is OpenCog?
Sure, OpenCog is an open-source software project that I launched together with several others in 2008
and probably the first code written toward that was written in 2001 or 2002 or something
that was developed as a proprietary code base within my AI company, NovaMente LLC.
And we decided to open-source it in 2008, cleaned up the code, threw out some things, added some new things.
What language is it written in?
It's C++.
Primarily, there's a bunch of scheme as well, but most of it's C++.
And it's separate from something we also talked about, the SingularityNet.
So, it was born as a non-networked thing.
Correct.
Well, there are many levels of networks involved here, right?
Connectivity to the Internet?
Oh, no.
At birth.
Yeah.
I mean, SingularityNet is a separate project and a separate body of code.
And you can use SingularityNet as part of the infrastructure for a distributed OpenCog system.
But there are different layers.
Yeah.
Got it.
So, OpenCog, on the one hand, as a software framework, could be used to implement a variety of different AI architectures and algorithms.
In practice, there's been a group of developers, which I've been leading together with Linus Vepstas, Neil Geisweiler, and a few others, which have been using the OpenCog platform and infrastructure to implement certain ideas about how to make an AI.
So, there's been a little bit of ambiguity about OpenCog, the software platform, versus OpenCog, the AGI design, because in theory, you could use that software to do, you could use it to make a neural net.
You could use it to make a lot of different AGI.
So, what kind of stuff does the software platform provide, like in terms of utilities, tools like what?
Yeah, let me first tell about OpenCog as a software platform, and then I'll tell you the specific AGI R&D we've been building on top of it.
Yep.
So, the core component of OpenCog is a software platform, is what we call the AtomSpace, which is a weighted labeled hypergraph.
Atom? AtomSpace?
AtomSpace, yeah, yeah, not Adam, like Adam and Eve, although that would be cool, too.
Yeah, so you have a hypergraph, which is like, so a graph in this sense is a bunch of nodes with links between them.
A hypergraph is like a graph, but links can go between more than two nodes.
You have a link between three nodes, and in fact, OpenCog's AtomSpace would properly be called a metagraph,
because you can have links pointing to links, or you could have links pointing to whole subgraphs, right?
So, it's an extended hypergraph or a metagraph.
Is metagraph a technical term?
It is now a technical term.
Interesting.
I don't think it was yet a technical term when we started calling this a generalized hypergraph,
but in any case, it's a weighted labeled generalized hypergraph or weighted labeled metagraph.
The weights and labels mean that the nodes and links can have numbers and symbols attached to them,
so they can have types on them.
They can have numbers on them that represent, say, a truth value or an importance value for a certain purpose.
And of course, like with all things, you can reduce a metagraph to a hypergraph,
and you can reduce a hypergraph to a graph, and a graph to an adjacency matrix.
I mean, there's always multiple representations.
But there's a layer of representation that seems to work well here, got it.
Right, right, right.
And so, similarly, you could have a link to a whole graph, because a whole graph could represent, say, a body of information.
And I could say, I reject this body of information, then one way to do that is make that link go to that whole subgraph
representing the body of information, right?
I mean, there are many alternate representations, but that's...
Anyway, what we have in OpenCog is an AtomSpace, which is this weighted labeled generalized hypergraph
knowledge store. It lives in RAM; there's also a way to back it up to disk.
There are ways to spread it among multiple different machines.
Then there are various utilities for dealing with that.
So there's a pattern matcher, which lets you specify a sort of abstract pattern,
and then search through a whole AtomSpace, the weighted labeled hypergraph,
to see what sub-hypergraphs may match that pattern, for example.
So then there's something called the CogServer in OpenCog, which lets you run a bunch of different agents or processes
in a scheduler, and each of these agents basically reads stuff from the AtomSpace and writes stuff to the AtomSpace.
So this is sort of the basic operational model.
That's the software framework.
Right, and of course, there's a lot there just from a scalable software engineering standpoint.
So you could use this... I don't know if you've... Have you looked into Stephen Wolfram's physics project recently, with the hypergraphs and stuff?
Could you theoretically use the software framework to play with it?
You certainly could, although Wolfram would rather die than use anything but Mathematica for his work.
Well, yeah, but there's a big community of people who would love integration.
And like you said, the young minds love the idea of integrating, of connecting things.
Yeah, that's right.
And I would add on that note, the idea of using hypergraph type models in physics is not very new.
Like, if you look at...
The Russians did it first.
Well, I'm sure they did, and a guy named Ben Dribus, who's a mathematician, a professor in Louisiana or somewhere,
had a beautiful book on quantum sets and hypergraphs and algebraic topology for discrete models of physics
and carried it much farther than Wolfram has, but he's not rich and famous.
So it didn't get in the headlines.
But yeah, Wolfram aside, yeah, certainly that's a good way to put it.
The whole OpenCog framework, you could use it to model biological networks and simulate biology processes.
You could use it to model physics on discrete graph models of physics.
You could use it to do, say, biologically realistic neural networks, for example.
So that's a framework.
What do agents and processes do?
Do they grow the graph?
What kind of computations just to get a sense of this?
So in theory, they could do anything they want to do.
They're just C++ processes.
On the other hand, the computation framework is sort of designed for agents
where most of their processing time is taken up with reads and writes to the atom space.
And so that's a very different processing model than, say, the matrix multiplication-based model,
that underlies most deep learning systems.
So you could create an agent that just factored numbers for a billion years.
It would run within the OpenCog platform, but it would be pointless.
The point of doing OpenCog is because you want to make agents that are cooperating via reading and writing
into this weighted labeled hypergraph.
And that has both cognitive architecture importance, because then this hypergraph is being used as a sort of shared memory
among different cognitive processes, but it also has software and hardware implementation implications.
Because current GPU architectures are not so useful for OpenCog,
whereas a graph chip would be incredibly useful.
And I think Graphcore has those now, but they're not ideally suited for this.
So I think in the next, let's say, three to five years,
we're going to see new chips where a graph is put on the chip,
and the back and forth between multiple processes acting SIMD and MIMD on that graph is going to be fast.
And then that may do for OpenCog type architectures what GPUs did for deep neural architecture.
A small tangent: can you comment on thoughts about neuromorphic computing?
So like hardware implementations of all these different kind of...
Are you excited by that possibility?
I'm excited by graph processors, because I think they can massively speed up OpenCog,
which is a class of architectures that I'm working on.
I think, you know, in principle, neuromorphic computing should be amazing;
I just haven't yet been fully sold on any of the systems that are out there.
Memristors should be amazing, too, right?
So a lot of these things have obvious potential,
but I haven't yet put my hands on a system that seemed to manifest that.
Yeah, memristors should be amazing, but the current systems have not been great.
I mean, for example, if you wanted to make a biologically realistic hardware neural network,
like making a circuit in hardware that emulated the Hodgkin-Huxley equation
or the Izhikevich equation, like differential equations for a biologically realistic neuron,
and putting that in hardware on the chip,
that would seem to make it more feasible to make a large-scale,
truly biologically realistic neural network.
No, what's been done so far is not like that.
So I guess, personally, as a researcher,
I mean, I've done a bunch of work in computational neuroscience,
where I did some work with IARPA in D.C., the Intelligence Advanced Research Projects Activity.
We were looking at how do you make a biologically realistic simulation
of seven different parts of the brain cooperating with each other
using realistic nonlinear dynamical models of neurons,
and how do you get that to simulate what's going on in the mind of a GEOINT intelligence analyst
while they're trying to find terrorists on a map, right?
So if you want to do something like that, having neuromorphic hardware
that really lets you simulate a realistic model of the neuron would be amazing.
But that's sort of with my computational neuroscience hat on, right?
With an AGI hat on, I'm just more interested in these
hypergraph knowledge representation-based architectures
which would benefit more from various types of graph processors,
because the main processing bottleneck is reading, writing to RAM.
It's reading, writing to the graph in RAM.
The main processing bottleneck for this kind of proto-AGI architecture
is not multiplying matrices.
And for that reason, GPUs, which are really good at multiplying matrices,
don't apply as well.
There are frameworks like Gunrock and others
that try to boil down graph processing to matrix operations,
and they're cool, but you're still putting a square peg into a round hole in a certain way.
The same is true, I mean, current quantum machine learning, which is very cool.
It's also all about how to get matrix and vector operations in quantum mechanics,
and I see why that's natural to do.
I mean, quantum mechanics is all unitary matrices and vectors, right?
On the other hand, you could also try to make graph-centric quantum computers,
which I think is where things will go,
and then we can take the OpenCog implementation layer,
implement it in an uncollapsed state inside a quantum computer,
but that may be the singularity squared, right?
I'm not sure we need that to get to human level.
That's already beyond the first singularity, but can we just...
Yeah, let's go back to OpenCog.
Yeah, and the hypergraph and OpenCog.
Yeah, that's the software framework, right?
So the next thing is our cognitive architecture tells us particular algorithms to put there.
Got it.
Can we backtrack on the kind of...
Is this graph, is it in general supposed to be sparse,
and do the operations constantly grow and change the graph?
Yeah, the graph is sparse.
But is it constantly adding links and so on?
Yeah, it is a self-modifying hypergraph.
So the write and read operations you're referring to,
this isn't just a fixed graph to which you changed the way,
it's a constantly growing graph.
Yeah, that's true.
So it is a different model than, say, current deep neural nets,
which have a fixed neural architecture where you're updating the weights.
Although there have been cascade correlation neural net architectures that grow new nodes and links,
but the most common neural architectures now have a fixed neural architecture.
You're updating the weights, and in OpenCog, you can update the weights,
and that certainly happens a lot, but adding new nodes, adding new links,
removing nodes and links is an equally critical part of the system's operations.
Got it.
So now when you start to add these cognitive algorithms on top of this OpenCog architecture,
what does that look like?
Yeah, so within this framework, then, creating a cognitive architecture is basically two things.
It's choosing what type system you want to put on the nodes and links in the hypergraph,
what types of nodes and links you want,
and then it's choosing what collection of agents, what collection of AI algorithms or processes
are going to run to operate on this hypergraph.
And of course, those two decisions are closely connected to each other.
So in terms of the type system, there are some links that are more neural net-like.
They have weights to get updated by Hebbian learning and activation spreads along them.
There are other links that are more logic-like, and nodes that are more logic-like.
So you could have a variable node, and you can have a node representing a universal or existential quantifier,
as in predicate logic or term logic.
So you can have logic-like nodes and links, or you can have neural-like nodes and links.
You can also have procedure-like nodes and links, as in, say, combinator logic
or lambda calculus representing programs.
So you can have nodes and links representing many different types of semantics,
which means you could make a horrible, ugly mess,
or you could make a system where these different types of knowledge
all interpenetrate and synergize with each other beautifully, right?
So the hypergraph can contain programs.
Yeah, it can contain programs, although in the current version,
it is a very inefficient way to guide the execution of programs,
which is one thing that we are aiming to resolve with our rewrite of the system now.
So what do you see as the most beautiful aspects of OpenCog?
For you personally, is there some aspect that captivates your imagination, from its beauty or power?
What fascinates me is finding a common representation that underlies abstract,
declarative knowledge and sensory knowledge and movement knowledge
and procedural knowledge and episodic knowledge.
Finding the right level of representation where all these types of knowledge
are stored in a sort of universal and inter-convertible,
yet practically manipulable way, right?
So to me, that's the core, because once you've done that,
then the different learning algorithms can help each other out.
Like what you want is, if you have a logic engine that helps with declarative knowledge
and you have a deep neural net that gathers perceptual knowledge
and you have, say, an evolutionary learning system that learns procedures,
you want these to not only interact on the level of sharing results
and passing inputs and outputs to each other,
you want the logic engine, when it gets stuck,
to be able to share its intermediate state with the neural net
and with the evolutionary learning algorithm
so that they can help each other out of bottlenecks
and help each other solve combinatorial explosions
by intervening inside each other's cognitive processes.
But that can only be done if the intermediate state of a logic engine,
the evolutionary learning engine, and a deep neural net
are represented in the same form.
And that's what we figured out how to do
by putting the right type system on top of this weighted labeled hypergraph.
So is there, can you maybe elaborate on what are the different characteristics
of a type system that can coexist amongst all these different kinds of knowledge
that needs to be represented?
I mean, like, is it hierarchical?
Just any kind of insights you can give on that kind of type system?
Yeah, yeah, so this gets very nitty-gritty and mathematical, of course.
But one key part is switching from predicate logic to term logic.
What is predicate logic? What is term logic?
So term logic was invented by Aristotle,
or at least that's the oldest record we have of it.
But term logic breaks down basic logic
into basically simple links between nodes,
like an inheritance link between node A and node B.
So in term logic, the basic deduction operation is A implies B,
B implies C, therefore A implies C.
Whereas in predicate logic, the basic operation is modus ponens:
A, and A implies B, therefore B.
So it's a slightly different way of breaking down logic.
But by breaking down logic into term logic,
you get a nice way of breaking logic down into nodes and links.
So your concepts can become nodes, the logical relations become links.
And so then inference is like, so if this link is A implies B,
this link is B implies C, then deduction builds a link A implies C.
And your probabilistic algorithm can assign a certain weight there.
Now, you may also have, like, a Hebbian neural link from A to C,
which is the degree to which A being the focus of attention
should make C the focus of attention, right?
So you could have then a neural link,
and you could have a symbolic logical inheritance link
in your term logic, and they have separate meaning,
but they could be used to guide each other as well.
Like, if there's a large amount of neural weight on the link between A and B,
that may direct your logic engine to think about,
well, what is the relation?
Are they similar?
Is there an inheritance relation?
Are they similar in some context?
On the other hand, if there's a logical relation between A and B,
that may direct your neural component to think,
well, when I'm thinking about A,
should I be directing some attention to B also?
Because there's a logical relation.
So in terms of logic, there's a lot of thought that went into
how do you break down logic relations,
including basic sort of propositional logic relations
that Aristotle's term logic deals with,
and then quantifier logic relations also.
How do you break those down elegantly into a hypergraph?
I mean, you can boil a logic expression down to a graph in many different ways.
Many of them are very ugly, right?
Right.
You can find elegant ways of sort of hierarchically breaking down
complex logic expression into nodes and links,
so that if you have, say, different nodes representing, you know,
Ben, AI, Lex, interview or whatever,
the logic relations between those things are compact
in the node and link representation,
so that when you have a neural net acting on those same nodes and links,
the neural net and the logic engine can sort of interoperate with each other.
And also interpretable by humans.
Is that an important...
That's tough.
In simple cases, it's interpretable by humans,
but honestly, you know, I would say logic systems
give more potential for transparency and comprehensibility
than neural net systems, but you still have to work at it
because, I mean, if I show you a predicate logic proposition
with, like, 500 nested universal and existential quantifiers
and 217 variables, that's no more comprehensible
than the weight matrices of a neural network, right?
So I'd say the logic expressions an AI learns from its experience
are mostly totally opaque to human beings
and maybe even harder to understand than a neural net,
because, I mean, when you have multiple nested quantifier bindings,
it's a very high level of abstraction.
There is a difference, though, in the...
Within logic, it's a little more straightforward
to pose the problem of, like, normalize this
and boil this down to a certain form.
I mean, you can do that in neural nets, too.
Like, you can distill a neural net to a simpler form,
but that's more often done to make a neural net
that'll run on an embedded device or something.
It's harder to distill a net to a comprehensible form
than it is to simplify a logic expression to a comprehensible form,
but it doesn't come for free.
So what's in the AI's mind is incomprehensible to a human
unless you do some special work to make it comprehensible.
So on the procedural side,
there's some different and sort of interesting voodoo there.
I mean, if you're familiar in computer science,
there's something called the Curry-Howard correspondence,
which is a one-to-one mapping between proofs and programs.
So every program can be mapped into a proof.
Every proof can be mapped into a program.
You can model this using category theory
and a bunch of nice math,
but we want to make that practical, right?
So that if you have an executable program
that, like, moves a robot's arm
or figures out in what order to say things in a dialogue,
that's a procedure represented in OpenCog's hypergraph.
But if you want to reason about how to improve that procedure,
you need to map that procedure into logic
using the Curry-Howard isomorphism,
so then the logic engine
can reason about how to improve that procedure
and then map that back into the procedural representation
that is efficient for execution.
So, again, that comes down to
not just can you make your procedure
into a bunch of nodes and links,
because, I mean, that can be done trivially.
A C++ compiler has nodes and links inside it.
Can you boil down your procedure
into a bunch of nodes and links
in a way that's, like, hierarchically decomposed and simple enough...
That it can reason about it?
Yeah, yeah, given the resource constraints at hand,
you can map it back and forth to your term logic,
like, fast enough,
and without having a bloated logic expression, right?
So there's just a lot of...
There's a lot of nitty-gritty particulars there,
but by the same token,
if you ask a chip designer,
like, how do you make the Intel i7 chip so good, right?
There's a long list of technical answers there,
which will take a while to go through, right?
And this has been decades of work.
I mean, the first AI system of this nature I tried to build
was called WebMind in the mid-1990s,
and we had a big graph,
a big graph operating in RAM,
implemented with Java 1.1,
which was a terrible, terrible implementation idea,
and then each node had its own processing.
So, like, there, the core loop
looped through all nodes in the network,
and let each node enact what its little thing was doing.
And we had logic and neural nets in there,
and evolutionary learning,
but we hadn't done enough of the math
to get them to operate together very cleanly,
so it was really...
It was quite a horrible mess.
So as well as shifting and implementation,
where the graph is its own object
and the agents are separately scheduled,
we've also done a lot of work
on how do you represent programs,
how do you represent procedures,
how do you represent genotypes for evolution
in a way that the interoperability
between the different types of learning
associated with these different types of knowledge
actually works, and that's been quite difficult.
It's taken decades, and it's totally off to the side
of what the commercial mainstream of the AI field is doing,
which isn't thinking about representation at all, really,
although you could see, like in the DNC,
they had to think a little bit about
how do you make representation of a map
in this memory matrix work together
with a representation needed for, say, visual pattern recognition
in a hierarchical neural network.
But I would say we have taken that direction
of taking the types of knowledge you need
for different types of learning,
like declarative, procedural, attentional,
and how do you make these types of knowledge
represent in a way that allows cross-learning
across these different types of memory.
We've been prototyping and experimenting with this
within OpenCog and before that WebMind
since the mid-1990s.
Now, disappointingly to all of us,
this has not yet been cashed out in an AGI system, right?
I mean, we've used this system within our consulting business,
so we've built natural language processing
and robot control and financial analysis.
We've built a bunch of sort of vertical market-specific
proprietary AI projects.
They use OpenCog on the back end,
but that's not the AGI goal, right?
It's interesting, but it's not the AGI goal.
So now what we're looking at with our rebuild
of the system...
2.0.
Yeah, we're also calling it true AGI,
so we're not quite sure what the name is yet.
We made a website for 2AGI.io,
but we haven't put anything on there yet,
so we may come up with an even better name.
It's kind of like the real AI starting point
for your AGI goal.
Yeah, but I like true better because true has...
You can be true-hearted, right?
You can be true to your girlfriend, so true has a number.
And it also has logic in it, right?
Because logic is a key point.
I like it, yeah.
So yeah, with the true AGI system,
we're sticking with the same basic architecture,
but we're trying to build on what we've learned.
And one thing we've learned is that we need type checking
among dependent types to be much faster,
and among probabilistic dependent types to be much faster.
So as it is now, you can have complex types
on the nodes and links.
You want to put...
Like, if you want types to be first-class citizens
so that you can have...
The types can be variables,
and then you do type checking among complex,
higher-order types.
You can do that in the system now, but it's very slow.
This is stuff like what's done in cutting-edge programming languages
like Agda or something, these obscure research languages.
On the other hand, we've been doing a lot,
tying together deep neural nets with symbolic learning.
So we did a project for Cisco, for example,
which was on... This was street scene analysis,
but they had deep neural models for a bunch of cameras
watching street scenes,
but they trained a different model for each camera
because they couldn't get the transfer learning to work
between camera A and camera B.
So we took what came out of all the deep neural models
for the different cameras.
We fed it into an OpenCog symbolic representation.
Then we did some pattern mining and some reasoning
on what came out of all the different cameras
within the symbolic graph.
And that worked well for that application.
Hugo Latapie from Cisco gave a talk touching on that
at last year's AGI conference.
It was in Shenzhen.
On the other hand, we learned from there,
it was kind of clunky to get the deep neural models
to work well with the symbolic system
because we were using Torch,
and Torch keeps a sort of computation graph,
but you needed real-time access to that computation graph
within our hypergraph.
We certainly did it.
Alexey Potapov, who leads our St. Petersburg team,
wrote a great paper on cognitive modules in OpenCog,
explaining sort of how do you deal with the Torch compute
graph inside OpenCog.
But in the end, we realized, like,
that just hadn't been one of our design thoughts
when we built OpenCog, right?
So between wanting really fast dependent type checking
and wanting much more efficient interoperation
between the computation graphs of deep neural net frameworks
in OpenCog's hypergraph, and adding on top of that,
wanting to more effectively run an OpenCog hypergraph
distributed across RAM in 10,000 machines,
We're doing dozens of machines now,
but we just didn't architect it
with that sort of modern scalability in mind.
So these performance requirements
are what have driven us to want to re-architect the base,
but the core AGI paradigm doesn't really change.
Like, the mathematics is the same.
We can't scale to the level that we want
in terms of distributed processing
or speed of various kinds of processing
with the current infrastructure
that was built in the phase 2001 to 2008,
which is hardly shocking, right?
Well, I mean, the three things you mentioned
are really interesting.
So what do you think about, in terms of interoperability,
communicating with computational graph of neural networks,
what do you think about the representations
that neural networks form?
They're bad, but there's many ways
that you could deal with that.
So I've been wrestling with this a lot
in some work on unsupervised grammar induction,
and I have a simple paper on that
that I'll give at the next AGI conference,
the online portion of which is next week, actually.
What is grammar induction?
So this isn't AGI either,
but it's sort of on the verge
between narrow AI and AGI or something.
Unsupervised grammar induction is the problem.
Throw your AI system a huge body of text
and have it learn the grammar of the language
that produced that text.
So you're not giving it labeled examples.
So you're not giving it, like, a thousand sentences
where the parses were marked up by graduate students.
So it's just got to infer the grammar from the text.
It's like the Rosetta Stone, but worse, right,
because you only have the one language,
and you have to figure out what is the grammar.
So that's not really AGI,
because the way a human learns language is not that, right?
I mean, we learn from language that's used in context.
So it's a social embodied thing.
We see how a given sentence is grounded in observation.
There's an interactive element, I guess.
Yeah, yeah, yeah.
On the other hand, I'm more interested in that:
in making an AGI system
learn language from its social and embodied experience.
On the other hand, that's also more of a pain to do,
and that would lead us into Hanson Robotics
and their robotics work, which I know we'll talk about
in a few minutes.
But just as an intellectual exercise, as a learning exercise,
trying to learn grammar from a corpus
is very, very interesting, right?
And that's been a field in AI for a long time.
No one can do it very well.
So we've been looking at transformer neural networks
and tree transformers, which are amazing.
These came out of Google Brain, actually.
And actually, on that team was Łukasz Kaiser,
who used to work for me in the period 2005 through 2008 or something.
So it's been fun to see my former AGI employees disperse
and do all these amazing things.
Way too many sucked into Google, actually.
We'll talk about that, too.
Łukasz Kaiser and a bunch of these guys,
they created transformer networks,
that classic paper, Attention Is All You Need,
and all these things following on from that.
So we're looking at transformer networks,
and these are able to...
This is what underlies GPT-2 and GPT-3 and so on,
which are very, very cool and have absolutely no cognitive understanding
of any of the texts they're looking at.
They're very intelligent idiots, right?
Sorry to take a tangent, but I'll bring this back,
but do you think GPT-3 understands language?
No, no, it understands nothing.
It's a complete idiot.
It's a brilliant idiot.
You don't think GPT-20 will understand language?
No, no, no.
Size is not going to buy you understanding.
Any more than a faster car is going to get you to Mars.
It's a completely different kind of thing.
I mean, these networks are very cool.
And as an entrepreneur,
I can see many highly valuable uses for them.
And as an artist, I love them, right?
So I mean, we're using our own neural model,
which is along those lines to control the Philip K. Dick robot now.
And it's amazing to like train a neural model
on the robot Philip K. Dick
and see it come up with like crazed, stoned philosopher pronouncements.
Very much like what Philip K. Dick might have said, right?
So these models are super cool.
And I'm working with Hanson Robotics now
on using a similar but more sophisticated one for Sophia,
which we haven't launched yet.
So I think it's cool.
But it's not understanding.
These are recognizing a large number of shallow patterns.
They're not forming an abstract representation.
And that's the point I was coming to when we're looking at grammar induction.
We tried to mine patterns out of the structure of the transformer network.
And you can, but the patterns aren't what you want.
They're nasty.
So I mean, if you do supervised learning,
if you look at sentences where you know the correct parse of a sentence,
you can learn a matrix that maps between the internal representation
of the transformer and the parse of the sentence.
And so then you can actually train something that will output the sentence parse
from the transformer network's internal state.
And we did this; I think Christopher Manning
and some others have done this also.
But I mean, what you get is that the representation is horribly ugly
and is scattered all over the network
and doesn't look like the rules of grammar
that you know are the right rules of grammar, right?
It's kind of ugly.
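(A minimal sketch of this kind of supervised probe, assuming the Hugging Face transformers library and scikit-learn; the five-word sentences and coarse tags are invented, and a real probe, e.g. Hewitt and Manning's structural probe, trains on full treebanks.)

```python
# A minimal linear probe: map transformer hidden states to syntactic labels.
# Toy data; assumes each word below is a single GPT-2 token, which holds
# for these common words.
import torch
from transformers import GPT2Model, GPT2TokenizerFast
from sklearn.linear_model import LogisticRegression

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def hidden_states(sentence):
    ids = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        return model(**ids).last_hidden_state[0].numpy()  # (tokens, 768)

X = hidden_states("the dog saw the cat")
y = ["DET", "NOUN", "VERB", "DET", "NOUN"]  # one coarse tag per token

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(hidden_states("the cat saw the dog")))
```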
So what we're actually doing is we're using a symbolic grammar learning algorithm,
but we're using the transformer neural network as a sentence probability oracle.
So if you have a rule of grammar
and you aren't sure if it's a correct rule of grammar or not,
you can generate a bunch of sentences using that rule of grammar
and a bunch of sentences violating that rule of grammar,
and you can see whether the transformer model thinks
that the sentences obeying the rule of grammar are more probable
than the sentences disobeying the rule of grammar.
So in that way, you can use the neural model as a sentence probability oracle
to guide a symbolic grammar learning process.
And that seems to work better than trying to milk the grammar out of the neural network,
which doesn't have it in there.
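(A minimal sketch of the "sentence probability oracle" idea, assuming the Hugging Face transformers library; the candidate rule and sentences are invented. The real pipeline would feed scores like this into a symbolic grammar learner.)

```python
# Score a candidate grammar rule by asking a pretrained language model
# whether sentences obeying it are more probable than sentences violating it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # loss is mean next-token cross-entropy, so this is the (approximate)
        # total log-probability of the sentence under the model.
        return -model(ids, labels=ids).loss.item() * ids.shape[1]

obeying   = ["the dog chased the cat", "a child ate the apple"]
violating = ["dog the cat the chased", "child a apple the ate"]

score = sum(map(log_prob, obeying)) - sum(map(log_prob, violating))
print("rule supported" if score > 0 else "rule rejected", round(score, 2))
```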
So I think the thing is these neural nets are not
automatically getting semantically meaningful representations internally, by and large.
So one line of research is to try to get them to do that.
And InfoGAN was trying to do that.
So if you look back like two years ago, there were all these papers on, like, Edward,
this probabilistic programming neural net framework that Google had,
which came out of InfoGAN.
So the idea there was you could train an InfoGAN neural net model,
which is a generative adversarial network, to recognize and generate faces.
And the model would automatically learn a variable for how long the nose is
and automatically learn a variable for how wide the eyes are
or how big the lips are or something, right?
So it automatically learned these variables, which have a semantic meaning.
So that was a rare case where a neural net trained with a fairly standard GAN method
was able to actually learn a semantic representation.
So for many years, many of us tried to take that to the next step
and get a GAN type neural network that would have not just a list of semantic latent variables,
but would have say a Bayes net of semantic latent variables with dependencies between them.
The whole programming framework Edward was made for that.
I mean, no one got it to work, right?
You think it's possible?
I don't know.
It might be that back propagation just won't work for it because the gradients are too screwed up.
Maybe you could get it to work using CMA-ES or some such floating-point evolutionary algorithm.
We tried, we didn't get it to work.
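(For the curious, a minimal sketch of swapping gradients for CMA-ES, assuming the pip-installable cma package; the five-weight toy "network" and data are invented.)

```python
# Fit a tiny model's weights with CMA-ES instead of backpropagation:
# the optimizer only ever sees loss values, never gradients.
import numpy as np
import cma

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # toy inputs
y = (X.sum(axis=1) > 0).astype(float)          # toy targets

def loss(w):
    z = X @ w[:4] + w[4]                       # 4 weights + bias
    p = 1.0 / (1.0 + np.exp(-z))               # sigmoid output
    return float(np.mean((p - y) ** 2))

es = cma.CMAEvolutionStrategy(5 * [0.0], 0.5)  # start at zero, step size 0.5
es.optimize(loss)
print(es.result.xbest, es.result.fbest)
```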
Eventually, we just paused that rather than gave it up.
We paused that and said, well, okay, let's try more innovative ways
to learn what representations are implicit in that network,
without trying to make them grow inside that network.
And I described how we're doing that in language.
You can do similar things in vision, right?
Use it as an oracle.
Yeah, yeah, yeah.
So that's one way: you use a structure learning algorithm, which is symbolic.
And then you use the deep neural net as an oracle to guide the structure learning algorithm.
The other way to do it is like InfoGAN was trying to do,
and try to tweak the neural network to have this symbolic representation inside it.
I tend to think what the brain is doing is more like using the deep neural net type thing as an oracle.
Like I think the visual cortex or the cerebellum are probably learning
a non-semantically meaningful opaque tangled representation.
And then when they interface with the more cognitive parts of the cortex,
the cortex is sort of using those as an oracle and learning the abstract representation.
So if you do sports, say, take for example, serving in tennis, right?
I mean, my tennis serve is okay, not great, but I learned it by trial and error, right?
And I mean, I learned music by trial and error too.
I just sit down and play.
But then if you're a serious athlete, which I'm not,
I mean, then you'll watch videos of yourself serving
and your coach will help you think about what you're doing
and you'll then form a declarative representation.
But your cerebellum maybe didn't have a declarative representation.
Same way with music, like, I will hear something in my head.
I'll sit down and play the thing like I heard it.
And then I will try to study what my fingers did to see, like, what did you just play?
Like, how did you do that, right?
Because if you're composing, you may want to see how you did it
and then declaratively morph that in some way that your fingers wouldn't think of, right?
But the physiological movement may come out of some opaque, like,
cerebellum reinforcement learned thing, right?
And so that's, I think, trying to milk the structure of a neural net by treating it as an oracle,
maybe more like how your declarative mind post-processes
what your visual or motor cortex is doing.
I mean, in vision, it's the same way.
Like, you can recognize beautiful art much better than you can say why
you think that piece of art is beautiful.
But if you're trained as an art critic, you do learn to say why.
And some of it's bullshit, but some of it isn't, right?
Some of it is learning to map sensory knowledge into declarative and linguistic knowledge.
Yet without necessarily making the sensory system itself use a transparent and easily communicable representation.
Yeah, that's fascinating.
To think of neural networks as like dumb question answerers that you can just milk
to build up a knowledge base.
And then it could be multiple networks, I suppose, from different...
Yeah, yeah.
So I think if a group like DeepMind or OpenAI were to build AGI,
and I think DeepMind is like a thousand times more likely, from what I can tell,
because they've hired a lot of people with broad minds in many different approaches
and angles on AGI, whereas OpenAI is also awesome,
but I see them as more of like a pure deep reinforcement learning shop.
Yeah, this time, I got you.
So far.
Yeah, there's a lot of...
You're right.
There's so much interdisciplinary work at DeepMind, like Neuroscience.
And you put that together with Google Brain,
which granted they're not working that closely together now,
but my oldest son, Zarathustra, is doing his PhD in machine learning
applied to automated theorem proving in Prague under Josef Urban.
So the first paper, DeepMath, which applied deep neural nets to guide theorem proving
was out of Google Brain.
I mean, by now, the automated theorem proving community is going way, way,
way beyond anything Google was doing.
But still, yeah.
But anyway, if that community was going to make an AGI,
probably one way they would do it was take 25 different neural modules
architected in different ways, maybe resembling different parts of the brain,
like a basal ganglia model, cerebellum model, a thalamus model,
a few hippocampus models, number of different models representing parts
of the cortex, right?
Take all of these and then wire them together to co-train
and learn them together like that.
That would be an approach to creating an AGI.
One could implement something like that efficiently on top of our TrueAGI,
OpenCog 2.0 system, once it exists.
Although, obviously, Google has their own highly efficient implementation architecture.
So I think that's a decent way to build AGI.
I was very interested in that in the mid-90s.
But I mean, the knowledge about how the brain works sort of pissed me off.
It wasn't there yet.
You know, in the hippocampus, you have these concept neurons,
like the so-called grandmother neuron, which everyone laughed at.
It's actually there.
I have some Lex Fridman neurons that fire differentially when I see you
and not when I see any other person, right?
So how do these Lex Fridman neurons,
how do they coordinate with the distributed representation of Lex Fridman
I have in my cortex, right?
There's some back and forth in cortex and hippocampus
that lets these symbolic representations in hippocampus
correlate and cooperate with the distributed representations in cortex.
This probably has to do with how the brain does its version of abstraction
and quantifier logic, right?
Like, you can have a single neuron in the hippocampus that activates
the whole distributed activation pattern in cortex.
Well, this may be how the brain does, like,
symbolization and abstraction, as in functional programming or something.
But we can't measure it.
Like, we don't have enough electrodes stuck between the cortex
and the hippocampus in any known experiment to measure it.
So I got frustrated with that direction, not because it's impossible,
but because we just don't understand enough yet.
Of course, it's a valid research direction.
You can try to understand more and more.
And we are measuring more and more about what happens in the brain now
than ever before.
So it's quite interesting.
On the other hand, I sort of got more of an engineering mindset about AGI.
I'm like, well, okay, we don't know how the brain works that well.
We don't know how birds fly that well yet either.
We have no idea how a hummingbird flies in terms of the aerodynamics of it.
On the other hand, we know basic principles of like flapping and pushing the air down.
And we know the basic principles of how the different parts of the brain work.
So let's take those basic principles and engineer something that embodies those basic principles.
But, you know, it's well designed for the hardware that we have on hand right now.
Yes, so do you think we can create AGI before we understand how the brain works?
Yeah, I think that's probably what will happen.
And maybe the AGI will help us do better brain imaging that will then let us build artificial humans,
which is very, very interesting to us because we are humans, right?
I mean, building artificial humans is super worthwhile.
I just think it's probably not the shortest path to AGI.
So it's a fascinating idea that we would build AGI to help us understand ourselves.
You know, a lot of people ask me, you know, young people interested in artificial intelligence,
they look at, sort of, you know, doing graduate-level, even undergrad, research.
And they see where the artificial intelligence community stands now.
It's not really AGI type research for the most part.
So the natural question they ask is what advice would you give?
I mean, maybe I could ask if people were interested in working on OpenCog
or in some kind of direct or indirect connection to OpenCog or AGI research, what would you recommend?
OpenCog, first of all, is an open source project.
There's a Google Group discussion list.
There's a GitHub repository.
If anyone's interested in lending a hand with that aspect of AGI, introduce yourself on the OpenCog email list.
And there's a Slack as well.
I mean, we're certainly interested to have, you know, inputs into our redesign process for a new version of OpenCog,
but also we're doing a lot of very interesting research.
I mean, we're working on data analysis for COVID clinical trials.
We're working with Hanson Robotics.
We're doing a lot of cool things with the current version of OpenCog now.
So there's certainly opportunity to jump into OpenCog or various other open source AGI-oriented projects.
So would you say there's like masters and PhD theses in there?
Plenty, yeah, plenty, of course.
I mean, the challenge is to find a supervisor who wants to foster that sort of research,
but it's way easier than it was when I got my PhD, right?
Okay, great. We talked about OpenCog, which is both the software framework
and the actual attempt to build an AGI system.
And then there is this exciting idea of SingularityNet.
So maybe can you say first, what is SingularityNet?
Sure, sure.
SingularityNet is a platform for realizing a decentralized network of artificial intelligences.
So Marvin Minsky, the AI pioneer who I knew a little bit, he had the idea of a society of minds,
like you should achieve an AI, not by writing one algorithm or one program,
but you should put a bunch of different AIs out there,
and the different AIs will interact with each other, each playing their own role,
and then the totality of the society of AIs would be the thing that displayed the human level intelligence.
And when he was alive, I had many debates with Marvin about this idea,
and he really thought the mind was more like a society than I do.
I think you could have a mind that was as disorganized as a human society,
but I think a human-like mind has a bit more central control than that, actually.
I mean, we have this thalamus and the medulla and limbic system.
We have a sort of top-down control system that guides much of what we do, more so than a society does.
So I think he stretched that metaphor a little too far,
but I also think there's something interesting there.
And so in the 90s, when I started my first sort of non-academic AI project, WebMind,
which was an AI startup in New York in the Silicon Valley area in the late 90s,
what I was aiming to do there was make a distributed society of AIs,
the different parts of which would live on different computers all around the world,
and each one would do its own thinking about the data local to it,
but they would all share information with each other and outsource work with each other and cooperate,
and the intelligence would be in the whole collective.
I organized a conference together with Francis Heylighen at the Free University of Brussels in 2001,
which was the Global Brain Zero conference,
and we're planning the next version, the Global Brain One conference,
at the Free University of Brussels for next year, 2021, so 20 years after.
Maybe we can have the next one 10 years after that,
then exponentially faster until the Singularity, right?
The timing is right, yeah.
Yeah, yeah, exactly.
The idea with the Global Brain was, you know,
maybe the AI won't just be in a program on one guy's computer,
but the AI will be, you know, in the internet as a whole,
with the cooperation of different AI modules living in different places.
So one of the issues you face when architecting a system like that is, you know,
how is the whole thing controlled?
Do you have like a centralized control unit that pulls the puppet strings of all the different modules there,
or do you have a fundamentally decentralized network
where the society of AIs is controlled in some democratic and self-organized web,
all the AIs in that society, right?
And Francis and I had different views on many things,
but we both wanted to make like a global society of AI minds
with a decentralized organizational mode.
Now, the main difference was he wanted the individual AIs to be all incredibly simple
and all the intelligence to be on the collective level.
Whereas I thought that was cool,
but I thought a more practical way to do it might be
if some of the agents in the society of minds were fairly generally intelligent on their own.
So like you could have a bunch of open cogs out there
and a bunch of simpler learning systems,
and then these are all cooperating, coordinating together,
sort of like in the brain, okay, the brain as a whole is the general intelligence,
but some parts of the cortex you could say have a fair bit of general intelligence on their own,
whereas say parts of the cerebellum or limbic system have very little general intelligence on their own,
and they're contributing to general intelligence, you know,
by way of their connectivity to other modules.
Do you see instantiations of the same kind of, you know,
maybe different versions of open cog, but also just the same version of open cog
and maybe many instantiations of it as part of...
That's what David Hanson and I want to do with many Sophias and other robots.
Each one has its own individual mind living on a server,
but there's also a collective intelligence infusing them
and a part of the mind living on the edge in each robot, right?
So the thing is, at that time,
as well as WebMind being implemented in Java 1.1 as like a massive distributed system,
you know, blockchain wasn't there yet.
So how to have them do this decentralized control,
you know, we sort of knew it, we knew about distributed systems, we knew about encryption.
So I mean, we had the key principles of what underlies blockchain now,
but I mean, we didn't put it together in the way that's been done now.
So when Vitalik Buterin and colleagues came out with Ethereum blockchain,
you know, many, many years later, like 2013 or something,
then I was like, well, this is interesting.
Like, this Solidity scripting language,
it's kind of dorky in a way,
and I don't see why you need a Turing-complete language for this purpose.
But on the other hand, this is like the first time I could sit down
and start to like script infrastructure for decentralized control of the AIs
in a society of minds in a tractable way.
You could hack the Bitcoin code base, but it's really annoying,
whereas Solidity, Ethereum's scripting language, is just nicer and easier to use.
I'm very annoyed with it by this point, but like Java,
I mean, these languages are amazing when they first come out.
So then I came up with the idea that turned into SingularityNet:
okay, let's make a decentralized agent system where a bunch of different AIs,
wrapped up in, say, different Docker containers or LXC containers,
can each have their own identity on the blockchain.
And the coordination of this community of AIs has no central controller, no dictator, right?
There's no central repository of information.
The coordination of the society of minds is done entirely by the decentralized network
in a decentralized way by the algorithms, right?
Because, you know, the model of Bitcoin is in math we trust, right?
So that's what you need: you need the society of minds to trust only in math,
not in one centralized server.
So the AI systems themselves are outside of the blockchain,
but then the communication between them.
At the moment, yeah, yeah.
I would have loved to put the AI's operations on chain in some sense,
but in Ethereum it's just too slow.
You still can't do it.
So it's the basic communication between AI systems that's distributed.
Basically, in SingularityNet, an AI is just some software process living in a container.
And there's a proxy that lives in that container along with the AI
that handles the interaction with the rest of singularity net.
And then when one AI wants to transact with another one in the network,
they set up a number of channels.
And the setup of those channels uses the Ethereum blockchain.
Once the channels are set up, then data flows along those channels
without having to be on the blockchain.
All that goes on the blockchain is the fact that some data went along that channel.
So you can do...
So there's not a shared knowledge.
Well, the identity of each agent is on the blockchain, on the Ethereum blockchain.
If one agent rates the reputation of another agent, that goes on the blockchain.
And agents can publish what APIs they will fulfill on the blockchain.
But the actual data for AI and the results from AI is not on the blockchain.
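(A hypothetical sketch of that on-chain/off-chain split; this is not the SingularityNet SDK, and every name in it is invented. The point is just that the ledger records identities, ratings, and the fact that a transfer happened, while the payloads move peer to peer.)

```python
# Invented stand-ins: the ledger logs metadata; data flows off-chain.
from dataclasses import dataclass, field

@dataclass
class Ledger:                                   # stand-in for the blockchain
    events: list = field(default_factory=list)
    def record(self, *event): self.events.append(event)

@dataclass
class Channel:
    ledger: Ledger
    a: str
    b: str
    def send(self, payload):
        # Only the fact that data moved goes on-chain, not the data itself.
        self.ledger.record(self.a, self.b, "transfer", len(payload))
        return payload                          # delivered peer to peer

chain = Ledger()
chain.record("agent-A", "registered")           # identity: on-chain
chain.record("agent-B", "rated", "agent-A", 5)  # reputation: on-chain
chain.record("agent-A", "agent-B", "channel-open")
ch = Channel(chain, "agent-A", "agent-B")
ch.send(b"image tensor bytes ...")              # the AI data: off-chain
print(chain.events)
```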
Do you think it could be? Do you think it should be?
In some cases, it should be.
In some cases, maybe it shouldn't be.
I mean, I think that...
So I'll give you an example.
Using Ethereum, you can't do it.
Using now, there's more modern and faster blockchains
where you could start to do that in some cases.
A few years ago, that was less so.
It's a very rapidly evolving ecosystem.
So like one example, maybe you can comment on something I worked a lot on is autonomous vehicles.
You can see each individual vehicle as an AI system.
And you can see vehicles from Tesla, for example, and then Ford and GM
and all these as also, like, larger systems.
I mean, they're each running the same kind of system on each set of vehicles.
They have individual AI systems on individual vehicles,
but it's all different between companies;
the instantiation is the same AI system within the same company.
So you can envision a situation where all of those AI systems are put on SingularityNet.
And how do you see that happening and what would be the benefit?
And could they share data?
I guess one of the biggest things is that the power there is in decentralized control;
the benefit would be...
It's really nice if they can somehow share the knowledge in an open way, if they choose to.
Yeah, those are all quite good points.
So I think the benefit from being on the decentralized network as we envision it
is that we want the AIs and the network to be outsourcing work to each other
and making API calls to each other frequently.
I got you.
The real benefit would be if that AI wanted to outsource some cognitive processing
or data processing or data preprocessing, whatever,
to some other AIs in the network which specialize in something different.
And this really requires a different way of thinking about AI software development, right?
So just like object-oriented programming was different than imperative programming.
And now object-oriented programmers all use these frameworks to do things
rather than just libraries even.
Shifting to agent-based programming, where an AI agent is asking other live,
real-time evolving agents for feedback on what they're doing,
that's a different way of thinking.
I mean, it's not a new one.
There was loads of papers on agent-based programming in the 80s and onward.
But if you're willing to shift to an agent-based model of development,
then you can put less and less in your AI
and rely more and more on interactive calls to other AIs running in the network.
And of course, that's not fully manifested yet
because although we've rolled out a nice working version of the SingularityNet platform,
there's only 5,200 AIs running in there now.
There's not tens of thousands of AIs.
So we don't have the critical mass for the whole society of mind
to be doing what we want to do.
Yeah, the magic really happens when there's just a huge number of agents.
Yeah, yeah, exactly.
In terms of data, we're partnering closely with another blockchain project
called Ocean Protocol.
And Ocean Protocol, that's the project of Trent McConaghy,
who developed BigchainDB, which is a blockchain-based database.
So Ocean Protocol is basically blockchain-based big data,
and aims at making it efficient for different AI processes
or statistical processes or whatever to share large data sets.
One process can send a clone of itself to work on the other guy's data set
and send results back and so forth.
So, you know, you have data lakes;
this is the data ocean, right?
By getting Ocean and SingularityNet to interoperate,
we're aiming to take account of the big data aspect also.
But it's quite challenging,
because to build this whole decentralized blockchain-based infrastructure,
I mean, your competitors are like Google, Microsoft, Alibaba, and Amazon,
which have so much money behind their centralized infrastructures,
plus they're solving simpler algorithmic problems,
because making it centralized in some ways is easier, right?
So there are very major computer science challenges,
and I think what you saw with the whole ICO boom in the blockchain and cryptocurrency world
is a lot of young hackers who are hacking Bitcoin or Ethereum,
and they say, well, why don't we make this decentralized on blockchain,
then after they raise some money through an ICO, they realize how hard it is.
It's like, actually, we're wrestling with incredibly hard computer science
and software engineering and distributed systems problems,
which can be solved, but they're just very difficult to solve.
And in some cases, the individuals who started those projects
were not well-equipped to actually solve the problems that they wanted to solve.
So you think, would you say that's the main bottleneck?
If you look at the future of currency, you know, the question is...
Well, for currency, the main bottleneck is politics.
It's governments and the bands of armed thugs
that will shoot you if you bypass their currency restriction.
That's right.
So your sense is that versus the technical challenges,
because you kind of just suggest that the technical challenges are quite high as well.
I mean, for making a distributed money, you could do that on Algorand right now.
While Ethereum is too slow, there's Algorand
and there's a few other more modern, more scalable blockchains
that would work fine for a decentralized global currency.
So I think there were technical bottlenecks to that two years ago.
And maybe Ethereum 2.0 will be as fast as Algorand.
I don't know. That's not fully written yet, right?
So I think the obstacle to currency being put on the blockchain is that...
Is the other stuff you mentioned.
I mean, currency will be on the blockchain.
It'll just be on the blockchain in a way that enforces centralized control
and government hegemony rather than otherwise.
Like, the e-RMB will probably be the first global,
the first currency on the blockchain.
The e-ruble maybe next.
E-ruble?
Yeah, yeah, yeah.
I mean...
That's hilarious.
Digital currency, you know, makes total sense,
but they would rather do it in the way that Putin and Xi Jinping
have access to the global keys for everything, right?
So, and then the analogy to that in terms of Singularity Net.
I mean, there's echoes.
I think you've mentioned before that Linux gives you hope.
AI is not as heavily regulated as money, right?
Not yet, right?
Not yet.
Oh, that's a lot slipperier than money too, right?
I mean, money is easier to regulate
because it's kind of easier to define.
Whereas AI is...
It's almost everywhere inside everything.
Where's the boundary between AI and software, right?
I mean, if you're going to regulate AI,
short of an IQ test for every hardware device that has a learning algorithm,
you're going to be putting, like, hegemonic regulation on all software.
And I don't rule out that that...
Any adaptive software.
Yeah, but how do you tell if software is adaptive?
And what...
Every software is going to be adaptive, I mean...
Well, maybe they...
Maybe the...
You know, maybe we're living in the golden age of open source
that will not always be open.
Maybe it'll become centralized control of software by government.
It is entirely possible.
And part of what I think we're doing with things like SingularityNet protocol
is creating a toolset that can be used to counteract that sort of thing.
I'd say a similar thing about mesh networking, right?
It plays a minor role now, the ability to access internet,
like, directly phone to phone.
On the other hand, if your government starts trying to control your use of the internet,
suddenly having mesh networking there can be very convenient, right?
And so right now, something like a decentralized blockchain based AGI framework
or narrow AI framework, it's cool.
It's nice to have.
On the other hand, if governments start trying to clamp down on my AI
interoperating with someone's AI in Russia or somewhere, right?
Then suddenly having a decentralized protocol that nobody owns or controls
becomes an extremely valuable part of the toolset.
And, you know, we've put that out there now.
It's not perfect, but it operates.
And, you know, it's pretty blockchain agnostic.
So we're talking to Algorand about making part of SingularityNet run on Algorand.
My good friend Toufi Saliba has a cool blockchain project called TODA,
which is a blockchain without a distributed ledger.
It's like a whole other architecture.
So there's a lot of more advanced things you can do in the blockchain world.
SingularityNet could be ported to a whole bunch of...
It could be made multi-chain, ported to a whole bunch of different blockchains.
And there's a lot of potential and a lot of importance to putting this kind of toolset out there.
If you compare the OpenCog, what you could see is OpenCog allows tight integration
of a few AI algorithms that share the same knowledge store in real-time, in RAM, right?
SingularityNet allows loose integration of multiple different AIs.
They can share knowledge, but they're mostly not going to be sharing knowledge
in RAM on the same machine.
And I think what we're going to have is a network of networks, right?
You have the knowledge graph inside the OpenCog system,
and then you have a network of machines inside a distributed OpenCog mind.
But then that OpenCog will interface with other AIs doing deep neural nets
or custom biology data analysis or whatever they're doing in SingularityNet,
which is a looser integration of different AIs, some of which may be their own networks, right?
And I think at a very loose analogy, you could see that in the human body.
Like the brain has regions like cortex or hippocampus,
which tightly interconnect, like, cortical columns within the cortex, for example.
Then there's looser connection in the different lobes of the brain,
and then the brain interconnects with the endocrine system in different parts of the body,
even more loosely.
Then your body interacts even more loosely with the other people that you talk to.
So you often have networks within networks within networks
with progressively looser coupling as you get higher up in that hierarchy.
I mean, you have that in biology, you have that in the internet as just networking medium.
And I think that's what we're going to have in the network of software processes leading to AGI.
That's a beautiful way to see the world.
Again, the same similar question as with OpenCog.
If somebody wanted to build an AI system and plug into the SingularityNet,
what would you recommend?
Yeah, so that's much easier.
I mean, OpenCog is still a research system,
so it takes some expertise and sometimes we have tutorials,
but it's somewhat cognitively labor intensive to get up to speed on OpenCog.
I mean, what's one of the things we hope to change with the true AGI OpenCog 2.0 version
is just make the learning curve more similar to TensorFlow or Torch or something.
Right now, OpenCog is amazingly powerful, but not simple to use.
On the other hand, SingularityNet, as an open platform,
was developed a little more with usability in mind.
The blockchain is still kind of a pain.
If you're a command line guy, there's a command line interface.
It's quite easy to take any AI that has an API and lives in a Docker container
and put it online anywhere, and then it joins the global SingularityNet.
Anyone who puts a request for services out into the SingularityNet,
the peer-to-peer discovery mechanism will find your AI,
and if it does what was asked,
it can then start a conversation with your AI about whether it wants to ask your AI
to do something for it, how much it would cost, and so on.
That's fairly simple.
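(A generic sketch of the kind of API-bearing service you would wrap in a Docker container, assuming Flask; this is not the actual SingularityNet integration, which has its own daemon and service conventions, and the endpoint and toy model here are invented.)

```python
# A trivial AI service behind an HTTP API, ready to containerize.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/invoke", methods=["POST"])
def invoke():
    text = request.get_json().get("text", "")
    # Stand-in for a real model call.
    label = "positive" if "good" in text else "neutral"
    return jsonify({"sentiment": label})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=7000)  # port choice is arbitrary
```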
If you wrote an AI and want it listed on official SingularityNet marketplace,
which is on our website, then we have a publisher portal,
and then there's a KYC process to go through,
because then we have some legal liability for what goes on that website.
In a way, that's been an education too.
There's two layers.
There's the open decentralized protocol.
There's the market.
Anyone can use the open decentralized protocol.
Say some developers from Iran, and there are brilliant AI guys
at the University of Isfahan and in Tehran,
they can put their stuff on SingularityNet protocol,
and just like they can put something on the internet.
I don't control it.
But if we're going to list something on the SingularityNet marketplace
and put a little picture and a link to it,
then if I put some Iranian AI genius's code on there,
then Donald Trump can send a bunch of jack-booted thugs to my house
to arrest me for doing business with Iran.
We already see in some ways the value of having a decentralized protocol,
because what I hope is that someone in Iran will put online
an Iranian SingularityNet marketplace,
which you can pay in the cryptographic token, which is not owned by any country,
and then if you're in Congo or somewhere that doesn't have any problem with Iran,
you can subcontract AI services that you find on that marketplace,
even though U.S. citizens can't, by U.S. law.
Right now, that's kind of a minor point.
As you alluded, if regulations go in the wrong direction,
it could become more of a major point.
But I think it also is the case that having these workarounds to regulations in place
is a defense mechanism against those regulations being put into place.
You can see that in the music industry.
Napster just happened and BitTorrent just happened,
and now most people in my kids' generation,
they're baffled by the idea of paying for music.
My dad pays for music.
Because these decentralized mechanisms happened,
and then the regulations followed.
The regulations would be very different if they'd been put into place
before there was Napster and BitTorrent and so forth.
In the same way, we got to put AI out there in a decentralized vein,
and big data out there in a decentralized vein now
so that the most advanced AI in the world is fundamentally decentralized.
And if that's the case, that's just the reality the regulators have to deal with.
And then as in the music case, they're going to come up with regulations
that sort of work with the decentralized reality.
Beautiful. You were the chief scientist of Hanson Robotics.
You're still involved with Hanson Robotics,
doing a lot of really interesting stuff there.
This is for people who don't know, the company that created Sophia, the robot.
Can you tell me who Sophia is?
I'd rather start by telling you who David Hansen is.
David is the brilliant mind behind the Sophia robot,
and so far he remains more interesting than his creation,
although she may be improving faster than he does, actually.
That's a good point.
I met David maybe 2007 or something at some futurist conference.
We were both speaking at that.
And I could see we had a great deal in common.
I mean, we were both kind of crazy,
but we also both had a passion for AGI and the singularity.
And we were both huge fans of the work of Philip K. Dick, the science fiction writer.
And I wanted to create benevolent AGI that would create massively better life
for all humans and all sentient beings, including animals, plants, and superhuman beings.
And David, he wanted exactly the same thing,
but he had a different idea of how to do it.
He wanted to get computational compassion.
He wanted to get machines that would love people and empathize with people.
And he thought the way to do that was to make a machine that could look people,
eye to eye, face to face, look at people, and make people love the machine
and the machine loves the people back.
So I thought that was a very different way of looking at it,
because I'm very math-oriented and I'm just thinking like,
what is the abstract cognitive algorithm that will let the system internalize
the complex patterns of human values, blah, blah, blah,
whereas he's like, look you in the face and the eye and love you.
So we hit it off quite well and we talked to each other off and on.
Then I moved to Hong Kong in 2011.
So I've been living all over the place.
I've been in Australia and New Zealand in my academic career then in Las Vegas for a while.
I was in New York in the late 90s starting my entrepreneurial career.
I was in D.C. for nine years doing a bunch of U.S. government consulting stuff.
Then moved to Hong Kong in 2011 mostly because I met a Chinese girl who I fell in love with
and we got married.
She's actually not from Hong Kong.
She's from mainland China, but we converged together in Hong Kong.
Still married now.
I have a two-year-old baby.
So went to Hong Kong to see about a girl, I guess.
Yeah, pretty much, yeah.
On the other hand, I started doing some cool research there with Gino Yu at Hong Kong Polytechnic University.
I got involved with a project called Aidyia, using machine learning for stock and futures prediction,
which was quite interesting.
I also got to know something about the consumer electronics and hardware manufacture ecosystem
in Shenzhen across the border, which is like the only place in the world that makes sense to make
complex consumer electronics at large scale and low cost.
It's astounding the hardware ecosystem that you have in South China.
Like, people here cannot imagine what it's like.
So David was starting to explore that also.
I invited him to Hong Kong to give a talk at Hong Kong PolyU.
And I introduced him in Hong Kong to some investors who were interested in his robots.
And he didn't have Sophia then.
He had a robot of Philip K. Dick, our favorite science fiction writer.
He had a robot Einstein.
He had some little toy robots that looked like his son, Zeno.
So through the investors I connected him to, he managed to get some funding to basically port Hanson Robotics to Hong Kong.
And when he first moved to Hong Kong, I was working on AGI research and also on this machine learning trading project.
So I didn't get that tightly involved with Hanson Robotics.
But as I hung out with David more and more, as we were both there in the same place, I started to get...
I started to think about what you could do to make his robots smarter than they were.
And so we started working together.
And for a few years I was chief scientist and head of software at Hanson Robotics.
Then when I got deeply into the blockchain side of things, I stepped back from that
and co-founded SingularityNet.
David Hanson was also one of the co-founders of SingularityNet.
So part of our goal there had been to make the blockchain-based like cloud mind platform for Sophia and the other...
Sophia would be just one of the robots in this SingularityNet.
Yeah, exactly.
So many copies of the Sophia robot would be among the user interfaces to the globally distributed SingularityNet cloud mind.
And David and I talked about that for quite a while before co-founding SingularityNet.
By the way, in his vision and your vision, was Sophia tightly coupled to a particular AI system?
Or was the idea that you could just keep plugging in different AI systems within the head of a...
I think David's view was always that Sophia would be a platform, much like the Pepper robot is a platform from SoftBank.
It should be a platform with a set of nicely designed APIs that anyone can use to experiment with their different AI algorithms on that platform.
And SingularityNet, of course, fits right into that, right?
Because SingularityNet, it's an API marketplace, so anyone can put their AI on there.
OpenCog is a little bit different.
I mean, David likes it, but I'd say it's my thing, it's not his.
David has a little more passion for biologically-based approaches to AI than I do, which makes sense.
I mean, he's really into human physiology and biology.
He's a character sculptor, right?
Yeah.
He's interested in... but he also worked a lot with rule-based and logic-based AI systems, too.
So yeah, he's interested in not just Sophia, but all the Hanson robots as a powerful social and emotional robotics platform.
And what I saw in Sophia was a way to get AI algorithms out there in front of a whole lot of different people in an emotionally compelling way.
And part of my thought was really kind of abstract, connected to AGI ethics.
And many people are concerned AGI is going to enslave everybody or turn everybody into computronium to make extra hard drives for their cognitive engine or whatever.
And emotionally, I'm not driven to that sort of paranoia.
I'm really just an optimist by nature.
But intellectually, I have to assign a non-zero probability to those sorts of nasty outcomes.
Because if you're making something 10 times as smart as you, how can you know what it's going to do?
There's an irreducible uncertainty there, just as my dog can't predict what I'm going to do tomorrow.
So it seemed to me that based on our current state of knowledge, the best way to bias the AGI's we create toward benevolence would be to infuse them with love and compassion the way that we do our own children.
So you want to interact with AGI's in the context of doing compassionate, loving, and beneficial things. And in that way, as your children will learn by doing compassionate, beneficial, loving things alongside you,
in that way, the AI will learn in practice what it means to be compassionate, beneficial, and loving.
It will get a sort of ingrained, intuitive sense of this, which it can then abstract in its own way as it gets more and more intelligent.
Now, David saw this the same way. That's why he came up with the name Sophia, which means wisdom.
So it seemed to me making these beautiful, loving robots to be rolled out for beneficial applications would be the perfect way to roll out early-stage AGI systems so they can learn from people,
and not just learn factual knowledge, but learn human values and ethics from people while being their home service robots, their education assistants, their nursing robots.
So that was the grand vision. Now, if you've ever worked with robots, the reality is quite different, right?
Like the first principle is the robot is always broken.
I mean, I worked with robots in the 90s a bunch, when you had to solder them together yourself, and I'd put neural nets doing reinforcement learning on, like, overturned-salad-bowl-type robots in the 90s when I was a professor.
Things, of course, advanced a lot, but the principle still holds.
Yeah, the principle, the robot's always broken, still holds.
Yeah, so faced with the reality of making Sophia do stuff, many of my Robo-AGI aspirations were temporarily cast aside.
And I mean, there's just a practical problem of making this robot interact in a meaningful way because like, you know, you put nice computer vision on there, but there's always glare.
Or you have a dialogue system, but at the time I was there, like, no speech-to-text algorithm could deal with Hong Kong people's English accents.
So the speech-to-text was always bad, so the robot always sounded stupid because it wasn't getting the right text, right?
So I started to view that really as what in software engineering you call a walking skeleton, which is maybe the wrong metaphor to use for Sophia, or maybe the right one.
I mean, what a walking skeleton is in software development is, if you're building a complex system, how do you get started?
Well, one way is to first build part one well, then build part two well, then build part three well, and so on.
Another way is you make like a simple version of the whole system and put something in the place of every part the whole system will need,
so that you have a whole system that does something, and then you work on improving each part in the context of that whole integrated system.
So that's what we did on a software level in Sophia.
We made like a walking skeleton software system where there's something that sees, there's something that hears, there's something that moves, there's something that remembers, there's something that learns.
You put a simple version of each thing in there, and you connect them all together so that the system will do its thing.
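(A toy sketch of that walking-skeleton idea: a trivial placeholder for every subsystem, wired into one loop, so each piece can later be upgraded inside an already-working whole. Every stub here is invented.)

```python
# Placeholder subsystems: each does *something*, none does it well yet.
def see():            return "person entered the room"   # vision stub
def hear():           return "hello"                     # speech stub
def respond(event):   return f"I noticed: {event}"       # dialogue stub
def move(action):     print("actuating:", action)        # motion stub

memory = []                                              # memory stub

# One pass through the whole skeleton: the integrated system "does its
# thing", and any single stub can now be replaced with a real component.
for event in (see(), hear()):
    memory.append(event)                                 # learn/remember
    move(respond(event))
print(memory)
```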
So there's a lot of AI in there.
There's not any AGI in there. I mean, there's computer vision to recognize people's faces, recognize when someone comes in the room and leaves,
try to recognize whether two people are together or not.
The dialogue system, it's a mix of like hand-coded rules with deep neural nets that come up with their own responses.
And there's some attempt to have a narrative structure and sort of try to pull the conversation into something with the beginning, middle, and end, and the sort of story arc.
I mean, like, if you look at the Loebner Prize and the systems that do best on the Turing test currently, they're heavily rule-based,
because, like you had said, narrative structure to create compelling conversations,
neural networks currently cannot do that well, even with Google Meena. When you actually look at full-scale conversations, it's just not...
Yeah, this is the thing. I've actually been running an experiment the last couple weeks, taking Sophia's chatbot and then Facebook's Transformer chatbot,
for which they opened the model. We've had them chatting to each other for a number of weeks on a server.
That's funny.
We're generating training data of what Sophia says in a wide variety of conversations.
But we can see, compared to Sophia's current chatbot, the Facebook deep neural chatbot comes up with a wider variety of fluent-sounding sentences.
On the other hand, it rambles like mad. The Sophia chatbot, it's a little more repetitive in the sentence structures it uses.
On the other hand, it's able to keep like a conversation arc over a much longer period.
Now, you can probably surmount that using Reformer and using various other deep neural architectures to improve the way these Transformer models are trained,
but in the end, neither one of them really understands what's going on.
That's the challenge I had with Sophia is if I were doing a robotics project aimed at AGI,
I would want to make like a robot toddler that was just learning about what it was seeing,
because then the language is grounded in the experience of the robot.
But what Sophia needs to do to be Sophia is talk about sports or the weather or robotics or the conference she's talking at.
She needs to be fluent talking about any damn thing in the world, and she doesn't have grounding for all those things.
So there's just like, I mean, Google Meena and Facebook's chatbot don't have grounding for what they're talking about either.
So in a way, the need to speak fluently about things where there's no non-linguistic grounding pushes what you can do for Sophia,
in the short term, a bit away from AGI.
I mean, it pushes you toward the IBM Watson situation, where you basically have to do heuristic and hard-coded stuff and rule-based stuff.
I have to ask you about this.
So in part, Sophia is an art creation because it's beautiful.
She's beautiful because she inspires through our human nature of anthropomorphizing things.
We immediately see an intelligent being there.
Because David is a great sculptor.
That's right. So in fact, if Sophia just had nothing inside her head, said nothing,
if she just sat there, we'd already ascribe some intelligence to her.
There's a long selfie line in front of her after every talk.
That's right.
So it captivated the imagination of many people.
I was going to say the world, but yeah, I mean a lot of people.
Billions of people, which is amazing.
It's amazing, right.
Now, of course, many people have ascribed essentially AGI-type capabilities to Sophia when they see her.
And of course, friendly French folk like Yann LeCun
immediately see that, of the people from the AI community, and get really frustrated because...
It's understandable.
And then they criticize people like you who sit back and don't say anything about...
Basically allow the imagination of the world, allow the world to continue being captivated.
So what's your sense of that kind of annoyance that the AI community has?
I think there's several parts to my reaction there.
First of all, if I weren't involved with Hanson Robotics and didn't know David Hanson personally,
I probably would have been very annoyed initially at Sophia as well.
I mean, I can understand the reaction.
I would have been like, wait, all these stupid people out there think this is an AGI.
But it's not an AGI, and they're tricking people into thinking this very cool robot is an AGI.
And those of us trying to raise funding to build AGI,
people will think it's already there and already works.
On the other hand, I think even if I weren't directly involved with it,
once I dug a little deeper into David and the robot and the intentions behind it,
I think I would have stopped being pissed off.
Folks like Yann LeCun have remained pissed off after their initial reaction.
That's his thing.
That in particular struck me as somewhat ironic, because Yann LeCun is working for Facebook,
which is using machine learning to program the brains of the people in the world
toward vapid consumerism and political extremism.
So if your ethics allows you to use machine learning in such a blatantly destructive way,
why would your ethics not allow you to use machine learning to make a lovable theatrical robot
that draws some foolish people into its theatrical illusion?
If the pushback had come from Yoshua Bengio, I would have felt much more humbled by it
because he's not using AI for blatant evil.
On the other hand, he also is a super nice guy and doesn't bother to go out there
trashing other people's work for no good reason.
Shots fired, but I get you.
If you're going to ask, I'm going to answer.
For sure. I think we'll go back and forth. I'll talk to Jan again.
I would add on this though. David Hansen is an artist and he often speaks off the cuff
and I have not agreed with everything that David has said or done regarding Sophia.
David also has not agreed with everything David has said or done about Sophia.
That's an important point.
David is an artistic wild man and that's part of his charm. That's part of his genius.
Certainly, there have been conversations within Hansen Robotics in between me and David
where I was like, let's be more open about how this thing is working.
I did have some influence in nudging Hansen Robotics to be more open
about how Sophia was working. David wasn't especially opposed to this.
He was actually quite right about it. What he said was, you can tell people exactly how it's working
and they won't care. They want to be drawn into the illusion.
He was 100% correct. I'll tell you what, this wasn't Sophia.
This was Philip K. Dick, but we did some interactions between humans
and Philip K. Dick Robot in Austin, Texas a few years back.
In this case, the Philip K. Dick was just teleoperated by another human in the other room.
During the conversations, we didn't tell people the robot was teleoperated.
We just said, here, have a conversation with Phil Dick. We're going to film you.
They had a great conversation with Philip K. Dick, teleoperated by my friend Stephan Bugaj.
After the conversation, we brought the people into the back room to see Stephan,
who was controlling the Philip K. Dick robot, but they didn't believe it.
These people were like, well, yeah, but I know I was talking to Phil.
Maybe Stephan was typing, but the spirit of Phil was animating his mind while he was typing.
Even though they knew it was a human in the loop, even seeing the guy there,
they still believed that was Phil they were talking to.
A small part of me believes that they were right, actually, because our understanding…
Well, we don't understand the universe.
That's the thing.
There is a cosmic mind field that we're all embedded in
that yields many strange synchronicities in the world,
which is a topic we don't have time to go into too much here.
I mean, there's something to this where our imagination about Sophia
and people like Yann LeCun being frustrated about it is all part of this beautiful dance
of creating artificial intelligence that's almost essential.
You see with Boston Dynamics, whom I'm a huge fan of as well,
these robots are very far from intelligent.
I played with their last one, actually.
Yeah, very cool.
It reacts quite in a fluid and flexible way.
But we immediately ascribe the kind of intelligence.
We immediately ascribe AGI to them.
Yeah, if you kick it and it falls down and goes out, you feel bad.
You can't help it.
I mean, that's going to be part of our journey in creating intelligent systems
more and more and more and more.
Like, as Sophia starts out with a walking skeleton,
as you add more and more intelligence,
I mean, we're going to have to deal with this kind of idea.
Absolutely.
And about Sophia, I would say, first of all, I have nothing against Yann LeCun.
No, no, this is fine.
This is all for fun.
He's a nice guy.
If he wants to play the media banter game, I'm happy to play.
He's a good researcher and a good human being.
I'd happily work with the guy.
The other thing I was going to say is, I have been explicit about how Sophia works.
I've posted online, in H+ Magazine, an online webzine.
I mean, I posted a moderately detailed article explaining like,
there are three software systems we've used inside Sophia.
There's a timeline editor, which is like a rule-based authoring system,
where she's really just being an outlet for what a human scripted.
There's a chatbot, which has some rule-based and some neural aspects.
And then sometimes we've used OpenCog behind Sophia,
where there's more learning and reasoning.
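(A hypothetical sketch of dispatching among those three subsystems; the routing policy and all names are invented for illustration, not how Hanson Robotics' stack actually decides.)

```python
# Three stand-in subsystems and a toy router between them.
def timeline(utterance):  return "scripted line from the authored timeline"
def chatbot(utterance):   return "rule/neural chatbot reply"
def opencog(utterance):   return "reply derived by learning and reasoning"

def respond(utterance, scripted_event=False, needs_reasoning=False):
    if scripted_event:    return timeline(utterance)  # human-authored show
    if needs_reasoning:   return opencog(utterance)   # slower, deliberative
    return chatbot(utterance)                         # default small talk

print(respond("hello"))
print(respond("why do you exist?", needs_reasoning=True))
```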
And, you know, the funny thing is, I can't always tell which system is operating there, right?
I mean, whether she's really learning or thinking,
or just appears to be: over half an hour, I could tell,
but not over, like, three or four minutes of interaction.
So even having three systems, that's already sufficiently complex that you can't really tell, right?
Yeah, the thing is, even if you get up on stage and tell people how Sophia's working,
and then they talk to her, they still attribute more agency and consciousness to her
than is really there.
So I think there's a couple levels of ethical issue there.
One issue is, should you be transparent about how Sophia is working?
And I think you should.
And I think we have been.
I mean, there's articles online.
There's a TV special that goes through me explaining the three subsystems behind Sophia.
So the way Sophia works is out there much more clearly than how, say, Facebook's AI works, right?
I mean, we've been fairly explicit about it.
The other is, given that telling people how it works doesn't cause them to not attribute too much intelligence, agency to it anyway,
then should you keep fooling them when they want to be fooled?
And I mean, the whole media industry is based on fooling people the way they want to be fooled.
And we are fooling people 100% toward a good end.
I mean, we are playing on people's sense of empathy and compassion so that we can give them a good user experience with helpful robots
and so that we can fill the AI's mind with love and compassion.
So I've been talking a lot with Hanson Robotics lately about collaborations in the area of medical robotics.
And we haven't quite pulled the trigger on a project in that domain yet, but we may well do so quite soon.
So we've been talking a lot about robots can help with elder care, robots can help with kids,
David's done a lot of things with autism therapy and robots before.
In the COVID era, having a robot that can be a nursing assistant in various senses can be quite valuable.
The robots don't spread infection and they can also deliver more attention than human nurses can give, right?
So if you have a robot that's helping a patient with COVID, if that patient attributes more understanding and compassion
and agency to that robot than it really has because it looks like a human, I mean, is that really bad?
I mean, we can tell them it doesn't fully understand you and they don't care because they're lying there with a fever and they're sick.
But they'll react better to that robot with its loving, warm facial expression than they would to a pepper robot or a metallic looking robot.
So it's really, it's about how you use it, right?
If you made a human-looking door-to-door sales robot that used its human-looking appearance to scam people out of their money,
then you're using that connection in a bad way, but you could also use it in a good way.
But then that's the same problem with every technology, right?
Beautifully put.
So like you said, we're living in the era of the COVID.
This is 2020, one of the craziest years in recent history.
So if we zoom out and look at this pandemic, the coronavirus pandemic,
maybe let me ask you this kind of thing in viruses in general.
When you look at viruses, do you see them as a kind of intelligence system?
I think the concept of intelligence is not that natural of a concept in the end.
I mean, I think human minds and bodies are a kind of complex, self-organizing adaptive system.
And viruses certainly are that, right?
They're a very complex, self-organizing adaptive system.
If you want to look at intelligence as Marcus Hutter defines it,
as sort of optimizing computable reward functions over computable environments,
for sure viruses are doing that, right?
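(For reference, the Legg-Hutter "universal intelligence" measure Goertzel is alluding to is usually written roughly as follows; the exact notation varies across their papers.)

```latex
% Legg-Hutter universal intelligence of an agent (policy) \pi:
% a reward-weighted sum over all computable environments, where simpler
% environments (lower Kolmogorov complexity K) count for more.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Here E is the class of computable environments, K(mu) is the Kolmogorov complexity of environment mu, and V is the expected cumulative reward the agent earns in mu. Under this definition, anything that reliably accumulates reward across environments, a virus included, scores above zero.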
And I mean, in doing so, they're causing some harm to us.
So the human immune system is a very complex, self-organizing adaptive system,
which has a lot of intelligence to it.
And viruses are also adapting and dividing into new mutant strains and so forth.
And ultimately, the solution is going to be nanotechnology, right?
I mean, the solution is going to be making little nanobots that fight the viruses.
Well, people will use them to make nastier viruses,
but hopefully we can also use them to detect, combat, and kill the viruses.
But I think now we're stuck with the biological mechanisms to combat these viruses.
Yeah. AGI is not yet mature enough to use against COVID,
but we've been using machine learning and also some machine reasoning in OpenCog
to help some doctors do personalized medicine against COVID.
So the problem there is, given the person's genomics
and given their clinical medical indicators,
how do you figure out which combination of antivirals
is going to be most effective against COVID for that person?
So that's something where machine learning is interesting,
but also the abstraction we get in OpenCog
with machine reasoning is interesting,
because it can help with transfer learning
when you have not that many different cases to study
and qualitative differences between different strains of a virus
or people of different ages who may have COVID.
So there's a lot of different disparate data to work with
and small data sets and somehow integrating them.
You know, this is one of the shameful things,
it's very hard to get that data.
So, I mean, we're working with a couple groups doing clinical trials
and they're sharing data with us like under non-disclosure,
but what should be the case is like every COVID clinical trial
should be putting data online somewhere,
like suitably encrypted to protect patient privacy
so that anyone with the AI algorithms should be able to help analyze it
and any biologist should be able to analyze it by hand
to understand what they can, right?
Instead, that data is like siloed inside whatever hospital
is running the clinical trial,
which is completely asinine and ridiculous.
Why does the world work that way?
I mean, we could all analyze why, but it's insane that it does.
You look at this hydroxychloroquine, right?
The data behind all those clinical trials was reported by Surgisphere,
some little company no one ever heard of.
And everyone paid attention to this.
So they were doing more clinical trials based on that.
Then they stopped doing clinical trials based on that.
Then they started again.
And why isn't that data just out there so everyone can analyze
and see what's going on, right?
Do you have hope that we'll move,
that data will be out there eventually for future pandemics?
I mean, do you have hope that our society will move in the direction of...
Not in the immediate future because the U.S. and China frictions are getting very high.
So it's hard to see U.S. and China as moving in the direction
of openly sharing data with each other, right?
There's some sharing of data,
but different groups are keeping their data private
until they've milked the best results from it and then they share it, right?
So we're working with some data that we've managed to get our hands on,
which we're doing to do good for the world.
That's a very cool playground for like putting deep neural nets and open cog together.
So we have a bio AtomSpace full of all sorts of knowledge
from many different biology experiments about human longevity
and from biology knowledge bases online.
And we can do like graph to vector type embeddings
where we take nodes from the hypergraph,
embed them into vectors,
which can then feed into neural nets for different types of analysis.
We were doing this in the context of a project called Rejuve
that we spun off from SingularityNet to do longevity analytics,
to understand why some people live to 105 years or over and other people don't.
And then we had this spinoff Singularity Studio
where we're working with some healthcare companies on data analytics.
So this bio AtomSpace we built for these more commercial
and longevity data analysis purposes, we're repurposing it
and feeding COVID data into the same bio AtomSpace
and playing around with like graph embeddings
from that graph into neural nets for bioinformatics.
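(As a rough illustration of the graph-to-vector pipeline being described here, a minimal random-walk embedding in Python might look like the sketch below. This is not the actual OpenCog or SingularityNet code; the graph, node names, and labels are invented for illustration.)

```python
# Toy sketch of a graph-to-vector pipeline: random walks over a small
# knowledge graph, Word2Vec-style embedding of the walk "sentences",
# then a downstream classifier on the node vectors.
import random

import networkx as nx
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# A tiny stand-in for a bio knowledge graph (genes, drugs, phenotypes).
G = nx.Graph()
G.add_edges_from([
    ("geneA", "pathway1"), ("geneB", "pathway1"),
    ("geneC", "pathway2"), ("drugX", "pathway1"),
    ("drugY", "pathway2"), ("drugX", "phenotype_recovery"),
    ("geneA", "phenotype_recovery"), ("geneC", "phenotype_severe"),
])

def random_walks(graph, walks_per_node=20, walk_len=8):
    """Uniform random walks; each walk becomes a 'sentence' of node names."""
    walks = []
    for start in graph.nodes:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                walk.append(random.choice(list(graph.neighbors(walk[-1]))))
            walks.append(walk)
    return walks

# Embed nodes: Word2Vec treats co-occurrence on walks as context.
model = Word2Vec(random_walks(G), vector_size=16, window=3, min_count=1, epochs=20)

# Feed the embeddings into an ordinary classifier (labels are fabricated).
train_nodes = ["geneA", "geneB", "geneC", "drugX", "drugY"]
labels = [1, 1, 0, 1, 0]  # e.g., "associated with recovery" vs. not
clf = LogisticRegression().fit([model.wv[n] for n in train_nodes], labels)
print(clf.predict([model.wv["pathway1"]]))
```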
So it's both being a cool testing ground
for some of our bio AI learning and reasoning.
And it seems we're able to discover things that people weren't seeing otherwise.
Because the thing in this case is for each combination of antivirals
you may have only a few patients who've tried that combination
and those few patients may have their particular characteristics.
Like this combination of three was tried only on people aged 80 or over.
This other combination of three,
which has an overlap with the first combination,
was tried more on young people.
So how do you combine those different pieces of data?
It's a very dodgy transfer learning problem,
which is the kind of thing that the probabilistic reasoning algorithms
we have inside OpenCog are better at than deep neural networks.
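(To make the sparsity problem concrete, here is a toy back-of-the-envelope in Python, not OpenCog's actual probabilistic logic: with a few patients per combination, raw per-combination success rates are nearly meaningless, and one simple remedy is to shrink them toward a rate pooled across cohorts. Every combination, count, and outcome below is invented.)

```python
# Toy illustration of tiny, overlapping cohorts: per-combination success
# rates from a few patients each, shrunk toward a rate pooled across all
# cohorts (a crude stand-in for probabilistic transfer).
cohorts = {
    ("drugA", "drugB", "drugC"): (3, 2),  # (patients, recoveries), age 80+
    ("drugB", "drugC", "drugD"): (5, 4),  # overlapping combo, younger group
    ("drugA", "drugD"): (2, 1),
}

total_n = sum(n for n, _ in cohorts.values())
total_k = sum(k for _, k in cohorts.values())
pooled_rate = total_k / total_n   # shared rate across all cohorts
prior_strength = 4.0              # pseudo-counts pulling tiny cohorts inward

for combo, (n, k) in cohorts.items():
    raw = k / n
    shrunk = (k + prior_strength * pooled_rate) / (n + prior_strength)
    print(f"{'+'.join(combo)}: raw={raw:.2f}, shrunk={shrunk:.2f} (n={n})")
```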
On the other hand, you have gene expression data,
where you have 25,000 genes and the expression level of each gene
in the peripheral blood of each person.
So that sort of data, either deep neural nets
or tools like XGBoost or CatBoost, these decision forests,
are better at dealing with it than OpenCog.
Because it's just these huge, huge messy floating point vectors
that are annoying for a logic engine to deal with,
but are perfect for a decision forest or a neural net.
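(By contrast, a wide gene-expression matrix is exactly what gradient-boosted trees handle well. A minimal sketch follows, using synthetic data in place of real expression profiles; nothing here is the project's actual pipeline.)

```python
# Minimal sketch: gradient-boosted trees on a wide gene-expression matrix.
# The data is synthetic noise standing in for real expression profiles.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 25_000  # one expression level per gene per patient
X = rng.normal(size=(n_patients, n_genes)).astype(np.float32)
# A fabricated outcome driven by two "genes", so the model has signal to find.
y = (X[:, 42] + 0.5 * X[:, 1337] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# Feature importances give a crude ranking of which "genes" drove the model.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top gene indices:", top)
```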
So it's a great playground for hybrid AI methodology
and on SingularityNet we can have
OpenCog in one agent and XGBoost in a different agent,
and they talk to each other.
But at the same time, it's highly practical,
because we're working with, for example,
some physicians on this project
in a group called Anthropinion, based out of Vancouver and Seattle.
These guys are working every day,
like in the hospital with patients dying of COVID.
So it's quite cool to see like neural symbolic AI,
like where the rubber hits the road,
trying to save people's lives.
I've been doing bio AI since 2001,
but mostly human longevity research
and fly longevity research,
trying to understand why some organisms really live a long time.
This is the first time it's like a race against the clock,
trying to use the AI to figure out stuff where,
if we take two months longer to solve the AI problem,
some more people will die because we don't know
what combination of antivirals to give them.
Yeah.
At the societal level, the biological level, at any level,
are you hopeful about us as a human species
getting out of this pandemic?
What are your thoughts on it in general?
Well, the pandemic will be gone in a year or two
once there's a vaccine for it.
So I mean, that's...
A lot of pain and suffering can happen in that time.
So that could be irreversible.
I think if you spend much time in sub-Saharan Africa,
you can see there's a lot of pain
and suffering happening all the time.
Like you walk through the streets
of any large city in sub-Saharan Africa,
and there are loads, I mean tens of thousands,
probably hundreds of thousands of people,
lying by the side of the road,
dying mainly of curable diseases without food or water
and either ostracized by their families
or they left their family house
because they didn't want to infect their family, right?
I mean, there's tremendous human suffering on the planet
all the time,
which most folks in the developed world
pay no attention to,
and COVID is not remotely the worst.
How many people are dying of malaria all the time?
I mean, so COVID is bad.
It is by no means the worst thing happening.
And setting aside diseases,
I mean, there are many places in the world
where you're at risk of having like your teenage son
kidnapped by armed militias
and forced to get killed in someone else's war
fighting tribe against tribe.
I mean, so humanity has a lot of problems
which we don't need to have
given the state of advancement of our technology right now.
And I think COVID is one of the easier problems to solve
in the sense that there are many brilliant people
working on vaccines.
We have the technology to create vaccines
and we're going to create new vaccines.
We should be more worried that we haven't managed to defeat malaria
after so long and after the Gates Foundation
and others putting so much money into it.
I mean, I think clearly the whole global medical system,
the global health system,
and the global political and socioeconomic system
are incredibly unethical and unequal
and badly designed.
And I mean, I don't know how to solve that directly.
I think what we can do indirectly to solve it
is to make systems that operate in parallel
and off to the side of the governments
that are nominally controlling the world
with their armies and militias
and to the extent that you can make compassionate,
peer-to-peer, decentralized frameworks
and these are things that can start out unregulated
and then if they get traction before the regulators come in,
then they've influenced the way the world works, right?
SingularityNet aims to do this with AI. Rejuve,
which is a spinoff from SingularityNet.
You can see it at Rejuve.io.
How do you spell that?
Rejuve.io.
That aims to do the same thing for medicine.
So it's like peer-to-peer sharing of medical data
so you can share medical data into a secure data wallet.
You can get advice about your health and longevity
through apps that Rejuve will launch within the next couple months
and then SingularityNet AI can analyze all this data
but then the benefits from that analysis
are spread among all the members of the network.
But I mean, of course, I'm going to hawk my particular projects,
but I mean, whether or not SingularityNet and Rejuve are the answer,
I think it's key to create decentralized mechanisms for everything.
I mean, for AI, for human health, for politics,
for jobs and employment, for sharing social information,
and to the extent decentralized peer-to-peer methods
designed with universal compassion at the core can gain traction,
then these will just decrease the role that government has.
And I think that's much more likely to do good
than trying to explicitly reform the global government system.
I mean, I'm happy other people are trying to explicitly reform
the global government system.
On the other hand, you look at how much good the Internet
or Google did or mobile phones did.
I mean, if you make something that's decentralized
and throw it out everywhere and it takes hold,
then government has to adapt.
And I mean, that's what we need to do with AI and with health.
And in that light, I mean, the centralization of healthcare
and of AI is certainly not ideal, right?
Like most AI PhDs are being sucked in by, you know,
a half dozen to a dozen big companies.
Most AI processing power is being bought by a few big companies
for their own proprietary good.
And most medical research is within a few pharmaceutical companies
and clinical trials run by pharmaceutical companies
will stay siloed within those pharmaceutical companies.
You know, these large centralized entities,
these corporations, are intelligences in themselves,
but they're mostly malevolent, psychopathic
and sociopathic intelligences.
Not saying the people involved are,
but the corporations as self-organizing entities on their own,
which are concerned with maximizing shareholder value
as a sole objective function.
I mean, AI and medicine are being sucked into these pathological
corporate organizations with government cooperation
and Google cooperating with British and US government on this
as one among many, many different examples.
23andMe providing you the nice service of sequencing your genome
and then licensing the genome to GlaxoSmithKline on an exclusive basis, right?
Now, you can take your own DNA and do whatever you want with it,
but the pooled collection of 23andMe-sequenced DNA goes just to GlaxoSmithKline.
Someone else could reach out to everyone who had worked with 23andMe to sequence their DNA
and say, give us your DNA for our open and decentralized repository
that will make available to everyone,
but nobody's doing that because it's a pain to get organized
and the customer list is proprietary to 23andMe, right?
So, yeah, I mean, this I think is a greater risk to humanity from AI
than rogue AGIs turning the universe into paperclips or a computronium
because what you have here is mostly good-hearted and nice people
who are sucked into a mode of organization of large corporations
which has evolved just for no individual's fault
just because that's the way society has evolved.
It's not altruistic, it's self-interested and becomes psychopathic, like you said.
The corporation is psychopathic even if the people are not,
and that's really the disturbing thing about it
because the corporations can do things that are quite bad for society
even if nobody has a bad intention.
No individual member of that corporation has a bad intention.
No, some probably do, but it's not necessary that they do for the corporation.
I mean, Google, I know a lot of people in Google,
and with very few exceptions, they're all very nice people
who genuinely want what's good for the world.
And Facebook, I know fewer people, but it's probably mostly true.
It's probably like fine young geeks who want to build cool technology.
I actually tend to believe that even the leaders,
even Mark Zuckerberg, one of the most disliked people in tech,
also wants to do good for the world.
What about Jamie Dimon?
Who's Jamie Dimon?
The heads of the great banks may have a different psychology.
Oh boy, yeah.
I tend to be naive about these things and see the best.
I tend to agree with you that I think the individuals want to do good by the world,
but the mechanism of the company can sometimes be its own intelligence system.
I mean, one of my cousins, Mario Goertzel,
who has worked for Microsoft since 1985 or something,
and I can see for him, as well as just working on cool projects,
you're coding stuff that gets used by billions and billions of people.
And you think, if I improve this feature,
that's making billions of people's lives easier, right?
So of course, that's cool,
and the engineers are not in charge of running the company anyway.
And of course, even if you're Mark Zuckerberg or Larry Page,
you still have a fiduciary responsibility,
and you're responsible to the shareholders, your employees,
whom you want to keep paying, and so forth.
Yeah, you're invested in this system.
And when I worked in D.C., I worked a bunch with INSCOM, U.S. Army Intelligence,
and I was heavily politically opposed to what the U.S. Army was doing in Iraq at that time,
like torturing people in Abu Ghraib.
But everyone I knew in U.S. Army and INSCOM, when I hung out with them,
was a very nice person.
They were friendly to me.
They were nice to my kids and my dogs, right?
And they really believed that the U.S. was fighting the forces of evil.
And if you ask me about Abu Ghraib, they're like,
well, but these Arabs will chop us into pieces.
So how can you say we're wrong to waterboard them a bit, right?
Like, that's much less than what they would do to us.
It's just in their worldview, what they were doing was really genuinely for the good of humanity.
None of them woke up in the morning and said,
like, I want to do harm to good people because I'm just a nasty guy, right?
So, yeah, most people on the planet,
setting aside a few genuine psychopaths and sociopaths,
I mean, most people on the planet have a heavy dose of benevolence and wanting to do good
and also a heavy capability to convince themselves whatever they feel like doing
or whatever is best for them is for the good of humankind, right?
So the more we can decentralize control...
Decentralization, you know, the democracy is horrible,
but this is like Winston Churchill said,
you know, it's the worst possible system of government except for all the others, right?
I mean, I think the whole mess of humanity has many, many very bad aspects to it.
But so far, the track record of elite groups who know what's better for all of humanity
is much worse than the track record of the whole teeming democratic participatory mess of humanity, right?
I mean, none of them is perfect by any means.
The issue with a small elite group that knows what's best
is even if it starts out as truly benevolent and doing good things
in accordance with its initial good intentions,
you find out you need more resources, you need a bigger organization,
you pull in more people, internal politics arises, differences of opinions arise,
and bribery happens, like some opposing organization takes your second in command
and makes him the first in command of some other organization,
and I mean, there's a lot of history of what happens with elite groups
thinking they know what's best for the human race.
And if I have to choose, I'm going to reluctantly put my faith
in the vast democratic decentralized mass.
And I think corporations have a track record of being ethically worse
than their constituent human parts.
And democratic governments have a more mixed track record, but there are at least...
That's the best we got.
And you can... There's Iceland, very nice country, right?
Very democratic for 800 plus years, very benevolent, beneficial government.
And I think, yeah, there are track records of democratic modes of organization.
Linux, for example, some of the people in charge of Linux
are overtly complete assholes, right?
And trying to reform themselves in many cases, in other cases not,
but the organization as a whole, I think it's done a good job overall.
It's been very welcome in the third world, for example,
and it's allowed advanced technology to roll out
on all sorts of different embedded devices and platforms
in places where people couldn't afford to pay for proprietary software.
So I'd say the internet, Linux, and many democratic nations
are examples of how an open, decentralized democratic methodology
can be ethically better than the sum of its parts, rather than worse.
And with corporations, that has happened only for a brief period,
and then it goes sour, right?
I mean, I'd say a similar thing about universities.
University is a horrible way to organize research and get things done,
yet it's better than anything else we've come up with, right?
The company can be much better, but for a brief period of time,
and then it stops being so good, right?
So then I think if you believe that AGI is going to emerge
sort of incrementally out of AIs doing practical stuff in the world,
like controlling humanoid robots, or driving cars, or diagnosing diseases,
or operating killer drones, or spying on people,
and reporting on them to the government, then what kind of organization
creates more and more advanced, narrow AI verging toward AGI
may be quite important, because it will guide what's in the mind
of the early-stage AGI as it first gains the ability to rewrite
its own code base and project itself toward superintelligence.
And if you believe that AI may move toward AGI
out of this sort of synergetic activity of many agents cooperating together,
rather than just to have one person's project,
then who owns and controls that platform for AI cooperation
becomes also very, very important, right?
And is that platform AWS?
Is it Google Cloud?
Is it Alibaba?
Or is it something more like the Internet or Singularity Net,
which is open and decentralized?
So if all of my weird machinations come to pass, right?
I mean, we have the Hanson robots being a beautiful user interface,
gathering information on human values,
and being loving and compassionate to people in medical, home service,
robot office applications.
You have Singularity Net in the back end,
networking together many different AIs toward cooperative intelligence,
fueling the robots, among many other things.
You have OpenCog 2.0 and TrueAGI as one of the sources of AI
inside this decentralized network, powering the robot and medical AIs,
helping us live a long time and cure diseases among other things.
And this whole thing is operating in a democratic and decentralized way, right?
I think if anyone can pull something like this off,
whether using the specific technologies I've mentioned or something else,
then I think we have a higher odds of moving toward a beneficial technological Singularity
rather than one in which the first super AGI is indifferent to humans
and just considers us an inefficient use of molecules.
That was a beautifully articulated vision for the world.
So thank you for that.
Well, let's talk a little bit about life and death.
I'm pro-life and anti-death.
For most people, that is; there are a few exceptions that I won't mention here.
I'm glad just like your dad, you're taking a stand against death.
By the way, you have a bunch of awesome music online where you play piano.
One of the songs that I believe you've written, the lyrics go,
"Tell me why do you think it is a good thing that we all get old and die?"
By the way, I like the way it sounds. People should listen to it. It's awesome.
I've considered covering it, and I probably will. It's a good song.
But let me ask you about death first.
Do you think there's an element to death that's essential to give our life meaning,
like the fact that this thing ends?
Let me say, I'm pleased and a little embarrassed you've been listening to that music I put online.
That's awesome.
One of my regrets in life recently is I would love to get time to really produce music well.
I haven't touched my sequencer software in like five years.
I would love to rehearse and produce and edit.
But with a two-year-old baby and trying to create the singularity, there's no time.
So I just made the decision to, well, I'm playing random shit in an off moment.
Just record it.
Just record it, put it out there like whatever.
Maybe if I'm unfortunate enough to die, maybe that can be input to the AGI
when it tries to make an accurate mind upload of me, right?
Death is bad.
I mean, that's very simple.
It's sad that we even have to say that.
I mean, of course, people can make meaning out of death.
And if someone is tortured, maybe they can make beautiful meaning out of that torture
and write a beautiful poem about what it was like to be tortured, right?
I mean, we're very creative.
We can milk beauty and positivity out of even the most horrible and shitty things.
But just because if I was tortured, I could write a good song about what it was like to be tortured.
It doesn't make torture good.
And just because people are able to derive meaning and value from death
doesn't mean they wouldn't derive even better meaning and value from ongoing life without death, which I very...
Definitely.
Yeah, yeah.
So if you could live forever, would you live forever?
Forever.
My goal with longevity research is to abolish the plague of involuntary death.
I don't think people should die unless they choose to die.
If I had to choose forced immortality versus dying, I would choose forced immortality.
On the other hand, if I chose...
If I had the choice of immortality with the choice of suicide whenever I felt like it, of course I would take that instead.
And that's the more realistic choice.
I mean, there's no reason you should have forced immortality.
You should be able to live until you get sick of living, right?
And that will seem insanely obvious to everyone 50 years from now.
And there will be so...
I mean, people who thought death gives meaning to life so we should all die,
they will look at that 50 years from now the way we now look at the Anabaptists in the year 1000
who gave away all their possessions and went up on the mountain for Jesus to come and bring them to the ascension.
I mean, it's ridiculous that people think death is good because you gain more wisdom as you approach dying.
I mean, of course it's true.
I'm 53 and the fact that I might have only a few more decades left, it does make me reflect on things differently.
It does give me a deeper understanding of many things.
But I mean, so what?
You could get a deep understanding in a lot of different ways.
Pain is the same way.
We're going to abolish pain and that's even more amazing than abolishing death, right?
I mean, once we get a little better at neuroscience, we'll be able to go in and adjust the brain
so that pain doesn't hurt anymore, right?
And that people will say that's bad because there's so much beauty in overcoming pain and suffering.
Sure, and there's beauty in overcoming torture too.
And some people like to cut themselves, but not many, right?
That's interesting.
But to push back again, this is the Russian side of me.
I do romanticize suffering.
It's not obvious.
I mean, the way you put it, it seems very logical.
It's almost absurd to romanticize suffering or pain or death.
But to me, a world without suffering, without pain, without death, it's not obvious what that world looks like.
Well, then you can stay in the people's zoo with the people torturing each other.
No, but what I'm saying is I don't...
I guess what I'm trying to say, I don't know if I was presented with that choice, what I would choose.
Because to me...
No, this is a subtler matter, and I've posed it in this conversation in an unnecessarily extreme way.
So I think the way you should think about it is what if there's a little dial on the side of your head
and you could turn to how much pain hurt.
Turn it down to zero, turn it up to 11, like in Spinal Tap, if you want.
Maybe through an actual spinal tap, right?
I mean, would you opt to have that dial there or not?
That's the question.
The question isn't whether you would turn the pain down to zero all the time.
Would you opt to have the dial or not?
My guess is that in some dark moment of your life, you would choose to have the dial implanted, and then it would be there.
Just to confess a small thing, don't ask me why, but I'm doing this physical challenge currently where I'm doing 680 push-ups and pull-ups a day.
And my shoulder is currently as we sit here in a lot of pain.
And I don't know, I would certainly right now, if you gave me a dial, I would turn that sucker to zero as quickly as possible.
Good.
But I think the whole point of this journey is, I don't know.
Well, because you're a twisted human being.
I'm a twisted.
So the question is, am I somehow twisted because I created some kind of narrative for myself so that I can deal with the injustice and the suffering in the world?
Or is this actually going to be a source of happiness for me?
Well, this is, to an extent, is a research question that humanity will undertake, right?
So I mean, human beings do have a particular biological makeup, which sort of implies a certain probability distribution over motivational systems, right?
And that is there.
Well put.
That is there.
Now, the question is, how flexibly can that morph as society and technology change, right?
So if we're given that dial and we're given a society in which, say, we don't have to work for a living, and in which there's an ambient decentralized benevolent AI network that will warn us when we're about to hurt ourselves.
If we're in a different context, can we consistently, with being genuinely and fully human, can we consistently get into a state of consciousness where we just want to keep the pain dial turned all the way down?
And yet we're leading very rewarding and fulfilling lives, right?
Now, I suspect the answer is yes, we can do that.
But I don't know that.
It's a research question, like I said.
I don't know that for certain.
Yeah, now I'm more confident that we could create a non-human AGI system, which just didn't need an analog of feeling pain.
And I think that AGI system will be fundamentally healthier and more benevolent than human beings.
So I think it might or might not be true that humans need a certain element of suffering to be satisfied humans, consistent with the human physiology.
If it is true, that's one of the things that makes us fucked and disqualified to be the super AGI, right?
I mean, the nature of the human motivational system is that we seem to gravitate towards situations where the best thing in the large scale is not the best thing in the small scale, according to our subjective value system.
We gravitate towards subjective value judgments where to gratify ourselves in the large, we have to ungratify ourselves in the small.
And you see that in music, there's a theory of music which says the key to musical aesthetics is the surprising fulfillment of expectations.
Like you want something that will fulfill the expectations elicited in the prior part of the music, but in a way with a bit of a twist that surprises you.
I mean, that's true not only in out there music like my own or that of Zappa or Steve Vai or Buckethead or Christoph Penderecki or something.
It's even there in Mozart or something. It's not there in elevator music too much, but that's why it's boring, right?
But wrapped up in there is, you know, we want to hurt a little bit so that we can feel the pain go away.
We want to be a little confused by what's coming next.
So then when the thing that comes next actually makes sense, it's so satisfying, right?
The surprising fulfillment of expectations, as you said. It's so beautifully put.
We've been skirting around a little bit, but if I were to ask you the most ridiculous big question of what is the meaning of life, what would your answer be?
Three values, joy, growth and choice.
I think you need joy. I mean, that's the basis of everything if you want the number one value.
On the other hand, I'm unsatisfied with a static joy that doesn't progress, perhaps because of some element of human perversity;
the idea of something that grows and becomes better and better in some sense appeals to me.
But I also sort of like the idea of individuality that as a distinct system, I have some agency.
So there's some nexus of causality within this system rather than the causality being wholly evenly distributed over the joyous growing mass.
So you start with joy, growth and choice as three basic values.
Those three things could continue indefinitely. That's something that can last forever.
Is there some aspect of what you call super longevity, a term which I like, that you find exciting?
Is there research-wise? Is there ideas in that space?
I mean, I think, yeah, in terms of the meaning of life, this really ties into that because for us as humans,
probably the way to get the most joy, growth and choice is transhumanism and to go beyond the human form that we have right now.
I mean, I think the human body is great, and by no means do any of us maximize the potential for joy, growth and choice immanent in our human bodies.
On the other hand, it's clear that other configurations of matter could manifest even greater amounts of joy, growth and choice than humans do.
Maybe even finding ways to go beyond the realm of matter as we understand it right now.
So I think in a practical sense, much of the meaning I see in human life is to create something better than humans and go beyond human life.
But certainly that's not all of it for me in a practical sense, right?
Like I have four kids and a granddaughter and many friends and parents and family and just enjoying everyday human social existence.
But we can do even better.
Yeah, yeah. And I mean, I love, I've always, when I could live near nature, I spend a bunch of time out in nature in the forest and on the water every day and so forth.
So I mean, enjoying the pleasant moment is part of it.
But the, you know, the growth and choice aspect are severely limited by our human biology.
In particular, dying seems to inhibit your potential for personal growth considerably as far as we know.
I mean, there's some element of life after death, perhaps, but even if there is, why not also continue going in this biological realm, right?
In super longevity, I mean, you know, we haven't yet cured aging.
We haven't yet cured death.
Certainly, there's very interesting progress all around.
I mean, CRISPR and gene editing can be an incredible tool.
And I mean, right now, stem cells could potentially prolong life a lot.
Like if you got injections of stem cells for every tissue of your body, injected into every tissue, you could just have replacement of your old cells with new cells produced by those stem cells.
I mean, that could be highly impactful at prolonging life.
Now, we just need slightly better technology for having them grow, right?
So using machine learning to guide procedures for stem cell differentiation and trans-differentiation, it's kind of nitty gritty.
But I mean, that's quite interesting.
So I think there's a lot of different things being done to help with prolongation of human life.
But we could do a lot better.
So for example, the extracellular matrix, which is the bunch of proteins in between the cells in your body, they get stiffer and stiffer as you get older.
And the extracellular matrix transmits information both electrically, mechanically, and to some extent, bio-photonically.
So there's all this transmission through the parts of the body.
But the stiffer the extracellular matrix gets, the less the transmission happens, which makes your body get worse coordinated between the different organs as you get older.
So my friend Christian Schafmeister, at my alma mater, the great Temple University, has a potential solution to this,
where he has these novel molecules called spiroligomers, which are like polymers that are not organic.
They're specially designed polymers so that you can algorithmically predict exactly how they'll fold very simply.
So he designed molecular scissors made of spiroligomers that you could eat and that would then cut through all the glucosepane and other cross-linked proteins in your extracellular matrix.
But to make that technology really work and be mature is several years of work, and as far as I know, no one's funding it at the moment.
So there's so many different ways that technology could be used to prolong longevity.
What we really need, we need an integrated database of all biological knowledge about human beings and model organisms.
Like, hopefully a massively distributed OpenCog bio AtomSpace, but it can exist in other forms too.
We need that data to be opened up in a suitably privacy-protecting way.
We need massive funding into machine learning, AGI, proto-AGI statistical research aimed at solving biology,
both molecular biology and human biology, based on this massive data set, right?
And then we need regulators not to stop people from trying radical therapies on themselves if they so wish to,
as well as better cloud-based platforms for automated experimentation on microorganisms, flies and mice and so forth.
And we could do all this.
After the last financial crisis, Obama, who I generally like pretty well, gave $4 trillion to large banks and insurance companies.
Now in this COVID crisis, trillions are being spent to help everyday people and small businesses.
In the end, we'll probably find many more trillions being given to large banks and insurance companies anyway.
Like, could the world put $10 trillion into making a massive, holistic bio-AI and biosimulation and experimental biology infrastructure?
We could.
We could put $10 trillion into that without even screwing us up too badly, just as in the end COVID and the last financial crisis won't screw up the world economy so badly.
We're not putting $10 trillion into that.
Instead, all this research is siloed inside a few big companies and government agencies.
And most of the data that comes from our individual bodies, personally, that could feed this AI to solve aging and death,
most of that data is sitting in some hospital's database doing nothing, right?
I got two more quick questions for you.
One, I know a lot of people are going to ask me.
You were on the Joe Rogan podcast wearing that same amazing hat.
Do you have an origin story for the hat?
Does the hat have its own story that you're able to share?
The hat story has not been told yet, so we're going to have to come back and you can interview the hat.
We'll leave that for when we can interview the hat.
It's too much to pack into a few seconds.
Is there a book?
Is the hat going to write a book?
Okay.
It may transmit the information through direct neural transmission.
Okay.
Actually, there might be some neural link competition there.
Beautiful.
We'll leave it as a mystery.
Maybe one last question.
If you build an AGI system, you're successful at building the AGI system that could lead us to the singularity.
You get to talk to her and ask her one question.
What would that question be?
I'm not allowed to ask what is the question I should be asking?
Yeah, that would be cheating, but I guess that's a good question.
I'm thinking of a...
I wrote a story with Stephan Bugaj once where these AI developers, they created a super smart AI
aimed at answering all the philosophical questions that have been worrying them.
What is the meaning of life?
Is there free will?
What is consciousness and so forth?
So they got this super AGI built, and it churned a while.
It said, those are really stupid questions.
And then it took off on a spaceship and left the Earth.
So you'd be afraid of scaring it off?
That's it.
Honestly, there is no one question that rises among all the others, really.
What interests me more is upgrading my own intelligence so that I can absorb the whole world view of the super AGI.
But I mean, of course, if the answer could be what is the chemical formula for the immortality pill,
then I would do that or emit a bit string, which will be the code for a super AGI on the Intel i7 processor.
So those would be good questions.
If your own mind was expanded to become super intelligent, like you're describing,
I mean, there's this kind of a notion that intelligence is a burden,
that it's possible that with greater and greater intelligence,
that other metric of joy that you mentioned becomes more and more difficult.
That's a pretty stupid idea.
So you think if you're super intelligent, you can also be super joyful?
I think getting root access to your own brain will enable new forms of joy that we don't have now.
And I think, as I've said before, what I aim at is really make multiple versions of myself.
So I would like to keep one version, which is basically human like I am now,
but keep the dial to turn pain up and down and get rid of death, right?
And make another version which fuses its mind with superhuman AGI
and then will become massively transhuman.
And whether it will send some messages back to the human me or not will be interesting to find out.
The thing is, once you're super AGI, like one subjective second to a human
might be like a million subjective years to that super AGI, right?
So it would be on a whole different basis.
I mean, at very least those two copies will be good to have,
but it could be interesting to put your mind into a dolphin or a space amoeba or all sorts of other things.
You can imagine one version that doubled its intelligence every year
and another version that just became a super AGI as fast as possible, right?
So, I mean, now we're sort of constrained to think one mind, one self, one body, right?
But I think we actually, we don't need to be that constrained in thinking about future intelligence
after we've mastered AGI, nanotechnology, and longevity biology.
I mean, then each of our minds is a certain pattern of organization, right?
And I know we haven't talked about consciousness, but I'm sort of a panpsychist.
I sort of view the universe as conscious.
So, you know, a light bulb or a quark or an ant or a worm or a monkey
have their own manifestations of consciousness, and the human manifestation of consciousness
is partly tied to the particular meat we're manifested by,
but it's largely tied to the pattern of organization in the brain, right?
So if you upload yourself into a computer or a robot or whatever else it is,
some element of your human consciousness may not be there
because it's just tied to the biological embodiment,
but I think most of it will be there,
and these will be incarnations of your consciousness in a slightly different flavor.
And, you know, creating these different versions will be amazing
and each of them will discover meanings of life that have some overlap,
probably not total overlap, with the human Ben's meaning of life.
The thing is, to get to that future where we can explore different varieties of joy,
different variations of human experience and values and transhuman experiences and values,
to get to that future, we need to navigate through a whole lot of human bullshit
of companies and governments and killer drones and making and losing money and so forth, right?
And that's the challenge we're facing now,
is if we do things right, we can get to a benevolent singularity,
which is levels of joy, growth and choice that are literally unimaginable to human beings.
If we do things wrong, we could either annihilate all life on the planet,
or we could lead to a scenario where, say, all humans are annihilated
and there's some super-AGI that goes on and does its own thing, unrelated to us,
except via our role in originating it.
And we may well be at a bifurcation point now, right,
where what we do now has significant causal impact on what comes about.
And yet, most people on the planet aren't thinking that way whatsoever.
They're thinking only about their own narrow aims and goals, right?
Now, of course, I'm thinking about my own narrow aims and goals to some extent also,
but I'm trying to use as much of my energy and mind as I can
to push toward this more benevolent alternative, which will be better for me,
but also for everybody else.
It's weird that so few people understand what's going on.
I know you interviewed Elon Musk and he understands a lot of what's going on,
but he's much more paranoid than I am, right?
Because Elon gets that AGI is going to be way, way smarter than people
and he gets that an AGI does not necessarily have to give a shit about people
because we're a very elementary mode of organization of matter compared to many AGIs.
But I don't think he has a clear vision of how infusing early-stage AGIs
with compassion and human warmth can lead to an AGI that loves and helps people
rather than viewing us as a historical artifact and a waste of mass energy.
But on the other hand, while I have some disagreements with him,
like he understands way, way more of the story than almost anyone else
in such a large scale corporate leadership position, right?
It's terrible how little understanding of these fundamental issues exists out there now.
That may be different five or 10 years from now though,
because I can see understanding of AGI and longevity and other such issues
is certainly much stronger and more prevalent now than 10 or 15 years ago, right?
I mean, humanity as a whole can be slow learners relative to what I would like.
But on a historical sense, on the other hand, you could say the progress is astoundingly fast.
But Elon also said, I think on the Joe Rogan podcast, that love is the answer.
So maybe in that way, you and him are both on the same page of how we should proceed with AGI.
I think there's no better place to end it.
I hope we get to talk again about the hat and about consciousness
and about a million topics we didn't cover.
Ben, it's a huge honor to talk to you.
Thank you for making it out.
Thank you for talking today.
No, thanks for having me.
This was really good fun.
And we dug deep into some very important things.
So thanks for doing this.
Thanks very much.
Awesome.
Thanks for listening to this conversation with Ben Gertzel.
And thank you to our sponsors, The Jordan Harbinger Show and Masterclass.
Please consider supporting the podcast by going to jordanharbinger.com slash lex
and signing up at masterclass.com slash lex.
Click the links, buy the stuff.
It's the best way to support this podcast and the journey I'm on in my research and startup.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcast.
Support it on Patreon or connect with me on Twitter
at Lex Fridman, spelled without the E.
Just F-R-I-D-M-A-N.
I'm sure eventually you will figure it out.
And now let me leave you with some words from Ben Gertzel.
Our language for describing emotions is very crude.
That's what music is for.
Thank you for listening and hope to see you next time.