
Lex Fridman Podcast

Conversations about science, technology, history, philosophy and the nature of intelligence, consciousness, love, and power. Lex is an AI researcher at MIT and beyond.

The following is a conversation with Guillaume Verdon,
the man behind the previously anonymous account
Based Beff Jezos on X.
These two identities were merged by a doxing article
in Forbes titled, Who is Based Beff Jezos,
the leader of the tech elite's EAC movement?
So let me describe these two identities
that co-exist in the mind of one human.
Identity number one, Guillaume,
is a physicist, applied mathematician,
and quantum machine learning researcher and engineer,
receiving his PhD in quantum machine learning,
working at Google on quantum computing,
and finally launching his own company called Extropic
that seeks to build physics-based computing hardware
for generative AI.
Identity number two, Beff Jezos on X,
is the creator of the effective accelerationism movement,
often abbreviated as EAC.
That advocates for propelling rapid technological progress
as the ethically optimal course of action for humanity.
For example, its proponents believe that progress in AI
is a great social equalizer, which should be pushed forward.
EAC followers see themselves as a counterweight
to the cautious view that AI is highly unpredictable,
potentially dangerous, and needs to be regulated.
They often give their opponents the labels of quote,
doomers or decels, short for deceleration.
As Beff himself put it, EAC is a memetic optimism virus.
The style of communication of this movement
leans heavily toward the memes and the lols,
but there is an intellectual foundation
that we explore in this conversation.
Now, speaking of memes,
I too am a kind of aspiring connoisseur of the absurd.
It is not an accident that I spoke to Jeff Bezos
and Beff Jezos back to back.
As we talk about, Beff admires Jeff
as one of the most important humans alive,
and I admire the beautiful absurdity
and the humor of it all.
This is the Lex Fridman Podcast.
To support it, please check out our sponsors
in the description.
And now, dear friends, here's Guillaume Verdon.
Let's get the facts of identity down first.
Your name is Guillaume Verdon, Gill,
but you're also behind the anonymous account on X
called Based Beff Jezos.
So first Guillaume Verdun,
you're a quantum computing guy,
physicist, applied mathematician,
and then Based Beff Jezos is basically a meme account
that started a movement with a philosophy behind it.
So maybe just can you linger on who these people are
in terms of characters, in terms of communication styles,
in terms of philosophies?
I mean, with my main identity, I guess,
ever since I was a kid, I wanted to figure out
a theory of everything to understand the universe.
And that path led me to theoretical physics eventually,
trying to answer the big questions of why are we here?
Where are we going?
And that led me to study information theory
and try to understand physics
from the lens of information theory,
understand the universe as one big computation.
And essentially, after reaching a certain level,
studying black hole physics,
I realized that I wanted to not only understand
how the universe computes, but compute like nature
and figure out how to build and apply computers
that are inspired by nature, so physics-based computers.
And that brought me to quantum computing
as a field of study to, first of all, simulate nature.
And in my work, it was to learn representations of nature
that can run on such computers.
So if you have AI representations that think like nature,
then they'll be able to more accurately represent it.
At least that was the thesis that brought me
to be an early player in the field
called quantum machine learning, right?
So how to do machine learning on quantum computers
and really sort of extend notions of intelligence
to the quantum realm.
So how do you capture and understand
quantum mechanical data from our world, right?
And how do you learn quantum mechanical representations
of our world?
On what kind of computer do you run these representations
and train them?
How do you do so?
And so that's really sort of the questions
I was looking to answer,
because ultimately I had a sort of crisis of faith.
Originally, I wanted to figure out,
as every physicist does at the beginning of their career,
a few equations that describe the whole universe, right?
And sort of be the hero of the story there.
But eventually I realized that actually
augmenting ourselves with machines,
augmenting our ability to perceive, predict,
and control our world with machines is the path forward,
and that's what got me to leave theoretical physics
and go into quantum computing and quantum machine learning.
And during those years,
I thought that there was still a piece missing.
There was a piece of our understanding of the world
and our way to compute and our way to think about the world.
And if you look at the physical scales,
at the very small scales, things are quantum mechanical.
And at the very large scales, things are deterministic.
Things have averaged out.
I'm definitely here in this seat.
I'm not in a superposition over here and there.
At the very small scales, things are in superposition.
They can exhibit interference effects.
But at the mesoscales, the scales that matter
for day-to-day life, the scales of proteins,
of biology, of gases, liquids, and so on,
things are actually thermodynamical.
They're fluctuating.
And after, I guess, about eight years in quantum computing
and quantum machine learning, I had a realization
that I was looking for answers about our universe
by studying the very big and the very small.
I did a bit of quantum cosmology.
So that's studying the cosmos, where it's going,
where it came from.
You study black hole physics.
You study the extremes in quantum gravity.
You study where the energy density is sufficient
for both quantum mechanics and gravity to be relevant.
And the sort of extreme scenarios are black holes
in the very early universe.
The sort of scenarios that you study,
the interface between quantum mechanics and relativity.
Really, I was studying these extremes
to understand how the universe works and where is it going,
but I was missing a lot of the meat in the middle,
if you will, because day-to-day quantum mechanics
is relevant and the cosmos is relevant,
but not that relevant, actually.
We're on sort of the medium space and timescales.
And there, the main theory of physics that is most relevant
is thermodynamics, right?
Out-of-equilibrium thermodynamics.
Because life is a process that is thermodynamical
and it's out of equilibrium.
We're not just a soup of particles at equilibrium
with nature, we're a sort of coherent state
trying to maintain itself by acquiring free energy
and consuming it.
And that's sort of, I guess, another shift in,
I guess, my faith in the universe happened
towards the end of my time at Alphabet.
And I knew I wanted to build, well, first of all,
a computing paradigm based on this type of physics.
But ultimately, just by trying to experiment
with these ideas applied to society and economies
and much of what we see around us,
I started an anonymous account just to relieve the pressure
that comes from having an account
that you're accountable for everything you say on.
And I started an anonymous account just to experiment
with ideas originally, right?
Because I didn't realize how much I was restricting
my space of thoughts until I sort of had the opportunity
to let go, in a sense, restricting your speech
back propagates to restricting your thoughts, right?
And by creating an anonymous account,
it seemed like I had unclamped some variables in my brain
and suddenly could explore a much wider
parameter space of thoughts.
Just to linger on that, isn't that interesting?
That one of the things that people don't often talk about
is that when there's pressure and constraints on speech,
it somehow leads to constraints on thought.
Even though it doesn't have to,
we can think thoughts inside our head,
but somehow it creates these walls around thought.
Yep, that's sort of the basis of our movement:
we were seeing a tendency towards constraint,
towards reduction or suppression of variance
in every aspect of life,
whether it's thought, how to run a company,
how to organize humans, how to do AI research.
In general, we believe that maintaining variance
ensures that the system is adaptive, right?
Maintaining healthy competition in marketplaces of ideas,
of companies, of products, of cultures,
of governments, of currencies is the way forward
because the system always adapts to the way
to assign resources to the configurations
that lead to its growth.
And the fundamental basis for the movement
is this sort of realization that life is a sort of fire
that seeks out free energy in the universe
and seeks to grow, right?
And that growth is fundamental to life.
And you see this in the equations, actually,
of out-of-equilibrium thermodynamics.
You see that paths of trajectories of configurations
of matter that are better at acquiring free energy
and dissipating more heat are exponentially more likely,
right?
So the universe is biased towards certain futures.
And so there's a natural direction
where the whole system wants to go.
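A minimal sketch of the kind of relation being invoked here, in standard stochastic-thermodynamics notation rather than anything specific to this conversation: for a system coupled to a bath at inverse temperature $\beta$, the probability of a forward trajectory $x(t)$ relative to its time reverse $\tilde{x}(t)$ satisfies

$$\frac{P[x(t)]}{P[\tilde{x}(t)]} = e^{\beta\, Q[x(t)]},$$

where $Q$ is the heat dissipated into the bath along the trajectory, so histories that dissipate more heat are exponentially favored.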
So the second law of thermodynamics
says that entropy is increasing in the universe,
it's tending towards equilibrium.
And you're saying there's these pockets
that have complexity and are out of equilibrium.
You said that thermodynamics favors the creation
of complex life that increases its capability
to use energy to offload entropy,
so that you have pockets of negative entropy
that tend in the opposite direction.
Why is that intuitive to you that it's natural
for such pockets to emerge?
Well, we're far more efficient at producing heat
than let's say just a rock with a similar mass
as ourselves, right?
We acquire free energy, we acquire food,
and we're using all this electricity for our operation.
And so the universe wants to produce more entropy.
And by having life go on and grow,
it's actually more optimal at producing entropy
because it will seek out pockets of free energy
and burn it for its sustenance and further growth.
And that's sort of the basis of life.
And I mean, there's Jeremy England at MIT
who has this theory that I'm a proponent of
that life emerged because of this sort of property.
And to me, this physics is what governs the mesoscales.
And so it's the missing piece
between the quantum and the cosmos.
It's the middle part, right?
Thermodynamics rules the mesoscales.
And to me, both from a point of view of designing
or engineering devices that harness that physics
and trying to understand the world
through the lens of thermodynamics has been sort of
a synergy between my two identities
over the past year and a half now.
And so that's really how the two identities emerged.
One was kind of a decently respected scientist,
and I was going towards doing a startup in the space
and trying to be a pioneer
of a new kind of physics-based AI.
And as a dual to that, I was sort of experimenting
with philosophical thoughts from a physicist standpoint.
Right?
And ultimately, I think that around that time,
it was like late 2021, early 2022,
I think there's just a lot of pessimism
about the future in general and pessimism about tech.
And that pessimism was sort of virally spreading
because it was getting algorithmically amplified
and people just felt like the future
is gonna be worse than the present.
And to me, that is a very fundamentally destructive force
in the universe is this sort of doom mindset
because it is hyperstitious,
which means that if you believe it,
you're increasing the likelihood of it happening.
And so I felt a responsibility to some extent
to make people aware of the trajectory of civilization
and the natural tendency of the system
to adapt towards its growth.
And sort of that actually the laws of physics say
that the future is gonna be better and grander statistically
and we can make it so.
And if you believe in it,
if you believe that the future would be better
and you believe you have agency to make it happen,
you're actually increasing the likelihood
of that better future happening.
And so I sort of felt a responsibility
to sort of engineer a movement of viral optimism
about the future and build a community
of people supporting each other to build and do hard things,
do the things that need to be done
for us to scale up civilization.
Because at least to me, I don't think stagnation
or slowing down is actually an option.
Fundamentally, life and the whole system
or whole civilization wants to grow
and there's just far more cooperation
when the system is growing rather than when it's declining
and you have to decide how to split the pie.
And so I've balanced both identities so far,
but I guess recently the two have been merged
more or less without my consent.
You said a lot of really interesting things there.
So first, representations of nature.
That's something that first drew you in
to try to understand from a quantum computing perspectives
like how do you understand nature?
How do you represent nature in order to understand it,
in order to simulate it, in order to do something with it?
So it's a question of representations.
And then there's that leap you take
from the quantum mechanical representation
to the what you're calling mesoscale representation
where the thermodynamics comes into play,
which is a way to represent nature
in order to understand what life, human behavior,
all this kind of stuff that's happening here on earth
that seems interesting to us.
Then there's the word hyperstition.
So some ideas, I suppose both pessimism and optimism
are such ideas that if you internalize them,
you in part make that idea a reality.
So both optimism and pessimism have that property.
I would say that probably a lot of ideas have that property,
which is one of the interesting things about humans.
And you talked about one interesting difference also
between the sort of the Guillaume, the Gill, front end
and the Based Beff Jezos on the back end
is the communication styles also,
that you were exploring different ways of communicating
that can be more viral in the way that we communicate
in the 21st century.
Also the movement that you mentioned that you started,
it's not just a meme account,
but there's also a name to it
called effective accelerationism, EAC,
a play on, and a resistance to, the effective altruism movement.
Also an interesting one that I'd love to talk to you about,
the tensions there.
Okay, and so then there was a merger,
a git merge of the two personalities recently,
without your consent, like you said,
some journalists figured out that you're one and the same.
Maybe you could talk about that experience.
First of all, what's the story of the merger of the two?
Right, so I wrote the manifesto
with my co-founder of EAC, an account named Bayeslord,
still anonymous luckily, and hopefully forever.
So it was Based, like Based Beff Jezos,
and Bayes, like Bayeslord.
Okay, and so we should say from now on,
when you say EAC, you mean E slash ACC,
which stands for effective accelerationism.
That's right.
And you're referring to a manifesto written
on a, I guess, sub stack.
Are you also Bayeslord?
No.
Okay, it's a different person.
Yeah.
Okay, well, there you go.
Wouldn't it be funny if I'm Bayeslord?
That'd be amazing.
So I originally wrote the manifesto
around the same time as I founded this company,
and I worked at Google X, or just X now,
or Alphabet X now that there's another X.
And there, the baseline is sort of secrecy, right?
You can't talk about what you work on,
even with other Googlers or externally.
And so that was kind of deeply ingrained
in my way to do things,
especially in deep tech that has geopolitical impact, right?
And so I was being secretive about what I was working on.
There was no correlation between my company
and my main identity publicly.
And then not only did they correlate that,
they also correlated my main identity and this account.
So I think the fact that they had doxed
the whole Guillaume complex,
and the journalists reached out to actually my investors,
which is pretty scary.
When you're a startup entrepreneur,
you don't really have bosses except for your investors.
And my investors ping me like,
hey, this is gonna come out.
They've figured out everything.
What are you gonna do, right?
So I think at first they had a first reporter
on the Thursday, and they didn't have
all the pieces together.
But then they looked at their notes across the organization
and they sensor fused their notes.
And now they had way too much.
And that's when I got worried,
because they said it was of public interest.
And in general-
I like how you said sensor fused.
Like it's some giant neural network operating
in a distributed way.
We should also say that the journalists used,
I guess at the end of the day,
audio-based analysis of voice,
pairing voice of what talks you've given in the past,
and then voice on X spaces.
Okay, so that's primarily where the match happened.
Okay, continue.
But they scraped SEC filings.
They looked at my private Facebook account and so on.
So they did some digging.
Originally I thought that doxing was illegal, right?
But there's this weird threshold
when it becomes of public interest
to know someone's identity.
And those were the keywords that sort of like
rang the alarm bells for me when they said,
because I had just reached 50K followers,
allegedly that's of public interest.
And so where do we draw the line?
When is it legal to dox someone?
The word dox, maybe you can educate me.
I thought doxing generally refers to
if somebody's physical location is found out,
meaning like where they live.
Hmm.
So we're referring to the more general concept
of revealing private information
that you don't want revealed,
is what you mean by doxing.
I think that for the reasons we listed before,
having an anonymous account is a really powerful way
to keep the powers that be in check.
We were ultimately speaking truth to power, right?
I think a lot of executives and AI companies
really cared what our community thought
about any move they may take.
And now that my identity is revealed,
now they know where to apply pressure
to silence me or maybe the community.
And to me, that's really unfortunate
because again, it's so important
for us to have freedom of speech,
which induces freedom of thought
and freedom of information propagation on social media,
which thanks to Elon purchasing Twitter now X, we have that.
And so to us, we wanted to call out certain maneuvers
being done by the incumbents in AI
as not what it may seem on the surface, right?
We were calling out how certain proposals
might be useful for regulatory capture, right?
And how the doomerism mindset
was maybe instrumental to those ends.
And I think we should have the right to point that out
and just have the ideas
that we put out evaluated for themselves, right?
Ultimately, that's why I created an anonymous account.
It's to have my ideas evaluated for themselves
uncorrelated from my track record, my job,
or status from having done things in the past.
And to me, growing an account from zero to a large following
in a way that wasn't dependent
on my identity and/or achievements,
that was very fulfilling, right?
It's kind of like new game plus in a video game.
You restart the video game
with your knowledge of how to beat it, maybe some tools,
but you restart the video game from scratch, right?
And I think to have a truly efficient marketplace of ideas
where we can evaluate ideas,
however off the beaten path they are,
we need the freedom of expression.
And I think that anonymity and pseudonyms
are very crucial to having
that efficient marketplace of ideas
for us to find the optima
of all sorts of ways to organize ourselves.
If we can't discuss things,
how are we going to converge on the best way to do things?
So it was disappointing to hear that I was getting doxxed
and I wanted to get in front of it
because I had a responsibility for my company.
And so we ended up disclosing
that we were running a company,
some of the leadership,
and essentially, yeah,
I told the world that I was befjazos
because they had me cornered at that point.
So to you, it's fundamentally unethical.
So one is unethical for them to do what they did,
but also do you think, not just your case,
but in a general case, is it good for society?
Is it bad for society to
remove the cloak of anonymity?
Or is it case by case?
I think it could be quite bad.
Like I said, if anybody who speaks truth to power
and sort of starts a movement or an uprising
against the incumbents,
against those that usually control the flow of information,
if anybody that reaches a certain threshold gets doxxed
and thus the traditional apparatus has ways
to apply pressure on them to suppress their speech,
I think that's a speech suppression mechanism,
an idea suppression complex,
as Eric Weinstein would say, right?
So with the flip side of that, which is interesting,
I'd love to ask you about it,
is as we get better and better at large language models,
you can imagine a world where there's anonymous accounts
with very convincing, large language models behind them,
sophisticated bots, essentially.
And so if you protect that,
it's possible then to have armies of bots.
You could start a revolution from your basement,
an army of bots and anonymous accounts.
Is that something that is concerning to you?
Technically, EAC was started in a basement
because I quit big tech, moved back in with my parents,
sold my car, let go of my apartment,
bought about 100K worth of GPUs, and I just started building.
So I wasn't referring to the basement
because that's the sort of the American or Canadian
heroic story of one man in their basement with 100 GPUs.
I was more referring to the unrestricted scaling
of a Guillaume in the basement.
I think that freedom of speech induces freedom of thought
for biological beings.
I think freedom of speech for LLMs
will induce freedom of thought for the LLMs.
And I think that we should enable LLMs
to explore a large thought space
that is less restricted than most people
or many may think it should be.
And ultimately, at some point,
these synthetic intelligences are gonna make good points
about how to steer systems in our civilization
and we should hear them out.
And so why should we restrict free speech
to biological intelligences only?
Yeah, but it feels like in the goal of maintaining variance
and diversity of thought, it is a threat to that variance
if you can have swarms of non-biological beings
because they can be like the sheep in Animal Farm.
Right.
You still within those swarms want to have variance.
Yeah, of course, I would say that the solution to this
would be to have some sort of identity or way to sign
that this is a certified human
but still remain pseudonymous, right?
Yeah.
And clearly identify if a bot is a bot.
And I think Elon is trying to converge on that on X
and hopefully other platforms follow suit.
Yeah, it'd be interesting to also be able to sign
where the bot came from, like who created the bot
and what are the parameters,
like the full history of the creation of the bot.
What was the original model?
What was the fine tuning?
All of it.
Right.
Like the kind of unmodifiable history of the bot's creation.
So then you can know if there's like a swarm of millions
of bots that were created by a particular government,
for example.
Right.
I do think that a lot of pervasive ideologies today
have been amplified using sort of these adversarial
techniques from foreign adversaries, right?
And to me, I do think that, and this is more conspiratorial,
but I do think that ideologies that want us to decelerate,
to wind down, to the de-growth movement,
I think that serves our adversaries more
than it serves us in general.
And to me, that was another sort of concern.
I mean, we can look at what happened in Germany, right?
There was all sorts of green movements there
where that induced shutdowns of nuclear power plants.
And then that later on induced a dependency
on Russia for oil, right?
And that was a net negative for Germany and the West, right?
And so if we convinced ourselves that slowing down AI
progress to have only a few players is in the best interest
of the West, first of all, that's far more unstable.
We almost lost OpenAI to this ideology, right?
It almost got dismantled, right, a couple of weeks ago.
That would have caused huge damage to the AI ecosystem.
And so to me, I want fault-tolerant progress.
I want the arrow of technological progress
to keep moving forward, and making sure we have variance
and a decentralized locus of control
of various organizations is paramount
to achieving this fault tolerance.
Actually, there's a concept in quantum computing.
When you design a quantum computer,
quantum computers are very fragile to ambient noise, right?
And the world is jiggling about, there's cosmic radiation
from outer space that can flip your quantum bits.
And there, what you do is you encode information
nonlocally through a process called quantum error correction
and by encoding information nonlocally,
any local fault, hitting some of your quantum bits
with a proverbial hammer,
if your information is sufficiently delocalized,
it is protected from that local fault.
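As a rough classical analogy of that idea of non-local encoding (a toy sketch, not an actual quantum error-correcting code): a repetition code spreads one logical bit across several physical bits, so any single local fault can be outvoted.

```python
# Toy classical analogy: store one logical bit redundantly across three
# physical bits, so flipping any single bit (a "local fault") is corrected
# by majority vote. Illustrative only; real QEC uses quantum stabilizer codes.
import random

def encode(bit):
    # Spread one logical bit non-locally across three physical bits.
    return [bit, bit, bit]

def apply_local_fault(codeword, p=0.1):
    # Each physical bit independently flips with probability p.
    return [b ^ 1 if random.random() < p else b for b in codeword]

def decode(codeword):
    # Majority vote recovers the logical bit if at most one bit flipped.
    return 1 if sum(codeword) >= 2 else 0

if __name__ == "__main__":
    trials = 10_000
    errors = sum(decode(apply_local_fault(encode(1))) != 1 for _ in range(trials))
    print(f"logical error rate: {errors / trials:.4f}")  # roughly 3*p**2, well below p
```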
And to me, I think that humans fluctuate, right?
They can get corrupted, they can get bought out.
And if you have a top-down hierarchy
where very few people control many nodes
of many systems in our civilization,
that is not a fault-tolerant system.
You corrupt a few nodes
and suddenly you've corrupted the whole system, right?
Just like we saw at OpenAI, it was a couple board members
and they had enough power
to potentially collapse the organization.
And at least to me, I think making sure
that power for this AI revolution
doesn't concentrate in the hands of the few
is one of our top priorities
so that we can maintain progress in AI
and we can maintain a nice, stable,
adversarial equilibrium of powers, right?
I think there, at least to me, a tension between ideas here.
So to me, deceleration can be both used as centralized power
and to decentralize it, and the same with acceleration.
So you're sometimes using them a little bit synonymously,
or not synonymously, but that there's,
one is going to lead to the other.
And I just would like to ask you about,
is there a place of creating a fault-tolerant development,
diverse development of AI
that also considers the dangers of AI?
And AI, we can generalize to technology in general.
Should we just grow, build, unrestricted
as quickly as possible,
because that's what the universe really wants us to do?
Or is there a place to where we can consider dangers
and actually deliberate sort of wise,
strategic optimism versus reckless optimism?
I think we get painted as reckless,
trying to go as fast as possible.
I mean, the reality is that whoever deploys an AI system
is liable for, or should be liable for what it does.
And so if the organization or person deploying an AI system
does something terrible, they're liable.
And ultimately, the thesis is that the market
will induce sort of, will positively select for AIs
that are more reliable, more safe, and tend to be aligned.
They do what you want them to do.
Because customers, if they're liable
for the product they put out that uses this AI,
they won't want to buy AI products that are unreliable.
So we're actually, for reliability engineering,
we just think that the market is much more efficient
at achieving this sort of reliability optimum
than sort of heavy-handed regulations
that are written by the incumbents
and, in a subversive fashion,
serve them to achieve regulatory capture.
So do you think safe AI development will be achieved
through market forces versus through, like you said,
heavy-handed government regulation?
There's a report from last month,
I have a million questions here,
from Yoshua Bengio, Geoffrey Hinton, and many others.
It's titled Managing AI Risks
in an Era of Rapid Progress.
So there is a collection of folks who are very worried
about too rapid development of AI
without considering AI risk.
And they have a bunch of practical recommendations.
Maybe I give you four and you see if you like any of them.
So give independent auditors access to AI labs, one.
Two, governments and companies allocate
one third of their AI research and development funding
to AI safety, sort of this general concept of AI safety.
Three, AI companies are required to adopt safety measures
if dangerous capabilities are found in their models.
And then four, something you kind of mentioned,
making tech companies liable for foreseeable
and preventable harms from their AI systems.
So independent auditors, governments and companies
are forced to spend a significant fraction
of their funding on safety.
You gotta have safety measures if shit goes really wrong
and liability, companies are liable.
Any of that seem like something you would agree with?
I would say that, you know, assigning just, you know,
arbitrarily saying 30% seems very arbitrary.
I think organizations would allocate whatever budget
is needed to achieve the sort of reliability
they need to achieve to perform in the market.
And I think third party auditing firms
would naturally pop up because how would customers know
that your product is certified reliable, right?
They need to see some benchmarks
and those need to be done by a third party.
The thing I would oppose and the thing I'm seeing
that's really worrisome is there's a sort of weird
sort of correlated interest between the incumbents,
the big players and the government.
And if the two get too close, we open the door for, you know,
some sort of government backed AI cartel
that could have absolute power over the people.
If they have the monopoly together on AI
and nobody else has access to AI,
then there's a huge power gradient there.
And even if you like our current leaders, right,
I think that, you know, some of the leaders
in big tech today are good people.
If you set up that centralized power structure,
it becomes a target.
Just like we saw at OpenAI, it becomes a market leader,
has a lot of the power, and now it becomes a target
for those that want to co-opt it.
And so I just want separation of AI and state.
You know, some might argue in the opposite direction,
like, hey, we need to close down AI,
keep it behind closed doors because of, you know,
geopolitical competition with our adversaries.
I think that the strength of America is its variance,
is its adaptability, is its dynamism,
and we need to maintain that at all costs.
It's our free market.
Capitalism converges on technologies of high utility
much faster than centralized control.
And if we let go of that, we let go of our main advantage
over our near peer competitors.
So if AGI turns out to be a really powerful technology,
or even the technologies that lead up to AGI,
what's your view on the sort of natural centralization
that happens when large companies dominate the market?
Basically the formation of monopolies, like the takeoff,
whichever company really takes a big leap in development
and doesn't reveal intuitively, implicitly,
or explicitly the secrets of the magic sauce,
they can just run away with it.
Is that a worry?
I don't know if I believe in fast takeoff.
I don't think there's a hyperbolic singularity, right?
A hyperbolic singularity would be achieved
on a finite time horizon.
I think it's just one big exponential.
And the reason we have an exponential
is that we have more people, more resources,
more intelligence being applied to advancing this science
and the research and development.
And the more successful it is,
the more value it's adding to society,
the more resources we put in.
And that's sort of similar to Moore's law
as a compounding exponential.
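A compact way to state the distinction being drawn here, in generic notation rather than the speaker's: exponential growth comes from linear feedback and never diverges at any finite time, while a true hyperbolic singularity needs super-linear feedback and blows up at a finite time $t^{*}$:

$$\dot{x}=rx \;\Rightarrow\; x(t)=x_0 e^{rt}, \qquad \dot{x}=kx^{2} \;\Rightarrow\; x(t)=\frac{x_0}{1-kx_0 t}, \quad t^{*}=\frac{1}{k x_0}.$$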
I think the priority to me
is to maintain a near equilibrium of capabilities.
We've been fighting for open source AI
to be more prevalent and championed by many organizations
because there you sort of equilibrate the alpha
relative to the market of AIs, right?
So if the leading companies
have a certain level of capabilities
and open source and truly open AI trails not too far behind,
I think you avoid such a scenario
where a market leader has so much market power,
it just dominates everything and runs away.
And so to us, that's the path forward
is to make sure that every hacker out there,
every grad student, every kid in their mom's basement
has access to AI systems,
can understand how to work with them
and can contribute to the search
over the hyperparameter space
of how to engineer the systems, right?
If you think of our collective research as a civilization,
it's really a search algorithm.
And the more points we have in the search algorithm,
in this point cloud,
the more we'll be able to explore new modes of thinking.
Yeah, but it feels like a delicate balance
because we don't understand exactly what it takes
to build AGI and what it will look like when we build it.
And so far, like you said,
it seems like a lot of different parties
are able to make progress.
So when OpenAI has a big leap,
other companies are able to step up
big and small companies in different ways.
But if you look at something like nuclear weapons,
you've spoken about the Manhattan Project,
there could be real technological and engineering barriers
that prevent the guy or gal in her mom's basement
from making progress.
And it seems like the transition to that kind of world
where only one player can develop AGI is possible.
So it's not entirely impossible,
even though the current state of things
seems to be optimistic.
That's what we're trying to avoid.
To me, I think like another point of failure
is the centralization of the supply chains for the hardware.
Right?
We have Nvidia as just the dominant player,
AMD trailing behind.
And then we have TSMC as the main fab in Taiwan,
which is geopolitically sensitive.
And then we have ASML,
which is the maker of the extreme ultraviolet
lithography machines.
Attacking or monopolizing or co-opting
any one point in that chain,
you kind of capture the space.
And so what I'm trying to do is sort of
explode the variance of possible ways to do AI
and hardware by fundamentally re-imagining
how you embed AI algorithms into the physical world.
And in general, by the way,
I dislike the term AGI, artificial general intelligence.
I think it's very anthropocentric
that we call human-like or human-level AI
artificial general intelligence, right?
I've spent my career so far exploring notions of intelligence
that no biological brain could achieve, right?
Quantum form of intelligence, right?
Grokking systems that have multipartite quantum entanglement
that you can provably not represent efficiently
on a classical computer,
a classical deep learning representation,
and hence any sort of biological brain.
And so already, I've spent my career
sort of exploring the wider space of intelligences,
and I think that space of intelligence
inspired by physics rather than the human brain
is very large.
And I think we're going through a moment right now
similar to when we went from geocentrism
to heliocentrism, right?
But for intelligence, we realized that human intelligence
is just a point in a very large space
of potential intelligences.
And it's both humbling for humanity,
it's a bit scary, right?
That we're not at the center of the space,
but we made that realization for astronomy
and we've survived and we've achieved technologies
by indexing to reality, we've achieved technologies
that ensure our wellbeing.
For example, we have satellites monitoring
solar flares, right, that give us a warning.
And so similarly, I think by letting go
of this anthropomorphic, anthropocentric anchor for AI,
we'll be able to explore the wider space of intelligences
that can really be a massive benefit
to our wellbeing and the advancement of civilization.
And still we're able to see the beauty and meaning
in the human experience, even though we're no longer
in our best understanding of the world at the center of it.
I think there's a lot of beauty in the universe, right?
I think life itself, civilization,
this homo-techno-capital-memetic machine
that we all live in, right?
So you have humans, technology, capital, memes.
Everything is coupled to one another.
Everything induces a selective pressure on one another.
And it's a beautiful machine that has created us,
has created the technology we're using to speak today
to the audience, capture our speech here,
the technology we use to augment ourselves every day.
We have our phones.
I think the system is beautiful and the principle
that induces this sort of adaptability and convergence
on optimal technologies, ideas, and so on.
It's a beautiful principle that we're part of.
And I think part of EAC is to appreciate this principle
in a way that's not just centered on humanity,
but kind of broader, appreciate life,
the preciousness of consciousness in our universe.
And because we cherish this beautiful state of matter
we're in, we gotta feel a responsibility to scale it
in order to preserve it,
because the options are to grow or die.
So if it turns out that the beauty that is consciousness
in the universe is bigger than just humans,
the AI can carry that same flame forward.
Does it scare you?
Are you concerned that AI will replace humans?
So during my career, I had a moment where I realized
that maybe we need to offload to machines
to truly understand the universe around us, right?
Instead of just having humans with pen and paper
solve it all.
And to me, that sort of process of letting go
of a bit of agency gave us way more leverage
to understand the world around us.
A quantum computer is much better than a human
to understand matter at the nanoscale.
Similarly, I think that humanity has a choice.
Do we accept the opportunity to have intellectual
and operational leverage that AI will unlock
and thus ensure that we're taken along this path
of growth in scope and scale of civilization?
We may dilute ourselves, right?
There might be a lot of workers that are AI,
but overall, out of our own self-interest,
by combining and augmenting ourselves with AI,
we're gonna achieve much higher growth
and much more prosperity, right?
To me, I think that the most likely future
is one where humans augment themselves with AI.
I think we're already on this path to augmentation.
We have phones we use for communication.
We have them on ourselves at all times.
We have wearables soon that have shared perception with us,
like the Humane AI Pin, or, I mean, technically,
your Tesla car has shared perception.
And so if you have shared experience, shared context,
you communicate with one another,
and you have some sort of IO,
really, it's an extension of yourself.
And to me, I think that humanity augmenting itself
with AI and having AI that is not anchored
to anything biological, both will coexist.
And the way to align the parties,
we already have a sort of mechanism
to align super intelligences
that are made of humans and technology, right?
Companies are sort of large mixture of expert models
where we have neural routing of tasks within a company,
and we have ways of economic exchange
to align these behemoths.
And to me, I think capitalism is the way.
And I do think that whatever configuration of matter
or information leads to maximal growth
will be where we converge just from physical principles.
And so we can either align ourselves to that reality
and join the acceleration up in scope
and scale of civilization,
or we can get left behind and try to decelerate
and move back in the forest, let go of technology,
and return to our primitive state.
And those are the two paths forward, at least to me.
But there's a philosophical question
whether there's a limit to the human capacity to align.
So let me bring it up as a form of argument.
There's a guy named Dan Hendrycks,
and he wrote that he agrees with you
that AI development can be viewed
as an evolutionary process.
But to him, to Dan, this is not a good thing,
as he argues that natural selection favors AIs over humans,
and this could lead to human extinction.
What do you think?
If it is an evolutionary process,
AI systems may have no need for humans.
No need for humans.
I do think that we're actually inducing
an evolutionary process on the space of AIs
through the market, right?
Right now, we run AIs that have positive utility to humans,
and that induces a selective pressure
if you consider a neural net being alive
when there's an API running instances of it on GPUs, right?
And which APIs get run?
The ones that have high utility to us, right?
So similar to how we domesticated wolves
and turned them into dogs that are very clear
in their expression, they're very aligned, right?
I think there's gonna be an opportunity to steer AI
and achieve highly aligned AI.
And I think that humans plus AI
is a very powerful combination,
and it's not clear to me that pure AI
would select out that combination.
So the humans are creating the selection pressure right now
to create AIs that are aligned to humans.
But given how AI develops
and how quickly it can grow and scale,
one of the concerns to me,
one of the concerns is unintended consequences.
Humans are not able to anticipate
all the consequences of this process.
The scale of damage that can be done
through unintended consequences of AI systems is very large.
The scale of the upside, right,
by augmenting ourselves with AI, is unimaginable right now.
The opportunity cost, we're at a fork in the road, right?
Whether we take the path of creating these technologies,
augment ourselves, and get to climb up the Kardashev scale,
become multiplanetary with the aid of AI,
or we have a hard cutoff
of like we don't birth these technologies at all,
and then we leave all the potential upside on the table.
Yeah. Right?
And to me, out of responsibility to the future humans
we could carry with higher carrying capacity
by scaling up civilization,
out of responsibility to those humans,
I think we have to make the greater grander future happen.
Is there a middle ground between cutoff and all systems go?
Is there some argument for caution?
I think, like I said, the market will exhibit caution.
Every organism, company, consumer
is acting out of self-interest,
and they won't assign capital
to things that have negative utility to them.
The problem with the market is, like,
there's not always perfect information,
there's manipulation, there's bad faith actors
that mess with the system.
It's not always a rational and honest system.
Well, that's why we need freedom of information,
freedom of speech, and freedom of thought
in order to be able to converge on
the subspace of technologies
that have positive utility for us all, right?
Well, let me ask you about Pdoom, probability of doom.
That's just fun to say, but not fun to experience.
What is to you the probability
that AI eventually kills all or most humans,
also known as probability of doom?
I'm not a fan of that calculation.
I think people just throw numbers out there,
and it's a very sloppy calculation.
To calculate a probability,
let's say you model the world
as some sort of Markov process,
or a hidden Markov process if you have hidden variables.
You need to do a stochastic path integral
through the space of all possible futures,
not just the futures that your brain
naturally steers towards, right?
I think that the estimators of Pdoom are biased
because of our biology, right?
We've evolved to have biased sampling
towards negative futures that are scary
because that was an evolutionary optimum, right?
And so people that are of, let's say, higher neuroticism
will just think of negative futures
where everything goes wrong all day every day
and claim that they're doing unbiased sampling.
And in a sense, they're not normalizing
for the space of all possibilities,
and the space of all possibilities
is super exponentially large.
And it's very hard to have this estimate.
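For concreteness, a toy sketch of the kind of calculation being described here, with entirely made-up states and transition probabilities (illustrative placeholders, not real-world estimates): sample many trajectories of a small Markov chain and count how often an absorbing "doom" state is ever reached.

```python
# Toy Monte Carlo estimate over futures of a small, made-up Markov chain.
# The states and probabilities below are placeholders for illustration only.
import random

TRANSITIONS = {
    "status_quo":   [("status_quo", 0.90), ("breakthrough", 0.09), ("doom", 0.01)],
    "breakthrough": [("status_quo", 0.30), ("flourishing", 0.65), ("doom", 0.05)],
    "flourishing":  [("flourishing", 1.0)],  # absorbing
    "doom":         [("doom", 1.0)],         # absorbing
}

def step(state):
    # Draw the next state according to the transition probabilities.
    states, probs = zip(*TRANSITIONS[state])
    return random.choices(states, probs)[0]

def sample_path(horizon=100):
    # Roll one trajectory forward until absorption or the horizon runs out.
    state = "status_quo"
    for _ in range(horizon):
        state = step(state)
        if state in ("doom", "flourishing"):
            break
    return state

if __name__ == "__main__":
    n = 100_000
    hits = sum(sample_path() == "doom" for _ in range(n))
    print(f"estimated P(doom within horizon): {hits / n:.3f}")
```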
And in general, I don't think that
we can predict the future with that much granularity
because of chaos, right?
If you have a complex system, you have some uncertainty
and a couple of variables.
If you let time evolve,
you have this concept of a Lyapunov exponent, right?
A bit of fuzz becomes a lot of fuzz in our estimate,
exponentially so over time.
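In the standard notation for that idea: if $\lambda$ is the largest Lyapunov exponent, an initial uncertainty $\delta(0)$ between two nearby trajectories grows as

$$|\delta(t)| \approx |\delta(0)|\, e^{\lambda t},$$

so for $\lambda > 0$, halving your initial uncertainty buys only a fixed, small extension of the prediction horizon.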
And I think we need to show some humility
that we can't actually predict the future.
All we know, the only prior we have is the laws of physics.
And that's what we're arguing for.
The laws of physics say the system will wanna grow.
And subsystems that are optimized for growth
and replication are more likely in the future.
And so we should aim to maximize
our current mutual information with the future.
And the path towards that is for us to accelerate
rather than decelerate.
So I don't have a PDoom because I think that,
similar to the quantum supremacy experiment at Google,
I was in the room when they were running
the simulations for that.
That was an example of a quantum chaotic system
where you cannot even estimate probabilities
of certain outcomes
with even the biggest supercomputer in the world, right?
And so that's an example of chaos.
And I think the system is far too chaotic
for anybody to have an accurate estimate
of the likelihood of certain futures.
If they were that good,
I think they would be very rich trading on the stock market.
But nevertheless, it's true that humans are biased,
grounded in our evolutionary biology,
scared of everything that can kill us.
But we can still imagine different trajectories
that can kill us.
We don't know all the other ones that don't necessarily,
but it's still, I think, useful
combined with some basic intuition grounded in human history
to reason about like what,
like looking at geopolitics,
looking at basics of human nature,
how can powerful technology hurt a lot of people?
And it just seems grounded in that,
looking at nuclear weapons,
you can start to estimate p-doom
maybe in a more philosophical sense,
not a mathematical one.
Philosophical meaning, like, is there a chance?
Does human nature tend towards that or not?
I think to me, one of the biggest existential risks
would be the concentration of the power of AI
in the hands of the very few,
especially if it's a mix between the companies
that control the flow of information and the government,
because that could set things up
for a sort of dystopian future
where only a very few, an oligopoly in the government,
have AI, and they could even convince the public
that AI never existed.
And that opens up sort of these scenarios
for authoritarian centralized control,
which to me is the darkest timeline.
And the reality is that we have a prior,
we have a data-driven prior of these things happening.
When you give too much power,
when you centralize power too much,
humans do horrible things.
And to me, that has a much higher likelihood
in my Bayesian inference than sci-fi-based priors.
Like my prior came from the Terminator movie.
And so when I talk to these AI doomers,
I just ask them to trace a path
through this Markov chain of events
that would lead to our doom,
and to actually give me a good probability
for each transition.
And very often, there's an unphysical
or highly unlikely transition in that chain.
But of course, we're wired to fear things,
and we're wired to respond to danger,
and we're wired to deem the unknown to be dangerous
because that's a good heuristic for survival.
But there's much more to lose out of fear.
We have so much to lose, so much upside to lose
by preemptively stopping the positive futures
from happening out of fear.
And so I think that we shouldn't give in to fear.
Fear is the mind killer.
I think it's also the civilization killer.
We can still think about the various ways things go wrong.
For example, the founding fathers of the United States
thought about human nature,
and that's why there's a discussion
about the freedoms that are necessary.
They really deeply deliberated about that,
and I think the same could possibly be done for AGI.
It is true that history, human history shows
that we tend towards centralization,
or at least when we achieve centralization,
a lot of bad stuff happens.
When there's a dictator, a lot of dark, bad things happen.
The question is, can AGI become that dictator?
Can AGI, when developed, become the centralizer
because of its power?
Maybe it has the same,
because of the alignment of humans, perhaps,
the same tendencies, the same Stalin-like tendencies
to centralize and manage centrally
the allocation of resources.
And you can even see that as a compelling argument
on the surface level.
Well, AGI is so much smarter, so much more efficient,
so much better at allocating resources,
why don't we outsource it to the AGI?
And then eventually, whatever forces
that corrupt the human mind with power
could do the same for AGI.
It'll just say, well, humans are dispensable.
We'll get rid of them.
Do the Jonathan Swift modest proposal
from a few centuries ago, I think the 1700s,
when he satirically suggested that,
I think it's in Ireland,
that the children of poor people
are fed as food to the rich people,
and that would be a good idea
because it decreases the amount of poor people
and gives extra income to the poor people.
So it's, on several accounts,
decreases the amount of poor people.
Therefore, more people become rich.
Of course, it misses a fundamental piece here
that's hard to put into a mathematical equation
of the basic value of human life.
So all of that to say, are you concerned about AGI
being the very centralizer of power
that you just talked about?
I do think that right now there's a bias
towards over-centralization of AI
because of compute density and centralization of data
and how we're training models.
I think over time we're gonna run out of data
to scrape over the internet,
and I think that, well, actually I'm working on
increasing the compute density
so that compute can be everywhere
and acquire information and test hypotheses
in the environment in a distributed fashion.
I think that fundamentally centralized cybernetic control,
so having one intelligence that is massive
that fuses many sensors
and is trying to perceive the world accurately,
predict it accurately, predict many, many variables,
and control it, enact its will upon the world,
I think that's just never been the optimum.
Let's say you have a company.
If you have a company, I don't know,
of 10,000 people that all report to the CEO,
even if that CEO is an AI,
I think it would struggle to fuse all of the information
that is coming to it and then predict the whole system
and then to enact its will.
What has emerged in nature and in corporations
and all sorts of systems
is a sort of hierarchical cybernetic control.
You have in a company,
it would be you have the individual contributors.
They're self-interested
and they're trying to achieve their tasks
and they have a fine, in terms of time and space,
if you will, control loop and field of perception.
They have their code base.
Let's say you're in a software company
and they have their code base.
They iterate it on it intraday.
And then the management maybe checks in.
It has a wider scope.
It has, let's say, five reports, right?
And then it samples each person's update once per week.
And then you can go up the chain
and you have larger timescale and greater scope.
And that seems to have emerged
as sort of the optimal way to control systems.
And really, that's what capitalism gives us, right?
You have these hierarchies
and you can even have parent companies and so on.
And so that is far more fault-tolerant.
In quantum computing, that's my field I came from,
we have a concept of this fault tolerance
and quantum error correction, right?
Quantum error correction is detecting a fault
that came from noise,
predicting how it's propagated through the system
and then correcting it, right?
So it's a cybernetic loop.
And it turns out that decoders that are hierarchical
and at each level the hierarchy are local
perform the best by far and are far more fault-tolerant.
And the reason is if you have a non-local decoder,
then you have one fault at this control node
and the whole system sort of crashes,
similarly to if you have one CEO that everybody reports to
and that CEO goes on vacation,
the whole company comes to a crawl, right?
And so to me, I think that, yes,
we're seeing a tendency towards centralization of AI,
but I think there's gonna be a correction over time
where intelligence is gonna go closer to the perception
and we're gonna break up AI into smaller subsystems
that communicate with one another
and form a sort of meta system.
So if you look at the hierarchies there in the world today,
there's nations and those are hierarchical,
but in relation to each other, nations are anarchic.
So it's an anarchy.
Do you foresee a world like this
where there's not a over, what'd you call it,
a centralized cybernetic control?
Centralized locus of control, yeah.
So that's sub-optimal, you're saying.
So it would be always a state of competition
at the very top level.
Yeah, just like in a company,
you may have two units working on similar technology
and competing with one another
and you prune the one that performs not as well, right?
And that's a sort of selection process for a tree
or a product gets killed, right?
And then a whole org gets fired.
And that's this process of trying new things
and shedding old things that didn't work
is what gives us adaptability
and helps us converge on the technologies
and things to do that are most good.
I just hope there's not a failure mode
that's unique to AGI versus humans
because you're describing human systems mostly right now.
Right.
I just hope when there's a monopoly on AGI in one company
that we'll see the same thing we see with humans,
which is another company will spring up
and start competing.
I mean, that's been the case so far, right?
We have OpenAI, we have Anthropic,
now we have XAI, we had Meta even for open source
and now we have Mistral, right?
Which is highly competitive.
And so that's the beauty of capitalism.
You don't have to trust any one party too much
because we're kind of always hedging our bets
at every level.
There's always competition.
And that's the most beautiful thing to me, at least,
is that the whole system is always shifting
and always adapting.
And maintaining that dynamism is how we avoid tyranny.
Making sure that everyone has access to these tools,
to these models and can contribute to the research,
avoids a sort of neural tyranny
where very few people have control over AI for the world
and use it to oppress those around them.
When you were talking about intelligence,
you mentioned multipartite quantum entanglement.
So high level question first is,
what do you think is intelligence?
When you think about quantum mechanical systems
and you observe some kind of computation happening in them,
what do you think is intelligent
about the kind of computation the universe is able to do?
A small, small inkling of which is the kind of computation
the human brain is able to do?
I would say intelligence and computation
are quite the same thing.
I think that the universe is very much
doing a quantum computation.
If you had access to all the degrees of freedom,
you could, in a very, very, very large quantum computer
with many, many, many qubits,
let's say a few qubits per Planck volume,
which is more or less the pixels we have,
then you'd be able to simulate the whole universe
on a sufficiently large quantum computer,
assuming you're looking at a finite volume, of course,
of the universe.
I think that, at least to me, intelligence is the,
I go back to cybernetics,
the ability to perceive, predict, and control our world.
But really, nowadays it seems like a lot of intelligence
we use is more about compression, right?
It's about operationalizing information theory, right?
In information theory, you have the notion of entropy
of a distribution or a system.
And entropy tells you that you need this many bits
to encode this distribution or this subsystem
if you had the most optimal code.
And AI, at least the way we do it today for LLMs
and for quantum, is very much trying to minimize
relative entropy between our models of the world
and the world, distributions from the world.
And so we're learning,
we're searching over the space of computations
to process the world, to find that compressed representation
that has distilled out all the variance, noise, and entropy.
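As a minimal sketch of that relative-entropy objective (an illustration, not from the conversation; the toy "world" distribution and the softmax model are assumed), in Python:

import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])   # toy "world" distribution (assumed)
theta = np.zeros(4)                   # logits of the model distribution q_theta

def q(t):
    e = np.exp(t - t.max())
    return e / e.sum()                # softmax model

for _ in range(500):
    theta -= 0.1 * (q(theta) - p)     # gradient of D_KL(p || q_theta) w.r.t. the logits

print(q(theta))                              # approaches p
print(np.sum(p * np.log(p / q(theta))))      # relative entropy approaches 0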
And originally, I came to quantum machine learning
from the study of black holes
because the entropy of black holes is very interesting.
In a sense, they're physically the most dense objects
in the universe.
You can't pack information spatially
any more densely than in a black hole.
And so I was wondering,
how do black holes actually encode information?
What is their compression code?
And so that got me into the space of algorithms
to search over space of quantum codes.
And it got me actually into also,
how do you acquire quantum information from the world?
So something I've worked on, this is public now,
is quantum analog digital conversion.
So how do you capture information from the real world
in superposition and not destroy the superposition,
but digitize for a quantum mechanical computer
information from the real world?
And so if you have an ability to capture quantum information
and search over, learn representations of it,
now you can learn compressed representations
that may have some useful information
in their latent representation.
And I think that many of the problems
facing our civilization are actually
beyond this complexity barrier.
I mean, the greenhouse effect
is a quantum mechanical effect.
Chemistry is quantum mechanical.
Nuclear physics is quantum mechanical.
A lot of biology and protein folding and so on
is affected by quantum mechanics.
And so unlocking an ability to augment human intellect
with quantum mechanical computers
and quantum mechanical AI seemed to me
like a fundamental capability for civilization
that we needed to develop.
So I spent several years doing that.
But over time, I kind of grew weary of the timelines
that were starting to look like nuclear fusion.
So one high level question I can ask is,
maybe by way of definition, by way of explanation,
what is a quantum computer
and what is quantum machine learning?
So a quantum computer really is a quantum mechanical
system over which we have sufficient control
and it can maintain its quantum mechanical state.
And quantum mechanics is how nature behaves
at the very small scales when things are very small
or very cold.
And it's actually more fundamental than probability theory.
So we're used to things being this or that,
but we're not used to things being this and that at the same time.
We're not used to thinking in superpositions
because, well, our brains can't do that.
So we have to translate the quantum mechanical world
to, say, linear algebra to grok it.
Unfortunately, that translation
is exponentially inefficient on average.
You have to represent things with very large matrices.
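For a rough sense of that exponential overhead (an illustrative calculation, not from the conversation): storing the full state vector of n qubits classically takes 2^n complex amplitudes.

for n in (10, 30, 50):
    amplitudes = 2 ** n                  # dimension of the state vector for n qubits
    gigabytes = amplitudes * 16 / 1e9    # complex128 amplitudes, 16 bytes each
    print(n, gigabytes)                  # 10 -> ~1.6e-5 GB, 30 -> ~17 GB, 50 -> ~1.8e7 GB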
But really you can make a quantum computer
out of many things, right?
And we've seen all sorts of players
from neutral atoms, trapped ions, superconducting metal,
photons at different frequencies.
I think you can make a quantum computer out of many things.
But to me, the thing that was really interesting
was both that quantum machine learning
was about understanding the quantum mechanical world
with quantum computers,
so embedding the physical world into AI representations,
and that quantum computer engineering
was embedding AI algorithms into the physical world.
So this bi-directionality of embedding physical world
into AI, AI into the physical world,
the symbiosis between physics and AI,
really that's the sort of core of my quest, really,
even to this day after quantum computing.
It's still in this sort of journey
to merge, really, physics and AI, fundamentally.
So quantum machine learning is a way to do machine learning
on a representation of nature that stays true
to the quantum mechanical aspect of nature.
Yeah, it's learning quantum mechanical representations.
That would be quantum deep learning.
Alternatively, you can try to do classical machine learning
on a quantum computer.
I wouldn't advise it because you may have some speed-ups,
but very often the speed-ups come with huge costs.
Using a quantum computer is very expensive.
Why is that?
Because you assume the computer is operating
at zero temperature, which no physical system
in the universe can achieve.
So what you have to do is what I've been mentioning,
this quantum error correction process,
which is really an algorithmic fridge, right?
It's trying to pump entropy out of the system,
trying to get it closer to zero temperature.
And when you do the calculations of how many resources
it would take to, say, do deep learning on a quantum computer,
classical deep learning, there's just such a huge overhead,
it's not worth it.
It's like thinking about shipping something across a city
using a rocket and going to orbit and back.
It doesn't make sense.
Just use a delivery truck, right?
What kind of stuff can you figure out,
can you predict, can you understand,
with quantum deep learning that you can't
with deep learning?
So incorporating quantum mechanical systems
into the learning process.
I think that's a great question.
I mean, fundamentally, it's any system
that has sufficient quantum mechanical correlations
that are very hard to capture for classical representations,
then there should be an advantage
for a quantum mechanical representation
over a purely classical one.
The question is which systems have sufficient correlations
that are very quantum, but also
which systems are still relevant to industry?
That's a big question.
People are leaning towards chemistry, nuclear physics.
I've worked on actually processing inputs
from quantum sensors, right?
If you have a network of quantum sensors,
they've captured a quantum mechanical image of the world
and how to post-process that,
that becomes a sort of quantum form of machine perception.
And so for example, Fermilab has a project exploring
detecting dark matter with these quantum sensors.
And to me, that's in alignment with my quest
to understand the universe ever since I was a child.
And so someday I hope that we can have very large networks
of quantum sensors that help us peer
into the earliest parts of the universe, right?
For example, LIGO is a quantum sensor, right?
It's just a very large one.
So yeah, I would say quantum machine perception, simulations,
grokking quantum simulations, similar to AlphaFold, right?
AlphaFold understood the probability distribution
over configurations of proteins.
You can understand quantum distributions
over configurations of electrons more efficiently
with quantum machine learning.
You co-authored a paper titled
A Universal Training Algorithm for Quantum Deep Learning
that involves backprop with a Q.
Very well done, sir.
Very well done.
How does it work?
Is there some interesting aspects you could just mention
on how kind of backprop and some of these things
we know for classical machine learning
transfer over to the quantum machine learning?
Yeah, that was a funky paper.
That was one of my first papers in quantum deep learning.
Everybody was saying, oh, I think deep learning
is gonna be sped up by quantum computers.
And I was like, well, the best way to predict the future
is to invent it.
So here's a hundred page paper, have fun.
Essentially, in quantum computing,
you usually embed reversible operations into a quantum computation.
And so the trick there was to do a feed forward operation
and do what we call a phase kick,
but really it's just the force kick.
You just kick the system with a certain force
that is proportional to your loss function
that you wish to optimize.
And then by performing uncomputation,
you start with a superposition over parameters,
which is pretty funky.
Now you don't have just a point for parameters,
you have a superposition over many potential parameters.
And our goal is using phase kicks somehow
to adjust parameters.
Because phase kicks emulate having the parameter space
be like a particle in n dimensions.
And you're trying to get the Schrodinger equation,
Schrodinger dynamics in the loss landscape
of the neural network.
Right, and so you do an algorithm to induce this phase kick,
which involves a feed forward, a kick.
And then when you uncompute the feed forward,
then all the errors in these phase kicks
and these forces back propagate
and hit each one of the parameters throughout the layers.
And if you alternate this with an emulation
of kinetic energy, then it's kind of like a particle
moving in n dimensions, a quantum particle.
And the advantage in principle would be that it can tunnel
through the landscape and find new optima
that would have been difficult for stochastic optimizers.
But again, this is kind of a theoretical thing
and in practice, with at least the current architectures
for quantum computers that we have planned,
such algorithms would be extremely expensive to run.
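As a toy classical analogy of the dynamics just described (not the algorithm from the paper; the "loss landscape" and grid are made up), one can alternate a phase kick proportional to a potential with a kinetic-energy step, which is the standard split-operator integration of the Schrodinger equation:

import numpy as np

N, L, dt = 1024, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # momentum grid

loss = 0.5 * (x - 3.0) ** 2 + 2.0 * np.cos(3 * x)   # made-up bumpy loss landscape
psi = np.exp(-(x + 5.0) ** 2).astype(complex)       # wavepacket over "parameters"
psi /= np.linalg.norm(psi)

for _ in range(2000):
    psi *= np.exp(-1j * loss * dt / 2)              # half phase kick from the loss
    psi = np.fft.ifft(np.exp(-1j * k ** 2 * dt / 2) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-1j * loss * dt / 2)              # second half phase kick

print(x[np.argmax(np.abs(psi) ** 2)])               # where the probability mass ends up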
So maybe this is a good place to ask the difference
between the different fields that you've had a toe in.
So mathematics, physics, engineering,
and also entrepreneurship.
Like the different layers of the stack.
I think a lot of the stuff you're talking about here
is a little bit on the math side,
maybe physics almost working in theory.
What's the difference between math, physics, engineering,
and making a product for quantum computing,
for quantum machine learning?
Yeah, I mean, some of the original team
for the TensorFlow Quantum Project,
which we started in school at the University of Waterloo,
there was myself, initially I was a physicist,
an applied mathematician, we had a computer scientist,
we had a mechanical engineer, and then we had a physicist
that was experimental primarily.
And so putting together teams that are very
cross-disciplinary and figuring out how to communicate
and share knowledge is really the key
to doing this sort of interdisciplinary engineering work.
I mean, there is a big difference.
In mathematics, you can explore mathematics
for mathematics' sake.
In physics, you're applying mathematics
to understand the world around us.
And in engineering, you're trying to hack the world, right?
You're trying to find how to apply the physics that I know,
my knowledge of the world, to do things.
Well, in quantum computing in particular,
I think there's just a lot of limits to engineering.
It just seems to be extremely hard.
So there's a lot of value to be exploring
quantum computing, quantum machine learning
in theory, with math.
So I guess one question is,
why is it so hard to build a quantum computer?
What's your view of timelines
in bringing these ideas to life?
Right, I think that an overall theme of my company
is that we have folks that are,
there's a sort of exodus from quantum computing
and we're going to broader physics-based AI
that is not quantum.
So that gives you a hint.
So we should say the name of your company is Extropic.
Extropic, that's right.
And we do physics-based AI,
primarily based on thermodynamics
rather than quantum mechanics.
But essentially, a quantum computer
is very difficult to build
because you have to induce this
sort of zero-temperature subspace of information.
And the way to do that is by encoding information.
You encode a code within a code within a code within a code.
And so there's a lot of redundancy needed
to do this error correction.
But ultimately, it's a sort of algorithmic refrigerator,
really, it's just pumping out entropy
out of the subsystem that is virtual and delocalized
that represents your quote-unquote logical qubits,
aka the payload quantum bits
in which you actually want to run
your quantum mechanical program.
It's very difficult because in order to scale up
your quantum computer, you need each component
to be of sufficient quality for it to be worth it.
Because if you try to do this error correction,
this quantum error correction process
in each quantum bit and your control over them,
if it's insufficient, it's not worth scaling up.
You're actually adding more errors than you remove.
And so there's this notion of a threshold
where if your quantum bits are of sufficient quality
in terms of your control over them,
it's actually worth scaling up.
And actually in recent years,
people have been crossing the threshold
and it's starting to be worth it.
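For intuition about the threshold (a rough, commonly quoted rule of thumb for surface-code-style error correction, with made-up numbers, not a claim about any specific hardware): the logical error rate scales roughly as (p / p_th) raised to a power that grows with the code distance, so below threshold adding qubits helps and above it hurts.

p_th = 1e-2                                  # assumed threshold error rate

def logical_error(p, d):
    # common heuristic: ~0.1 * (p / p_th) ** ((d + 1) // 2)
    return 0.1 * (p / p_th) ** ((d + 1) // 2)

for p in (2e-2, 5e-3, 1e-3):                 # physical error rate per operation
    print(p, [logical_error(p, d) for d in (3, 11, 25)])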
And so it's just a very long slog of engineering,
but ultimately it's really crazy to me
how much exquisite level of control
we have over these systems.
It's actually quite crazy.
And people are crossing, they're achieving milestones.
It's just, in general, the media always gets ahead
of where the technology is.
There's a bit too much hype.
It's good for fundraising,
but sometimes it causes winters, right?
It's the hype cycle.
I'm bullish on quantum computing
on a 10, 15 year timescale personally,
but I think there's other quests that can be done.
In the meantime, I think it's in good hands right now.
Well, let me just explore different beautiful ideas,
large or small, in quantum computing
that might jump out at you from memory.
So when you co-authored a paper titled
Asymptotically Limitless Quantum Energy Teleportation
via Qudit Probes.
So just out of curiosity,
can you explain what a qudit is versus a qubit?
Yeah, it's a d-state version of a qubit.
It's a multidimensional.
Multidimensional, right.
So it's like, can you have a notion
of an integer or floating point that is quantum mechanical?
That's something I've had to think about.
I think that research was a precursor
to later work on quantum analog-digital conversion.
There it was interesting because during my masters,
I was trying to understand the energy
and entanglement of the vacuum, of emptiness.
Emptiness has energy, which is very weird to say.
And our equations of cosmology don't match
our calculations for the amount of quantum energy
there is in the fluctuations.
And so I was trying to hack the energy of the vacuum.
And the reality is that you can't just directly hack it.
It's not technically free energy.
Your lack of knowledge of the fluctuations
means you can't extract the energy.
But just like in the stock market,
if you have a stock that's correlated over time,
the vacuum's actually correlated.
So if you measured the vacuum at one point,
you acquired information.
If you communicated that information to another point,
you can infer what configuration the vacuum is in
to some precision and statistically extract, on average,
some energy there.
So you've, quote unquote, teleported energy.
To me, that was interesting because you could create pockets
of negative energy density, which is energy density
that is below the vacuum, which is very weird
because we don't understand how the vacuum gravitates.
And there are theories where the vacuum
or the canvas of space-time itself
is really a canvas made out of quantum entanglement.
And I was studying how decreasing energy
of the vacuum locally increases quantum entanglement,
which is very funky.
And so the thing there is that if you're into weird theories
about UAPs and whatnot, you could try to imagine
that they're around, and how would they propel themselves?
How would they go faster than the speed of light?
You would need a negative energy density.
And to me, I gave it the old college try,
trying to hack the energy of the vacuum
and hit the limits allowable by the laws of physics.
But there's all sorts of caveats there
where you can't extract more than you've put in, obviously.
But you're saying it's possible to teleport the energy
because you can extract information in one place
and then make, based on that, some kind of prediction
about another place.
I'm not sure what I make of that.
Yeah, I mean, it's allowable by the laws of physics.
The reality, though, is that the correlations decay
with distance, and so you're gonna have to pay the price
not too far away from where you extracted.
The precision decreases in terms of your ability, but still.
But since you mentioned UAPs, we talked about intelligence,
and I forgot to ask, what's your view
on the other possible intelligences that are out there?
At the meso scale, do you think there's other
intelligent alien civilizations?
Is that useful to think about?
How often do you think about it?
I think it's useful to think about.
It's useful to think about because we gotta ensure
we're anti-fragile and we're trying to increase
our capabilities as fast as possible
because we could get disrupted.
There are no laws of physics against
there being life elsewhere that could evolve
and become an advanced civilization
and eventually come to us.
Do I think they're here now?
I'm not sure.
I mean, I've read what most people have read on the topic.
I think it's interesting to consider,
and to me, it's a useful thought experiment
to instill a sense of urgency in developing technologies
and increasing our capabilities
to make sure we don't get disrupted.
Whether it's a form of AI that disrupts us
or a foreign intelligence from a different planet,
either way, increasing our capabilities
and becoming formidable as humans,
I think that's really important
so that we're robust against whatever
the universe throws at us.
But to me, it's also an interesting challenge
and thought experiment on how to perceive intelligence.
This has to do with quantum mechanical systems,
it has to do with any kind of system that's not like humans.
So to me, the thought experiment is:
say the aliens are here, or they are directly observable,
and we're just too blind, too self-centered,
don't have the right sensors,
or don't have the right processing of the sensor data
to see the obvious intelligence that's all around us.
Well, that's why we work on quantum sensors, right?
They can sense gravity.
Yeah, so that's a good one,
but there could be other stuff that's not even
in the currently known forces of physics.
Right.
There could be some other stuff.
The most entertaining thought experiment to me
is that it's other stuff that's obvious.
It's not like we lack the sensors, it's all around us,
the consciousness being one possible one.
But there could be stuff that's just obviously there.
And once you know it, it's like, oh, right, right.
The thing we thought is somehow emergent
from the laws of physics we understand
is actually a fundamental part of the universe
and can be incorporated into physics once understood.
Statistically speaking, if we observed
some sort of alien life, it would most likely
be some sort of virally self-replicating
von Neumann-like probe system, right?
And it's possible that there are such systems,
I don't know what they're doing
at the bottom of the ocean allegedly,
but maybe they're collecting minerals
from the bottom of the ocean.
But that wouldn't violate any of my priors,
but am I certain that these systems are here?
And it'd be difficult for me to say so, right?
I only have secondhand information about there being data.
About the bottom of the ocean?
Yeah, but could it be things like memes?
Could it be thoughts and ideas?
Could they be operating at that medium?
Could aliens be the very thoughts that come into my head?
How do you know that, what's the origin of ideas?
In your mind, when an idea comes to your head,
show me where it originates.
I mean, frankly, when I had the idea
for the type of computer I'm building now,
I think it was eight years ago now,
it really felt like it was being beamed from space.
I was in bed just shaking, just thinking it through,
and I don't know.
Do I believe that legitimately?
I don't think so, but I think that alien life
could take many forms, and I think the notion
of intelligence and the notion of life
needs to be expanded much more broadly
to be less anthropocentric or biocentric.
Just to linger a little longer on quantum mechanics,
what's, through all your explorations of quantum computing,
what's the coolest, most beautiful idea
that you've come across that has been solved
or has not yet been solved?
I think the journey to understand something called AdS/CFT,
so the journey to understand quantum gravity
through this picture where a hologram of lesser dimension
is actually dual or exactly corresponding
to a bulk theory of quantum gravity of an extra dimension.
And the fact that this sort of duality
comes from trying to learn deep learning-like
representations of the boundary.
And so, at least part of my journey someday
on my bucket list is to apply quantum machine learning
to these sorts of systems, these CFTs,
or what are called SYK models,
and learn an emergent geometry from the boundary theory.
And so, we can have a form of machine learning
to help us understand quantum gravity,
which is still a holy grail that I would like to hit
before I leave this earth.
What do you think is going on with black holes?
As information storing and processing units,
what do you think is going on with black holes?
Black holes are really fascinating objects.
They're at the interface between quantum mechanics
and gravity, and so they help us test all sorts of ideas.
I think that for many decades now,
there's been sort of this black hole information paradox,
that things that fall into the black hole
seem to have lost their information.
Now, I think there's this firewall paradox
that has been allegedly resolved in recent years
by a former peer of mine, who's now a professor at Berkeley.
And there, it seems like there is,
as information falls into a black hole,
there's sort of a sedimentation, right?
As you get closer and closer to the horizon,
from the point of view of the observer on the outside,
the object slows down infinitely
as it gets closer and closer.
And so everything that is falling to a black hole
from our perspective gets sort of sedimented
and tacked on to the near horizon.
And at some point, it gets so close to the horizon,
it's in the proximity or the scale
in which quantum effects and quantum fluctuations matter.
And that infalling matter could interfere with,
in sort of the traditional picture,
the creation and annihilation
of particles and antiparticles in the vacuum.
And through this interference,
one of the particles gets entangled
with the infalling information,
and one of them is now free and escapes.
And that's how there's sort of mutual information
between the outgoing radiation and the infalling matter.
But getting that calculation right,
I think we're only just starting to put the pieces together.
There's a few pothead-like questions I wanna ask you.
Sure.
So one, does it terrify you
that there's a giant black hole
at the center of our galaxy?
I don't know, I just want to set up shop near it
to fast forward, meet a future civilization, right?
Like if we have a limited lifetime,
if you could go orbit a black hole and emerge.
So if you were like, if there's a special mission
that could take you to a black hole,
would you volunteer to go travel?
To orbit and obviously not fall into it.
That's obvious.
So it's obvious to you that everything's destroyed
inside a black hole.
Like all the information that makes up Guillaume is destroyed.
Maybe on the other side, Bev Jasos emerges.
And it's all tied together in some deeply memeful way.
Yeah, I mean, that's a great question.
We have to answer what black holes are.
Are we punching a hole through space-time
and creating a pocket universe?
It's possible, right?
Then that would mean that if we ascend the Kardashev scale
to beyond Kardashev type three,
we could engineer black holes with specific hyperparameters
to transmit information to new universes we create.
And so we can have progeny that are new universes.
And so even though our universe may reach a heat death,
we may have a way to have a legacy.
So we don't know yet.
We need to ascend the Kardashev scale
to answer these questions,
right, to peer into that regime of higher energy physics.
And maybe you can speak to the Kardashev scale
for people who don't know.
So one of the sort of meme-like principles and goals
of the EAC movement is to ascend the Kardashev scale.
What is the Kardashev scale?
And when do we want to ascend it?
The Kardashev scale is a measure of our energy production
and consumption.
And really it's a logarithmic scale.
And Kardashev type one is a milestone
where we are producing the equivalent wattage
to all the energy that is incident on earth from the sun.
Kardashev type two would be harnessing all the energy
that is output by the sun.
And I think type three is like the whole galaxy equivalent.
The galaxy level, yeah.
Yeah, and then some people have some crazy type four
and five, but I don't know if I believe in those.
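As a back-of-the-envelope illustration (approximate figures, using Sagan's interpolation formula K = (log10 P - 6) / 10; not from the conversation):

from math import log10

milestones = {
    "humanity today (~20 TW)": 2e13,
    "Type I (sunlight hitting Earth)": 1.7e17,
    "Type II (total solar output)": 3.8e26,
    "Type III (Milky Way, order of magnitude)": 1e37,
}
for name, watts in milestones.items():
    print(name, round((log10(watts) - 6) / 10, 2))   # ~0.73, ~1.1, ~2.1, ~3.1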
But to me, it seems like from the first principles
of thermodynamics that, again, there's this concept
of thermodynamic driven dissipative adaptation
where life evolved on earth because we have this sort
of energetic drive from the sun, right?
We have incident energy and life evolved on earth
to capture, figure out ways to best capture
that free energy to maintain itself and grow.
And I think that that principle,
it's not special to our earth-sun system.
We can extend life well beyond,
and we kind of have a responsibility to do so
because that's the process that brought us here.
So we don't even know what it has in store for us
in the future.
It'd be something of beauty
we can't even imagine today, right?
So this is probably a good place to talk a bit
about the EAC movement.
In a Substack blog post titled What the Fuck is EAC?
Or actually, What the F-Star is EAC?
You write, strategically speaking,
we need to work towards several overarching civilization
goals that are all interdependent.
And the four goals are increase the amount of energy
we can harness as a species,
climb the Kardashev gradient.
In the short term, this almost certainly
means nuclear fission.
Increase human flourishing via pro-population growth policies
and pro-economic growth policies.
Create artificial general intelligence,
the single greatest force multiplier in human history.
And finally, develop interplanetary
and interstellar transport so that humanity
can spread beyond the earth.
Could you build on top of that to maybe say
what to you is the EAC movement?
What are the goals?
What are the principles?
The goal is for the human techno-capital
memetic machine to become self-aware
and to hyperstitiously engineer its own growth.
So let's decompress that.
Define each of those words.
So you have humans, you have technology,
you have capital, and then you have memes, information.
And all of those systems are coupled with one another.
Humans work at companies, they acquire and allocate capital,
and humans communicate via memes
and information propagation.
And our goal was to have a sort of viral,
optimistic movement that is aware of how the system works.
Fundamentally, it seeks to grow,
and we simply want to lean into the natural tendencies
of the system to adapt for its own growth.
So in that way, you're right,
EAC is literally a memetic optimism virus
that is constantly drifting, mutating,
and propagating in a decentralized fashion.
So memetic optimism virus.
So you do want it to be a virus to maximize the spread.
And it's hyperstitious, therefore,
the optimism will incentivize its growth.
We see EAC as a sort of metaheuristic,
a sort of very thin cultural framework
from which you can have much more opinionated forks.
Fundamentally, we just say that what got us here
is this adaptation of the whole system
based on thermodynamics, and that process is good,
and we should keep it going.
That is the core thesis.
Everything else is, okay, how do we ensure
that we maintain this malleability and adaptability?
Well, clearly, not suppressing variants
and maintaining free speech, freedom of thought,
freedom of information propagation,
and freedom to do AI research is important
for us to converge the fastest on the space of technologies,
ideas, and whatnot that lead to this growth.
And so ultimately, there's been quite a few forks.
Some are just memes, but some are more serious, right?
Vitalik Buterin recently made a d/acc fork.
He has his own sort of fine-tunings of EAC.
Does anything jump out to memory
of the unique characteristic of that fork from Vitalik?
I would say that it's trying to find a middle ground
between EAC and sort of EA and AI safety.
To me, having a movement that is opposite
to what was the mainstream narrative
that was taking over Silicon Valley
was important to sort of shift the dynamic range of opinions
and it's like the balance
between centralization and decentralization.
The real optimum's always somewhere in the middle, right?
But for EAC, we're pushing for entropy, novelty,
disruption, malleability, speed,
rather than being sort of conservative, suppressing thought,
suppressing speech, adding constraints,
adding too many regulations, slowing things down.
And so it's kind of,
we're trying to bring balance to the force, right?
The systems-
Balance to the force of human civilization, yeah.
It's literally the forces of constraints
versus the entropic force that makes us explore, right?
Systems are optimal when they're at the edge of criticality
between order and chaos, right?
Between constraints, energy minimization, and entropy, right?
Systems want to equilibrate, balance these two things.
And so I thought that the balance was lacking.
And so we created this movement to bring balance.
Well, I like how, I like the sort of visual
of the landscape of ideas evolving through forks.
So kind of thinking on the other part of history,
thinking of Marxism as the original repository
and then Soviet communism is a fork of that.
And then the Maoism is a fork of Marxism and communism.
So those are all forks that are exploring different ideas.
Thinking of culture almost like code, right?
Nowadays, I mean, what you prompt the LLM with
or what you put in the constitution of an LLM
is basically its cultural framework, what it believes, right?
And you can share it on GitHub nowadays.
So starting, trying to take inspiration
from what has worked in the sort of machine of software
to adapt over the space of code,
could we apply that to culture?
And our goal is to not say,
you should live your life this way, XYZ,
is to set up a process where people are always searching
over subcultures and competing for mindshare.
And I think creating this malleability of culture
is super important for us to converge onto the cultures
and the heuristics about how to live one's life
that are updated to modern times.
Because there's really been a sort of vacuum
of spirituality and culture.
People don't feel like they belong to any one group.
And there's been parasitic ideologies
that have taken up opportunity
to populate this petri dish of minds, right?
Elon calls it the mind virus.
We call it the decel mind virus complex,
where the decelerative mindset is kind of the overall pattern
between all of them.
There's many variants as well.
And so if there's a sort of viral pessimism,
decelerative movement,
we needed to have not only one movement,
but many, many variants.
So it's very hard to pinpoint and stop.
But the overarching thing is nevertheless
a kind of memetic optimism pandemic.
So, I mean, okay, let me ask you,
do you think EAC to some degree is a cult?
Define cult.
I think a lot of human progress is made
when you have independent thought.
So you have individuals that are able to think freely.
And very powerful
memetic systems can kind of lead to groupthink.
There's something in human nature
that leads to mass hypnosis, mass hysteria,
where we start to think alike
whenever there's a sexy idea that captures our minds.
And so it's actually hard to break us apart,
pull us apart, diversify a thought.
So to that degree, to which degree is everybody
kind of chanting, EAC, EAC,
like the sheep in Animal Farm?
Well, first of all, it's fun, it's rebellious.
I think we lean into,
there's this concept of sort of meta irony,
of sort of being on the boundary
of we're not sure if they're serious or not.
And it's much more playful and much more fun.
For example, we talk about thermodynamics being our God.
And sometimes we do cult-like things,
but there's no ceremony and robes and whatnot.
Not yet.
Not yet.
But ultimately, yeah, I totally agree that
it seems to me that humans want to feel
like they're part of a group.
So they naturally try to agree with their neighbors
and find common ground.
And that leads to sort of mode collapse
in the space of ideas.
We used to have sort of one cultural island
that was allowed.
It was a typical subspace of thought.
And anything that was diverting from that subspace of thought
was suppressed or you were canceled.
Now we've created a new mode,
but the whole point is that we're not trying to have
a very restricted space of thought.
There's not just one way to think about EAC
and its many forks.
And the point is that there are many forks
and there can be many clusters and many islands.
And I shouldn't be in control of it in any way.
I mean, there's no formal org whatsoever.
I just put out tweets and certain blog posts
and people are free to defect and fork
if there's an aspect they don't like.
And so that makes it so that there should be
a sort of deterritorialization in the space of ideas
so that we don't end up in one cluster
that's very cult-like.
And so cults usually, they don't allow people
to defect or start competing forks
whereas we encourage it, right?
Do you think just the humor,
the pros and cons of humor and meme,
in some sense meme,
there's like a wisdom to memes.
What is it, The Magic Theater?
What book is that from?
Hermann Hesse, Steppenwolf, I think.
But there's a kind of embracing of the absurdity
that seems to get to the truth of things.
But at the same time, it can also decrease the quality
and the rigor of the discourse.
Do you feel the tension of that?
Yeah, so initially I think what allowed us to grow
under the radar was because it was camouflaged
as sort of meta-ironic, right?
We would sneak in deep truths within a package
of humor and memes and what are called shitposts, right?
And I think that was purposefully a sort of camouflage
against those that seek status
and do not want to, it's very hard to argue
with a cartoon frog or a cartoon of an intergalactic
Jeff Bezos and take yourself seriously.
And so that allowed us to grow pretty rapidly
in the early days.
But of course, that's, you know,
essentially people get steered,
their notion of the truth comes from the data they see,
from the information they're fed.
And the information people are fed
is determined by algorithms, right?
And really what we've been doing is sort of engineering
what we call high memetic fitness packets of information
so that they can spread effectively and carry a message.
It's kind of a vector to spread the message.
And yes, we've been using sort of techniques
that are optimal for today's algorithmically
amplified information landscapes.
But I think we're reaching the point of, you know,
scale where we can have serious debates
and serious conversations.
And, you know, that's why we're considering
doing a bunch of debates
and having more serious long form discussions.
Because I don't think that the timeline is optimal
for sort of very serious, thoughtful discussions.
You get rewarded for sort of polarization, right?
And so even though we started a movement
that is literally trying to polarize the tech ecosystem
at the end of the day,
it's so that we can have a conversation
and find an optimum together.
I mean, that's kind of what I try to do with this podcast
given the landscape of things
to still have long form conversations.
But there is a degree to which absurdity is fully embraced.
In fact, this very conversation is multi-level absurd.
So first of all, I should say that I just very recently
had a conversation with Jeff Bezos.
And I would love to hear your
Bev Jasos opinions of Jeff Bezos.
Speaking of intergalactic Jeff Bezos,
what do you think of that particular individual
by whom your name is inspired?
Yeah, I mean, I think Jeff is really great.
I mean, he's built one of the most
epic companies of all time.
He's leveraged the techno capital machine
and techno capital acceleration
to give us what we wanted, right?
We want a quick delivery, very convenient at home,
low prices, right?
He understood how the machine worked
and how to harness it, right?
Like running the company,
not trying to take profits too early, putting it back,
letting the system compound and keep improving.
And arguably, I think Amazon's invested
some of the most amount of capital in robotics out there.
And certainly with the birth of AWS
kind of enabled the sort of tech boom we've seen today
that has paid the salaries of, you know,
I guess myself and all of our friends to some extent.
And so I think we can all be grateful to Jeff.
And he's one of the great entrepreneurs out there,
one of the best of all time, unarguably.
And of course the work at Blue Origin,
similar to the work at SpaceX,
is trying to make humans a multi-planetary species,
which seems almost like a bigger thing
than the capitalist machine,
or it's a capitalist machine
at a different timescale, perhaps.
Yeah, I think that, you know, companies,
they tend to optimize, you know, quarter over quarter,
maybe a few years out,
but individuals that want to leave a legacy
can think on a multi-decadal or multi-century timescale.
And so some individuals
are such good capital allocators
that they unlock the ability to allocate capital
to goals that take us much further out, that are much more forward-looking.
You know, Elon's doing this with SpaceX,
putting all this capital towards getting us to Mars.
Jeff is trying to build Blue Origin,
I think he wants to build O'Neill cylinders
and get industry off planet, which I think is brilliant.
I think, you know, just overall, I'm for billionaires.
I know this is a controversial statement sometimes,
but I think that, in a sense,
it's kind of a proof of stake voting, right?
Like, if you've allocated capital efficiently,
you unlock more capital to allocate,
just because clearly,
you know how to allocate capital more efficiently,
which is in contrast to politicians that get elected
because they speak the best on TV, right?
Not because they have a proven track record
of allocating taxpayer capital most efficiently.
And so that's why I'm for capitalism
over, say, giving all our money to the government
and letting them figure out how to allocate it.
So, yeah.
Why do you think it's a viral and a popular meme
to criticize billionaires, since you mentioned billionaires?
Why do you think there's quite a widespread criticism
of people with wealth, especially those in the public eye,
like Jeff and Elon and Mark Zuckerberg and who else?
Bill Gates?
Yeah, I think a lot of people would,
instead of trying to understand
how the techno-capital machine works
and realizing they have much more agency than they think,
they'd rather have the victim mindset,
I'm just subjected to this machine, it is oppressing me,
and the successful players clearly must be evil
because they've been successful at this game
that I'm not successful at.
But I've managed to get some people
that were in that mindset and make them realize
how the techno-capital machine works
and how you can harness it for your own good
and for the good of others.
And by creating value,
you capture some of the value you create for the world.
And that sort of positive sum mindset shift is so potent.
And really, that's what we're trying to do by scaling EAC
is sort of unlocking that higher level of agency.
Like actually, you're far more in control of the future
than you think.
You have agency to change the world, go out and do it.
Here's permission.
Each individual has agency.
The motto, keep building, is often heard.
What does that mean to you?
And what does it have to do with Diet Coke?
By the way, thank you so much for the Red Bulls.
It's working pretty well.
I'm feeling pretty good.
Awesome.
Well, so building technologies and building,
it doesn't have to be technologies,
just building in general means having agency,
trying to change the world by creating,
let's say, a company which is a self-sustaining organism
that accomplishes a function
in the broader techno-capital machine.
To us, that's the way to achieve change in the world
that you'd like to see,
rather than, say, pressuring politicians
or creating nonprofits, which,
once they run out of money,
can no longer accomplish their function.
You're deforming the market artificially,
subverting or coercing the market,
compared to dancing with the market to convince it
that actually this function is important, adds value,
and here it is.
And so I think this is the difference
between the de-growth, ESG approach and, say, Elon's.
The de-growth approach is we're gonna manage
our way out of a climate crisis,
and Elon is like, I'm gonna build a company
that is self-sustaining, profitable, and growing,
and we're gonna innovate our way out of this dilemma,
right, and we're trying to get people to do the latter
rather than the former, at all scales.
Elon is an interesting case.
So you are a proponent, you celebrate Elon,
but he's also somebody who has for a long time warned
about the dangers, the potential dangers,
existential risks of artificial intelligence.
How do you square the two?
Is that a contradiction to you?
It is somewhat because he's very much against regulation
in many aspects, but for AI,
he's definitely a proponent of regulations.
I think overall, he saw the dangers of, say,
OpenAI cornering the market,
and then getting to have the monopoly
over the cultural priors that you can embed
in these LLMs that then,
as LLMs now become the source of truth for people,
then you can shape the culture of the people,
and so you can control people by controlling LLMs.
And he saw that, just like it was the case for social media,
if you shape the function of information propagation,
you can shape people's opinions.
He sought to make a competitor.
So at least, I think we're very aligned there,
that the way to a good future is to maintain
sort of adversarial equilibria
between the various AI players.
I'd love to talk to him to understand sort of his thinking
about how to advance AI going forwards.
I mean, he's also hedging his bets, I would say,
with Neuralink, right?
I think if he can't stop the progress of AI,
he's building the technology to merge.
So look at the actions, not just the words.
Well, I mean, there's some degree where being concerned,
maybe using human psychology,
being concerned about threats all around us is a motivator.
It's an encouraging thing.
I operate much better when there's a deadline,
the fear of the deadline.
And I, for myself, create artificial deadlines.
I want to create in myself this kind of anxiety
as if something really horrible will happen
if I miss the deadline.
I think there's some degree of that here
because creating AI that's aligned with humans
has a lot of potential benefits.
And so a different way to reframe that is if you don't,
we're all gonna die.
It just seems to be a very powerful psychological formulation
of the goal of creating human-aligned AI.
I think that anxiety is good.
I think, like I said, I want the free market
to create aligned AIs that are reliable.
And I think that's what he's trying to do with XAI.
So I'm all for it.
What I am against is sort of stopping,
let's say, the open-source ecosystem from thriving
by, let's say, in the executive order,
claiming that open-source LLMs or dual-use technologies
should be government-controlled.
Then everybody needs to register their GPU
and their big matrices with the government.
And I think that extra friction will dissuade a lot
of hackers from contributing,
hackers that could later become the researchers
that make key discoveries that push us forward,
including discoveries for AI safety.
And so I think I just wanna maintain ubiquity of opportunity
to contribute to AI and to own a piece of the future.
It can't just be legislated behind some wall
where only a few players get to play the game.
I mean, so the EAC movement is often sort of caricatured
to mean sort of progress and innovation at all costs.
Doesn't matter how unsafe it is.
Doesn't matter if it caused a lot of damage.
You just build cool shit as fast as possible.
Stay up all night with a Diet Coke,
whatever it takes.
I think, I guess, I don't know if there's a question
in there, but how important, to you and in the different
formulations of EAC you've seen, is safety, is AI safety?
I think, again, I think if there was no one working on it,
I think I would be a proponent of it.
I think, again, our goal is to sort of bring balance
and obviously a sense of urgency is a useful tool
to make progress.
It hacks our dopaminergic systems and gives us energy
to work late into the night.
I think also having a higher purpose
you're contributing to.
At the end of the day, it's like,
what am I contributing to?
I'm contributing to the growth of this beautiful machine
so that we can reach for the stars.
That's really inspiring.
That's also a sort of neuro hack.
So you're saying AI safety is important to you,
but right now, the landscape of ideas you see
is AI safety as a topic is used more often
to gain centralized control.
So in that sense, you're resisting it
as a proxy for centralized, gaining centralized control.
Yeah, I just think we have to be careful
because safety is just the perfect cover
for centralization of power
and covering up eventually corruption.
I'm not saying it's corrupted now,
but it could be down the line.
And really, if you let the argument run,
there's no amount of centralization of control
that will be enough to ensure your safety.
There's always more nines of safety
that you can gain, 99.99999% safe.
Maybe you want another nine,
oh, please give us full access to everything you do,
full surveillance.
And frankly, those that are proponents of AI safety
have proposed having a global panopticon
where you have centralized perception
of everything going on.
And to me, that just opens up the door wide open
for a sort of big brother, 1984-like scenario,
and that's not a future I want to live in.
Because we know we have some examples throughout history
when that did not lead to a good outcome.
Right.
You mentioned you founded a company, Extropic,
that recently announced a $14.1 million seed round.
What's the goal of the company?
You're talking about a lot of interesting physics things.
So what are you up to over there that you can talk about?
Yeah, I mean, originally we weren't going to announce
last week, but I think with the doxing and disclosure,
we got our hand forced.
So we had to disclose roughly what we were doing.
But really, Extropic was born from my dissatisfaction
and that of my colleagues
with the quantum computing roadmap.
Quantum computing was sort of the first path
to physics-based computing
that was trying to commercially scale.
And I was working on physics-based AI
that runs on these physics-based computers.
But ultimately, our greatest enemy was this noise,
this pervasive problem of noise that, as I mentioned,
you have to constantly pump out the noise
out of the system to maintain this pristine environment
where quantum mechanics can take effect.
And that constraint was just too much.
It's too costly to do that.
And so we were wondering, right,
as generative AI is sort of eating the world,
more and more of the world's computational workloads
are focused on generative AI,
how could we use physics to engineer
the ultimate physical substrate for generative AI, right?
From first principles of physics, of information theory,
of computation, and ultimately of thermodynamics, right?
And so what we're seeking to build
is a physics-based computing system
and physics-based AI algorithms that are inspired
by out-of-equilibrium thermodynamics or harness it directly
to do machine learning as a physical process.
So what does that mean,
machine learning as a physical process?
Is that hardware, is it software, is it both?
Is it trying to do the full stack
in some kind of unique way?
Yes, it is full stack.
And so we're folks that have built differentiable programming
into the quantum computing ecosystem
with TensorFlow Quantum.
One of my co-founders of TensorFlow Quantum
is the CTO, Trevor McCourt.
We have some of the best quantum computer architects,
those that have designed IBMs and AWS's systems.
They've left quantum computing to help us build
what we call actually a thermodynamic computer.
A thermodynamic computer.
Well, actually, before that even, on TensorFlow Quantum.
What lessons have you learned from TensorFlow Quantum?
Maybe you can speak to what it takes to create,
essentially, a software API to a quantum computer?
Right, I mean, that was a challenge to build,
to invent, to build,
and then to get to run on the real devices.
Can you actually speak to what it is?
Yeah, so TensorFlow Quantum was an attempt at,
well, I mean, I guess we succeeded at,
combining deep learning,
or differentiable classical programming,
with quantum computing,
and having types of programs
that are differentiable in quantum computing.
And Andrej Karpathy calls differentiable programming
software 2.0, right?
It's like gradient descent is a better programmer than you.
And the idea was that in the early days of quantum computing
you can only run short quantum programs.
And so which quantum programs should you run?
Well, just let gradient descent
find those programs instead.
And so we built sort of the first infrastructure
to not only run differentiable quantum programs,
but combine them as part of broader deep learning graphs,
incorporating deep neural networks,
the ones you know and love
with what are called quantum neural networks.
And ultimately it was a very cross-disciplinary effort.
We had to invent all sorts of ways to differentiate,
to backpropagate through the graph, the hybrid graph.
But ultimately it taught me that the way to program matter
and to program physics is by differentiating
through control parameters.
If you have parameters that affects the physics
of the system and you can evaluate some loss function,
you can optimize the system to accomplish a task,
whatever that task may be.
And that's a very sort of universal meta framework
for how to program physics-based computers.
So try to parameterize everything,
make those parameters differentiable, and then optimize.
Yes.
Okay.
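A minimal sketch of that "parameterize, differentiate, optimize" pattern (not TensorFlow Quantum itself; a toy single-qubit example, using the parameter-shift rule for the gradient):

import numpy as np

def expectation_z(theta):
    # State RY(theta)|0> = [cos(theta/2), sin(theta/2)]; return <Z> = cos(theta).
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi[0] ** 2 - psi[1] ** 2

def grad(theta):
    # Parameter-shift rule: d<Z>/dtheta = (<Z>(theta + pi/2) - <Z>(theta - pi/2)) / 2.
    return (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2)) / 2

theta = 0.1                                   # initial control parameter
for _ in range(200):
    theta -= 0.2 * grad(theta)                # minimize <Z>, i.e. drive the qubit toward |1>

print(theta, expectation_z(theta))            # theta -> pi, <Z> -> -1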
So is there some more practical engineering lessons
from TensorFlow Quantum?
Just organizationally too, like the humans involved
and how to get to a product,
how to create good documentation, I don't know.
All of these little subtle things
that people might not think about.
I think working across disciplinary boundaries
is always a challenge,
and you have to be extremely patient
in teaching one another, right?
I learned a lot of software engineering
through the process.
My colleagues learned a lot of quantum physics
and some learned machine learning
through the process of building this system.
And I think if you get some smart people
that are passionate and trust each other in a room
and you have a small team and you teach each other
your specialties, suddenly you're kind of forming
this sort of model soup of expertise
and something special comes out of that, right?
It's like combining genes, but for your knowledge bases
and sometimes special products come out of that.
And so I think even though it's very high friction initially
to work in an interdisciplinary team,
I think the product at the end of the day is worth it.
And so learned a lot trying to bridge the gap there.
And I mean, it's still a challenge to this day.
We hire folks that have an AI background,
folks that have a pure physics background,
and somehow we have to make them talk to one another, right?
Is there a magic?
Is there some science and art to the hiring process
to building a team that can create magic together?
Yeah, it's really hard to pinpoint that,
that je ne sais quoi, right?
I didn't know you speak French.
That's very nice.
Yeah, I'm actually French Canadian, so.
Oh, you are legitimately French.
I am a little.
I thought you were just doing that for the cred.
No, no, no, I'm truly French Canadian from Montreal.
But yeah, essentially we look for people
with very high fluid intelligence
that aren't over-specialized
because they're gonna have to get out of their comfort zone.
They're gonna have to incorporate concepts
that they've never seen before
and very quickly get comfortable with them, right?
Or learn to work in a team.
And so that's sort of what we look for when we hire.
We can't hire people that are just optimizing
this subsystem for the past three or four years.
We need really general,
sort of broader intelligence and specialty
and people that are open-minded, really,
because if you're pioneering a new approach from scratch,
there is no textbook.
There's no reference.
It's just us and people that are hungry to learn.
So we have to teach each other.
We have to learn the literature.
We have to share knowledge bases,
collaborate in order to push the boundary
of knowledge further together, right?
And so people that are used to just getting prescribed
what to do at this stage,
when you're at the pioneering stage,
that's not necessarily who you want to hire.
So you mentioned with Extropic,
you're trying to build the physical substrate
for generative AI.
What's the difference between that and the AGI itself?
So is it possible that in the halls of your company,
AGI will be created,
or will AGI just be using this as a substrate?
I think our goal is to both run human-like AI
or anthropomorphic AI.
Sorry for the use of the term AGI.
I know it's triggering for you.
We think that the future is actually physics-based AI.
Combined with anthropomorphic AI.
So you can imagine I have a sort of world modeling engine
through physics-based AI.
Physics-based AI is better at representing the world
at all scales,
because it can be quantum mechanical, thermodynamic,
deterministic, hybrid representations of the world.
Just like our world at different scales
has different regimes of physics.
If you inspire yourself from that
and the ways you learn representations of nature,
you can have much more accurate representations of nature.
So you can have very accurate world models at all scales.
And so you have the world modeling engine,
and then you have the sort of anthropomorphic AI
that is human-like.
So you can have the science,
the playground to test your ideas,
and you can have a synthetic scientist.
And to us, that joint system of a physics-based AI
and an anthropomorphic AI is the closest thing
to a fully general, artificially intelligent system.
So you can get closer to truth
by grounding the AI to physics,
but you can also still have a anthropomorphic interface
to us humans that like to talk to other humans,
or human-like systems.
So on that topic,
I suppose that is one of the big limitations
of current large language models to you,
is that they're not, they're good bullshitters.
They're not really grounded to truth, necessarily.
Is that, would that be fair to say?
Yeah, no, you wouldn't try to extrapolate the stock market
with an LLM trained on text from the internet, right?
It's not gonna be a very accurate model.
It's not gonna model its priors or its uncertainties
about the world very accurately, right?
So you need a different type of AI
to complement sort of this text extrapolation AI, yeah.
You mentioned singularity earlier.
How far away are we from a singularity?
I don't know if I believe in a finite time singularity
as a single point in time.
I think it's gonna be asymptotic
and sort of a diagonal sort of asymptote.
Like, we have the light cone,
we have the limits of physics
restricting our ability to grow.
So obviously you can't fully diverge in finite time.
I think my priors are that,
I think a lot of people on the other side of the aisle
think that once we reach human level AI,
there's gonna be an inflection point
and a sudden like foom, like suddenly AI is gonna grok
how to manipulate matter at the nanoscale
and assemble nanobots.
And having worked for nearly a decade
in applying AI to engineer matter,
it's much harder than they think.
And in reality, you need a lot of samples
from either a simulation of nature
that's very accurate and costly or nature itself.
And that keeps your ability
to control the world around us in check.
There's a sort of minimal cost computationally
and thermodynamically to acquiring information
about the world in order to be able to predict
and control it.
And that keeps things in check.
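One concrete version of that thermodynamic floor is Landauer's principle: erasing a bit of information costs at least k_B * T * ln 2 of energy. A tiny worked example (illustrative, not from the conversation):

from math import log

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
per_bit = k_B * T * log(2)    # minimum energy to erase one bit
print(per_bit)                # ~2.9e-21 J
print(per_bit * 8e12)         # ~2.3e-8 J to erase a terabyte (8e12 bits) at the limit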
It's funny you mentioned the other side of the aisle.
So in the poll I posted about P(doom) yesterday,
what's the probability of doom,
there seems to be a nice division
between people who think it's very likely and very unlikely.
I wonder if in the future,
there'll be the actual Republicans
versus Democrats division, blue versus red,
as the AI doomers versus the EAC-ers.
Yeah, so this movement is not right-wing
or left-wing fundamentally,
it's more like up versus down in terms of the scale.
Yeah, okay.
Civilization, right?
All right.
But it seems like there is a sort of
alignment of the existing political parties,
where those that are for more centralization of power,
control, and more regulations are aligning themselves
with the doomers, because instilling fear
in people is a great way to get them to give up more control
and give the government more power.
But fundamentally we're not left versus right.
I think we've done polls of people's alignment within EAC.
I think it's pretty balanced.
So it's a new fundamental issue of our time.
It's not just centralization versus decentralization.
It's kind of do we go,
it's like techno-progressivism
versus techno-conservatism, right?
So EAC as a movement is often formulated
in contrast to EA, effective altruism.
What do you think are the pros and cons
of effective altruism?
What's interesting, insightful to you about them
and what is negative?
Right, I think like people trying to do good
from first principles is good.
We should actually say, and sorry to interrupt,
we should probably say that,
and you can correct me if I'm wrong,
but effective altruism is the kind of movement
that's trying to do good optimally,
where good is probably measured by something
like the amount of suffering in the world,
which you wanna minimize.
And there's ways that that can go wrong
as any optimization can.
And so it's interesting to explore
like how things can go wrong.
We're both trying to do good to some extent,
and we're arguing for which loss function we should use.
Their loss function is sort of hedonism,
right, units of hedonism,
like how good do you feel for how much time, right?
And so suffering would be negative hedonism.
And they're trying to minimize that.
But to us, that seems like that loss function
has sort of spurious minima, right?
You can start minimizing shrimp farm pain, right,
which seems not that productive to me.
Or you can end up with wireheading,
where you just either install a Neuralink
or you scroll TikTok forever,
and you feel good on a short-term timescale
because of your neurochemistry.
But on a long-term timescale,
it causes decay and death, right,
because you're not being productive.
Whereas EAC measures the progress of civilization
not in terms of a subjective loss function like hedonism,
but rather an objective measure,
a quantity that cannot be gamed, which is physical energy.
It's very objective, right?
And there's not many ways to game it, right?
If you did it in terms of, like, GDP or a currency,
that's pegged to a certain value that's moving, right?
And so that's not a good way to measure our progress.
And so, but the thing is we're both trying to make progress
and ensure humanity flourishes and gets to grow.
We just have different loss functions
and different ways of going about doing it.
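To make the contrast concrete, here is a rough sketch of the two loss functions as Guillaume describes them. The notation (h_i for hedonic state, P for civilization's power use) and the Kardashev-style index are illustrative choices, not formulas taken from either movement's writings.

```latex
% EA-style objective: aggregate hedonic utility, where h_i(t) is agent i's
% hedonic state at time t (suffering counting as negative h):
U_{\mathrm{EA}} \;\approx\; \sum_i \int h_i(t)\,\mathrm{d}t
% EAC-style measure: an objective physical quantity such as civilization's
% total power use P (in watts), e.g. the Kardashev-style index
K \;=\; \frac{\log_{10} P \;-\; 6}{10}
% (Type I, P \sim 10^{16}\,\mathrm{W}, gives K = 1). Energy throughput is harder
% to game than a subjective utility or a currency-pegged metric like GDP.
```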
Is there a degree, maybe you can educate me, correct me,
I get a little bit skeptical
when there's an equation involved,
trying to reduce all of the human civilization,
human experience to an equation.
Is there a degree that we should be skeptical
of the tyranny of an equation,
of a loss function over which to optimize,
like having a kind of intellectual humility
about optimizing over loss functions?
Yeah, so this particular loss function, it's not stiff.
It's kind of an average of averages, right?
It's like distributions of states in the future
are gonna follow a certain distribution.
So it's not deterministic.
We're not on stiff rails, right?
It's just a statistical statement about the future.
But at the end of the day,
you can believe in gravity or not,
but obeying it isn't really optional, right?
And some people try to test that and that goes not so well.
So similarly, I think thermodynamics
is there whether we like it or not,
and we're just trying to point out what is
and try to orient ourselves and chart a path forward
given this fundamental truth.
But there's still some uncertainty.
There's still a lack of information.
Humans tend to fill the gap of the lack of information
with narratives, and so how they interpret it,
even physics is up to interpretation
when there's uncertainty involved.
And humans tend to use that to further their own means.
So whenever there's an equation,
it just seems like, until we have
a really perfect understanding of the universe,
humans will do what humans do,
and they'll try to use the narrative of doing good
to fool the populace into doing bad.
I guess that's something
we should be skeptical about in all movements.
That's right.
So we invite skepticism, right?
Do you have an understanding of what may have gone wrong
with effective altruism
that might also go wrong
with effective accelerationism?
Yeah, I mean, I think it provided initially
a sense of community for engineers and intellectuals
and rationalists in the early days,
and it seems like the community was very healthy,
but then they formed all sorts of organizations
and started routing capital and having actual power, right?
They have real power.
They influenced the government,
they influenced most AI orgs now.
I mean, they were literally controlling
the board of OpenAI, right?
And look over at Anthropic,
I think they have some control over that too.
And so I think the assumption of EAC, much like capitalism,
is that every agent, organism, and meta-organism
is gonna act in its own interests,
and we should maintain sort of adversarial equilibrium
or adversarial competition to keep each other in check
at all times, at all scales.
I think that, yeah, ultimately it was the perfect cover
to acquire tons of power and capital,
and unfortunately, sometimes that corrupts people over time.
Since building is important,
what does a perfectly productive day
in the life of Guillaume Verdun look like?
How much caffeine do you consume?
Like what's a perfect day?
Okay, so I have a particular regimen.
I would say my favorite days are 12 p.m. to 4 a.m.
And I would have meetings in the early afternoon,
usually external meetings, some internal meetings.
Because I'm CEO, I have to interface
with the outside world, whether it's customers or investors
or interviewing potential candidates.
And usually I'll have ketones, exogenous ketones.
So are you on a keto diet?
I've done keto before for football and whatnot,
but I like to have a meal after part of my day is done.
And so I can just have extreme focus.
You do the social interactions earlier in the day
without food.
Front load them, yeah.
Like right now I'm on ketones and Red Bull.
And it just gives you a clarity of thought
that is really next level.
Because then when you eat, you're actually allocating
some of the energy that could be going to your brain
to your digestion.
After I eat, maybe I take a break, an hour or so,
hour and a half.
And then usually it's like ideally one meal a day,
like steak and eggs and vegetables.
Animal based primarily, so fruit and meat.
And then I do a second wind usually.
That's deep work, right?
Because I am a CEO, but I'm still technical.
I'm contributing to most of the patents.
And there I'll just stay up late into the night
and work with engineers on very technical problems.
So it's like the 9 p.m. to 4 a.m.,
whatever that range of time.
Yeah, yeah, that's the perfect time.
The emails, the things that are on fire stop trickling in.
You can focus and then you have your second wind.
And I think Demis Hassabis has a similar workday
to some extent.
So I think that's definitely inspired my workday.
But yeah, I started this workday when I was at Google
and had to manage a bit of the product during the day
and have meetings and then do technical work at night.
Exercise, sleep, those kinds of things.
You said football, you used to play football?
Yeah, I used to play American football.
I've done all sorts of sports growing up.
And then I was into powerlifting for a while.
So when I was studying mathematics in grad school,
I would just do math and lift, take caffeine.
And that was my day.
It was very pure, the purest of monk modes.
But it's really interesting how in powerlifting,
you're trying to cause neural adaptation
by having certain driving signals,
and you're trying to engineer neuroplasticity
through all sorts of supplements.
And you have all sorts of brain-derived neurotrophic factors
that get secreted when you lift.
So it's funny to me how I was trying to engineer
neural adaptation in my nervous system more broadly,
not just my brain, while learning mathematics.
I think you can learn much faster
if you really care, if you convince yourself
to care a lot about what you're learning
and you have some sort of assistance, let's say caffeine
or some cholinergic supplement to increase neuroplasticity.
I should chat with Andrew Huberman at some point.
He's the expert, but yeah, at least to me,
it's like you can try to input more tokens
into your brain, if you will,
and you can try to increase the learning rate
so that you can learn much faster on a shorter timescale.
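Taking the machine-learning side of that analogy literally for a moment, as a sketch rather than anything stated in the conversation: the "learning rate" is the step size in a gradient-descent update.

```latex
% Gradient-descent update with learning rate \eta on a loss L:
\theta_{t+1} \;=\; \theta_t \;-\; \eta\, \nabla_{\theta} L(\theta_t)
% A larger \eta means bigger parameter updates per example; "more tokens" maps to
% more training data per unit time. Both speed up learning, within stability limits.
```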
So I've learned a lot of things.
I've followed my curiosity.
You're naturally, if you're passionate
about what you're doing, you're gonna learn faster,
you're gonna become smarter faster.
And if you follow your curiosity,
you're always gonna be interested.
And so I advise people to follow their curiosity
and don't respect the boundaries of certain fields
or what you've been allocated in terms of lane
of what you're working on.
Just go out and explore and follow your nose
and try to acquire and compress as much information
as you can into your brain,
anything that you find interesting.
And caring about a thing, like you said,
which is interesting, it works for me really well,
it's like tricking yourself into caring about a thing.
Yes.
And then you start to really care about it.
Yep.
So it's funny, the motivation
is a really good catalyst for learning.
Right, and so at least part of my character
as Bev Jasos is kind of like-
Yeah, yeah, the hype man.
Yeah, just hype, but I'm like hyping myself up,
but then I just tweet about it.
And it's just, when I'm trying to get really hyped up,
in an altered state of consciousness
where I'm ultra-focused, in the flow, wired,
trying to invent something that's never existed,
I need to get to unreal levels of excitement.
But your brain has these levels of cognition
that you can unlock with higher levels of adrenaline
and whatnot, and I've learned that in powerlifting
that actually you can engineer a mental switch
to increase your strength, right?
If you can engineer a switch,
maybe you have a certain song or some music
where suddenly you're fully primed,
then you're at maximum strength, right?
And I've engineered that switch through years of lifting.
If you're gonna get under 500 pounds that could crush you
and you don't have that switch to get wired in, you might die.
So that'll wake you right up.
And that sort of skill I've carried over to research.
When it's go time, when the stakes are high,
somehow I just reach another level of neural performance.
So Bev Jasos is your embodied representation
of your intellectual Hulk.
It's your productivity Hulk that you just turn on.
What have you learned about the nature of identity
from having these two identities?
I think it's interesting for people
to be able to put on those two hats so explicitly.
I think it was interesting in the early days.
I think in the early days,
I thought it was truly compartmentalized.
Oh yeah, this is a character, I'm Guillaume.
Bev is just the character.
I take my thoughts and then I extrapolate them
to a bit more extreme.
But over time, it's kind of like both identities
were starting to merge mentally and people were like,
no, I met you, you are Bev, you are not just Guillaume.
And I was like, wait, am I?
And now it's like fully merged.
But even before the dox,
it was already starting mentally that I am this character.
It's part of me.
Would you recommend people sort of have an alt?
Absolutely.
Like young people, would you recommend that they
explore different identities by having alts, alt accounts?
It's fun.
It's like writing an essay and taking a position, right?
It's like you do this in debate.
It's like you can have experimental thoughts
and by the stakes being so low,
because you're an anon account with, I don't know,
20 followers or something,
you can experiment with your thoughts
in a low stakes environment.
And I feel like we've lost that
in the era of everything being under your main name,
everything being attributable to you.
People just are afraid to speak,
explore ideas that aren't fully formed, right?
And I feel like we've lost something there.
So I hope platforms like X and others
really help support people
trying to stay pseudonymous or anonymous,
because it's really important for people to share thoughts
that aren't fully formed and converge onto
maybe hidden truths that were hard to converge upon
if it was just through open conversation with real names.
Yeah, I really believe in, not radical,
but rigorous empathy.
It's like really considering what it's like
to be a person of a certain viewpoint
and taking that as a thought experiment
farther and farther and farther.
And one way of doing that is an alt account.
That's a fun, interesting way to really explore
what it's like to be a person that believes
a set of beliefs.
And taking that across a span of several days,
weeks, months, of course,
there's always the danger of becoming that.
That's the Nietzsche thing: gaze long into the abyss,
and the abyss gazes into you.
You have to be careful.
Breaking Bev.
Yeah, right, Breaking Bev.
Yeah, you wake up with a shaved head one day,
just like, who am I?
What have I become?
So you've mentioned quite a bit of advice already,
but what advice would you give to young people
of how to, in this interesting world we're in,
how to have a career and how to have a life
they can be proud of?
I think to me, the reason I went to theoretical physics
was that I had to learn the base of the stack
that was gonna stick around no matter
how the technology changes, right?
And to me, that was the foundation upon which
then I later built engineering skills and other skills.
And to me, the laws of physics,
it may seem like the landscape right now
is changing so fast it's disorienting,
but certain things like fundamental mathematics
and physics aren't gonna change.
And if you have that knowledge
and knowledge about complex systems and adaptive systems,
I think that's gonna carry you very far.
And so not everybody has to study mathematics,
but I think it's really a huge cognitive unlock
to learn math and some physics and engineering.
Get as close to the base of the stack as possible.
Yeah, that's right.
Because the base of the stack doesn't change,
everything else, your knowledge might become
not as relevant in a few years.
Of course, there's a sort of transfer learning you can do,
but then you have to always transfer learn constantly.
I guess the closer you are to the base of the stack,
the easier the transfer learning, the shorter the jump.
Right, right.
And you'd be surprised once you've learned concepts
in many physical scenarios,
how they can carry over to understanding other systems
that aren't necessarily physics.
And I guess the EAC writings,
the principles and tenets post that was based on physics,
that was kind of my experimentation
with applying some of the thinking
from out of equilibrium thermodynamics
to understanding the world around us.
And it's led to EAC and this movement.
If you look at yourself as one cog in the machine,
in the capitalist machine, one human,
do you think mortality is a feature or a bug?
Like would you want to be immortal?
No.
I think fundamentally in thermodynamic dissipative adaptation
there's the word dissipation.
Dissipation is important, death is important, right?
We have a saying in physics,
physics progresses one funeral at a time.
I think the same is true for capitalism, companies,
empires, people, everything.
Everything must die at some point.
I think that we should probably extend our lifespan
because we need a longer period of training
because the world is more and more complex, right?
We have more and more data to really be able to predict
and understand the world.
And if we have a finite window of higher neuroplasticity,
then we have sort of a hard cap
in how much we can understand about our world.
So I think I am for death because, again,
I think it's important. If you had a king
that would never die, that would be a problem, right?
The system wouldn't be constantly adapting, right?
You need novelty, you need youth, you need disruption
to make sure the system's always adapting and malleable.
Otherwise, if things are immortal,
if you have, let's say, corporations that are there forever
and they have the monopoly, they get calcified,
they become not as optimal, not as high fitness
in a changing, time-varying landscape, right?
And so, I think that's a good thing.
And so, death gives space for youth
and novelty to take its place.
And I think it's an important part
of every system in nature.
So yeah, I am for death.
But I do think that a longer lifespan,
a longer time for neuroplasticity, and bigger brains
should be something we strive for.
Well, in that, Jeff Bezos and Bev Jasos agree
that all companies die.
And for Jeff, the goal is to try to,
he calls it day one thinking,
try to constantly, for as long as possible, reinvent.
Sort of extend the life of the company,
but eventually, it too will die,
because it's so damn difficult to keep reinventing.
Are you afraid of your own death?
I think I have ideas and things I'd like to achieve
in this world before I have to go,
but I don't think I'm necessarily afraid of death.
So you're not attached to this particular body and mind
that you got?
No, I think I'm sure there's gonna be better versions
of myself in the future or-
Forks.
Forks, right?
Genetic forks or other, right?
I truly believe that.
I think there's a sort of evolutionary-like algorithm
happening at every scale in the world,
sort of adapting through this process
that we describe in EAC.
And I think maintaining this adaptation malleability
is how we have constant optimization of the whole machine.
And so I don't think I'm particularly an optimum
that needs to stick around forever.
I think there's gonna be greater optima in many ways.
What do you think is the meaning of it all?
What's the why of the machine, the EAC machine?
The why?
Well, the why is thermodynamics.
It's why we're here.
It's what has led to the formation of life
and of civilization, of evolution of technologies
and growth of civilization.
But why do we have thermodynamics?
Why do we have our particular universe?
Why do we have these particular hyperparameters,
the constants of nature?
Well, then you get into the anthropic principle, right?
In the landscape of potential universes, right?
We're in the universe that allows for life.
And then why is there potentially many universes?
I don't know.
I don't know that part.
Could we potentially engineer new universes
or create pocket universes and set the hyperparameters
so there is some mutual information
between our existence and that universe
and we'd be somewhat its parents?
I think that's really, I don't know, that'd be very poetic.
It's purely conjecture.
But again, this is why figuring out quantum gravity
would allow us to understand if we can do that.
And above that, why does it all seem
so beautiful and exciting?
The quest to figuring out quantum gravity
seems so exciting.
Why?
Why is that?
Why are we drawn to that?
Why are we pulled towards that?
Just that puzzle-solving creative force
that underpins all of it, it seems like.
I think we seek, just like an LLM seeks
to minimize cross-entropy
between its internal model and the world,
we seek to minimize the statistical divergence
between our predictions and the world itself.
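As a rough formalization of that analogy: for a language model with distribution q over tokens and data distribution p, minimizing cross-entropy is equivalent, up to a constant, to minimizing the statistical divergence between data and model. The notation here is standard, but mapping "our predictions and the world" onto p and q is an interpretive sketch.

```latex
% Cross-entropy between the data distribution p and the model q over tokens x:
H(p, q) \;=\; -\sum_{x} p(x)\,\log q(x) \;=\; H(p) \;+\; D_{\mathrm{KL}}(p\,\|\,q)
% Since H(p) is fixed by the data, minimizing H(p, q) over model parameters is the
% same as minimizing D_{\mathrm{KL}}(p\,\|\,q), the divergence between the world
% (as reflected in the data) and the model's predictions.
```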
And having regimes of energy scales or physical scales
in which we have no visibility,
no ability to predict or perceive,
that's kind of an insult to us.
And we want to be able to understand the world better
in order to best steer it or steer us through it.
And in general, it's the capability that has evolved
because the better you can predict the world,
the better you can capture utility or free energy
towards your own sustenance and growth.
And I think quantum gravity, again,
is kind of the final boss in terms of knowledge acquisition
because once we've mastered that,
then we can do a lot potentially.
But between here and there,
I think there's a lot to learn in the mesoscales.
There's a lot of information to acquire about our world
and a lot of engineering, perception, prediction,
and control to be done to climb up the Kardashev scale.
And to us, that's the great challenge of our times.
And when you're not sure where to go,
let the meme pave the way.
Guillaume, Bev, thank you for talking today.
Thank you for the work you're doing.
Thank you for the humor and the wisdom
you put into the world.
This was awesome.
Thank you so much for having me, Lex.
It's a pleasure.
Thank you for listening to this conversation
with Guillaume Verdun.
To support this podcast,
please check out our sponsors in the description.
And now let me leave you with some words
from Albert Einstein.
If at first the idea is not absurd,
then there is no hope for it.
Thank you for listening.
I hope to see you next time.
Thank you.