The following is a conversation with Peter Singer,
professor of bioethics at Princeton University,
best known for his 1975 book, Animal Liberation,
that makes an ethical case against eating meat.
He has written brilliantly from an ethical perspective
on extreme poverty, euthanasia, human genetic selection,
sports doping, the sale of kidneys,
and generally happiness,
including in his books, Ethics in the Real World,
and The Life You Can Save.
He was a key popularizer of the effective altruism movement
and is generally considered
one of the most influential philosophers in the world.
Quick summary of the ads.
Two sponsors, Cash App and Masterclass.
Please consider supporting the podcast
by downloading Cash App and using code LEX Podcast
and signing up at masterclass.com slash LEX.
Click the links, buy the stuff.
It really is the best way to support the podcast
and the journey I'm on.
As you may know, I primarily eat a ketogenic
or carnivore diet,
which means that most of my diet is made up of meat.
I do not hunt the food I eat,
though one day I hope to.
I love fishing, for example.
Fishing and eating the fish I catch has always felt
much more honest than participating
in the supply chain of factory farming.
From an ethics perspective,
this part of my life has always had a cloud over it.
It makes me think.
I've tried a few times in my life
to reduce the amount of meat I eat,
but for some reason, whatever the makeup of my body,
whatever the way I practice the dieting I have,
I get a lot of mental and physical energy
and performance from eating meat.
So both intellectually and physically,
it's a continued journey for me.
I return to Peter's work often to reevaluate the ethics
of how I live this aspect of my life.
Let me also say that you may be a vegan
or you may be a meat eater
and may be upset by the words I say or Peter says,
but I ask for this podcast
and other episodes of this podcast
that you keep an open mind.
I may and probably will talk with people you disagree with.
Please try to really listen,
especially to people you disagree with
and give me and the world the gift
of being a participant in a patient, intelligent
and nuanced discourse.
If your instinct and desire is to be a voice of mockery
towards those you disagree with, please unsubscribe.
My source of joy and inspiration here
has been to be a part of a community
that thinks deeply and speaks with empathy and compassion.
That is what I hope to continue being a part of
and I hope you join as well.
If you enjoy this podcast, subscribe on YouTube,
review it with five stars on Apple Podcasts,
follow on Spotify, support on Patreon,
or connect with me on Twitter at Lex Fridman.
As usual, I'll do a few minutes of ads now
and never any ads in the middle
that can break the flow of the conversation.
This show is presented by Cash App,
the number one finance app in the App Store.
When you get it, use code Lex Podcast.
Cash App lets you send money to friends, buy Bitcoin,
and invest in the stock market with as little as $1.
Since Cash App allows you to buy Bitcoin,
let me mention that cryptocurrency
in the context of the history of money is fascinating.
I recommend The Ascent of Money as a great book on this history.
Debits and credits on ledgers started around 30,000 years ago.
The US dollar was created over 200 years ago,
and the first decentralized cryptocurrency
released just over 10 years ago.
So given that history,
cryptocurrency is still very much
in its early days of development,
but it's still aiming to and just might redefine
the nature of money.
So again, if you get Cash App from the App Store
or Google Play and use the code Lex Podcast,
you get $10, and Cash App will also donate $10 to FIRST,
an organization that is helping to advance robotics
and STEM education for young people around the world.
This show is sponsored by Masterclass.
Sign up at masterclass.com slash Lex to get a discount
and to support this podcast.
When I first heard about Masterclass,
I thought it was too good to be true.
For $180 a year, you get an all-access pass
to watch courses from, to list some of my favorites,
Chris Hadfield on Space Exploration,
Neil deGrasse Tyson on Scientific Thinking and Communication,
Will Wright, creator of SimCity and The Sims, on Game Design.
I promise I'll start streaming games at some point soon.
Carlos Santana on Guitar,
Garry Kasparov on Chess, Daniel Negreanu on Poker,
and many more.
Chris Hadfield explaining how rockets work
and the experience of being launched into space alone
is worth the money.
By the way, you can watch it on basically any device.
Once again, sign up at masterclass.com slash Lex
to get a discount and to support this podcast.
And now here's my conversation with Peter Singer.
When did you first become conscious of the fact
that there is much suffering in the world?
I think I was conscious of the fact
that there's a lot of suffering in the world
pretty much as soon as I was able to understand
anything about my family and its background
because I lost three of my four grandparents
in the Holocaust.
And obviously I knew why I only had one grandparent
and she herself had been in the camps and survived.
So I think I knew a lot about that pretty early.
My entire family comes from the Soviet Union.
I was born in the Soviet Union.
Sort of World War II has deep roots in the culture
and the suffering that the war brought, the millions
of people who died, is in the music,
is in the literature, is in the culture.
What do you think was the impact
of the war broadly on our society?
The war had many impacts.
I think one of them, a beneficial impact
is that it showed what racism
and authoritarian government can do.
And at least as far as the West was concerned,
I think that meant that I grew up in an era
in which there wasn't the kind of overt racism
and anti-Semitism that had existed for my parents in Europe.
I was growing up in Australia.
And certainly that was clearly seen
as something completely unacceptable.
There was also a fear of a further outbreak of war,
which this time we expected would be nuclear
because of the way the Second World War had ended.
So there was this overshadowing of my childhood
about the possibility that I would not live to grow up
and be an adult because of a catastrophic nuclear war.
The film On the Beach was made, in which the city
I was living in, Melbourne, was the last place on Earth
to have living human beings because of the nuclear cloud
that was spreading from the North.
So that certainly gave us a bit of that sense.
There were clearly many other legacies
that we got from the war as well,
and the whole setup of the world
and the Cold War that followed.
All of that has its roots in the Second World War.
You know, there is much beauty that comes from war.
Sort of, I had a conversation with Eric Weinstein,
he said, everything is great about war
except all the death and suffering.
Do you think there's something positive
that came from the war,
the mirror that it put to our society,
sort of the ripple effects on it, ethically speaking,
do you think there are positive aspects to war?
I find it hard to see positive aspects in war
and some of the things that other people think of
as positive and beautiful may be questionable.
So there's a certain kind of patriotism.
People say, you know, during wartime, we all pull together,
we all work together against the common enemy.
And that's true, an outside enemy does unite a country
and in general, it's good for countries to be united
and have common purposes.
But it also engenders a kind of a nationalism
and a patriotism that can't be questioned
and that I'm more skeptical about.
What about the brotherhood that people talk about
from soldiers, the sort of counterintuitive,
sad idea that the closest that people feel to each other
is in those moments of suffering,
of being at the sort of the edge
of seeing your comrades dying in your arms.
That somehow brings people extremely closely together.
Suffering brings people closer together.
How do you make sense of that?
It may bring people close together,
but there are other ways of bonding
and being close to people, I think,
without the suffering and death that war entails.
Perhaps you could see, you could already hear,
the romanticized Russian in me.
We tend to romanticize suffering just a little bit
in our literature and culture and so on.
Could you take a step back and I apologize
if it's a ridiculous question, but what is suffering?
If you would try to define what suffering is,
how would you go about it?
Suffering is a conscious state.
There can be no suffering for a being
who is completely unconscious.
And it's the state of mind.
And it's distinguished from other conscious states
in terms of being one that, considered just in itself,
we would rather be without.
It's a conscious state that we want to stop
if we're experiencing it, or want to avoid having again
if we've experienced it in the past.
And that's, as I emphasize, for its own sake.
Because, of course, people will say,
well, suffering strengthens the spirit.
It has good consequences.
And sometimes it does have those consequences.
And of course, sometimes we might undergo suffering.
We set ourselves a challenge to run a marathon
or climb a mountain or even just to go to the dentist
so that the toothache doesn't get worse
even though we know the dentist is going to hurt us
to some extent.
So I'm not saying that we never choose suffering,
but I am saying that other things being equal,
we would rather not be in that state of consciousness.
Is the ultimate goal, sort of... you have the new 10-year anniversary
release of The Life You Can Save,
a really influential book.
We'll talk about it a bunch of times
throughout this conversation.
But do you think it's possible to eradicate suffering?
Or is that the goal?
Or do we want to achieve a kind of minimum threshold
of suffering and then keeping a little drop of poison
to keep things interesting in the world?
In practice, I don't think we ever will eliminate suffering.
So I think that little drop of poison, as you put it,
or if you like, the contrasting dash of an unpleasant color,
perhaps something like that in an otherwise harmonious
and beautiful composition, that is going to always be there.
If you ask me whether, in theory,
if we could get rid of it, we should.
I think the answer depends on whether, in fact, we would be better off,
or whether, by eliminating the suffering,
we would also eliminate some of the highs, the positive highs.
And if that's so, then we might be prepared to say
it's worth having a minimum of suffering
in order to have the best possible experiences as well.
Is there a relative aspect to suffering?
When you talk about eradicating poverty in the world,
is it that the more you succeed,
the more the bar of what defines poverty rises?
Or is there, at the basic human ethical level,
a bar that's absolute, that once you get above it,
then we can morally converge to feeling
like we have eradicated poverty?
I think it's both.
And I think this is true for poverty as well as suffering.
There's an objective level of suffering or of poverty
where we're talking about objective indicators,
like you're constantly hungry, you can't get enough food,
you're constantly cold, you can't get warm,
you have some physical pains that you're never rid of.
I think those things are objective.
But it may also be true that if you do get rid of that
and you get to the stage where all of those basic needs
have been met, there may still be then new forms of suffering
that develop.
And perhaps that's what we're seeing in the affluent societies
we have, that people get bored, for example.
They don't need to spend so many hours a day earning money
to get enough to eat and shelter.
So now they're bored, they lack a sense of purpose.
That can happen.
And that then is a kind of a relative suffering
that is distinct from the objective forms of suffering.
But in your focus on eradicating suffering,
you don't think about the kind of interesting
challenges and suffering that emerge in affluent societies?
That's just not in your ethical, philosophical brain?
Is that of interest at all?
It would be of interest to me if we had eliminated
all of the objective forms of suffering,
which I think are generally more severe
and also perhaps easier at this stage anyway to know
how to eliminate.
So yes, in some future state,
when we've eliminated those objective forms of suffering,
I would be interested in trying to eliminate
the relative forms as well.
But that's not a practical need for me at the moment.
Sorry to linger on it, because you kind of answered it,
but is elimination the goal for the affluent society?
So do you see suffering as a creative force?
Suffering can be a creative force.
I think I'm repeating what I said about the highs
and whether we need some of the lows to experience the highs.
So it may be that suffering makes us more creative
and we regard that as worthwhile.
Maybe that brings some of those highs with it
that we would not have had if we'd had no suffering.
I don't really know.
Many people have suggested that,
and I certainly have no basis for denying it.
And if it's true, then I would not want
to eliminate suffering completely.
But the focus is on the absolute not to be cold,
not to be hungry.
Yes, that's at the present stage
of where the world's population is.
That's the focus.
Talking about human nature for a second,
do you think people are inherently good
or do we all have good and evil in us
that basically everyone is capable of evil
based on the environment?
Certainly, most of us have potential for both good and evil.
I'm not prepared to say that everyone is capable of evil.
Maybe some people who even in the worst of circumstances
would not be capable of it.
But most of us are very susceptible
to environmental influences.
So when we look at things that we were talking about previously,
let's say, what the Nazis did during the Holocaust,
I think it's quite difficult to say
I know that I would not have done those things
even if I were in the same circumstances
as those who did them.
Even if, let's say, I had grown up under the Nazi regime
and had been indoctrinated with racist ideas,
had also had the idea that I must obey orders,
follow the commands of the Fuhrer.
Plus, of course, perhaps the threat
that if I didn't do certain things,
I might get sent to the Russian front,
and that would be a pretty grim fate.
I think it's really hard for anybody to say,
nevertheless, I know I would not have killed those Jews
or whatever else it was that they did.
Well, what's your intuition?
How many people would be able to say that?
Truly be able to say it?
I think very few, less than 10%.
To me, it seems a very interesting and powerful thing
to meditate on.
So, I've read a lot about the war, about World War II,
and I can't escape the thought
that I would have not been one of the 10%.
Right.
I have to say, I simply don't know.
I would like to hope that I would have been one of the 10%,
but I don't really have any basis for claiming
that I would have been different from the majority.
Is it a worthwhile thing to contemplate?
It would be interesting if we could find a way of really finding
these answers.
There obviously is quite a bit of research on people during
the Holocaust, on how ordinary Germans got led to do terrible
things, and there are also studies of the resistance.
Some heroic people in the White Rose group, for example,
who resisted even though they knew they were likely to die
for it.
But I don't know whether these studies really can answer
your larger question of how many people would have been
capable of doing that.
Well, the reason I think it's interesting is, in the world
as you described, there are things that you'd like to do
that are good, that are objectively good.
It's useful to think about whether I'm not willing to do
something, or not willing to acknowledge something as good
and the right thing to do, because I'm simply scared of
damaging my life in some kind of way.
And that kind of thought exercise is helpful for understanding
what the right thing to do is, given my current skill set
and capacity.
There are things that are convenient, and I wonder if
there are things that are highly inconvenient, where I would
have to experience derision or hatred or death or all those
kinds of things, but it's truly the right thing to do.
And that kind of balance, I feel like in America,
is difficult to think about in the current times; it seems
easier to put yourself back in history, where you can sort of
objectively contemplate how willing you are to do
the right thing when the cost is high.
True, but I think we do face those challenges today and I
think we can still ask ourselves those questions.
So one stand that I took more than 40 years ago now was to
stop eating meat and become a vegetarian at a time when you
hardly met anybody who was a vegetarian or if you did they
might have been a Hindu or they might have had some weird
theories about meat and health.
And I know thinking about making that decision, I was
convinced that it was the right thing to do, but I still did
have to think, are all my friends going to think that
I'm a crank because I'm now refusing to eat meat?
So I'm not saying there were any terrible sanctions,
obviously, but I thought about that and I guess I decided,
well, I still think this is the right thing to do and I'll
put up with that if it happens.
And one or two friends were clearly uncomfortable with that
decision, but that was pretty minor compared to the
historical examples that we've been talking about.
But there are other issues we have around us too, like global
poverty, and what we ought to be doing about that is another
question where people, I think, have the opportunity to
take a stand on what's the right thing to do now.
Climate change would be a third question where again,
people are taking a stand.
I can look at Greta Thunberg there and say, well, I think it
must have taken a lot of courage for a schoolgirl to say,
I'm going to go on strike about climate change and see what happened.
Yeah, especially in this divisive world, she gets
exceptionally huge amounts of both support and hatred.
That's right.
It's a very difficult environment for a teenager to operate in.
In your book, Ethics in the Real World, amazing book,
people should check it out.
Very easy read.
82 brief essays on things that matter.
One of the essays asks, should robots have rights?
You've written about this, so let me ask, should robots have rights?
If we ever develop robots capable of consciousness,
capable of having their own internal perspective on what's happening to them
so that their lives can go well or badly for them,
then robots should have rights.
Until that happens, they shouldn't.
So is consciousness essentially a prerequisite to suffering?
Put another way, is everything that possesses consciousness capable of suffering?
And if so, what is consciousness?
I certainly think that consciousness is a prerequisite for suffering.
You can't suffer if you're not conscious.
But is it true that every being that is conscious will suffer
or has to be capable of suffering?
I suppose you could imagine a kind of consciousness,
especially if we can construct it artificially,
that's capable of experiencing pleasure,
but just automatically cuts out the consciousness when they're suffering.
Sort of like an instant anesthesia as soon as something is going to cause suffering.
So that's possible, but doesn't exist as far as we know on this planet yet.
You asked, what is consciousness?
Philosophers often talk about it as there being a subject of experiences.
So you and I and everybody listening to this is a subject of experience.
There is a conscious subject who is taking things in,
responding to it in various ways, feeling good about it, feeling bad about it.
And that's different from the kinds of artificial intelligence we have now.
I take out my phone, I ask Google directions to where I'm going.
Google gives me the directions and I choose to take a different way.
You know, Google doesn't care.
It's not like I'm offending Google or anything like that.
There is no subject of experiences there.
And I think that's the indication that the Google AI we have now is not conscious,
or at least that that level of AI is not conscious.
And that's the way to think about it.
Now, it may be difficult to tell, of course, whether a certain AI is or isn't conscious.
It may mimic consciousness and we can't tell if it's only mimicking it or if it's the real thing.
But that's what we're looking for.
Is there a subject of experience, a perspective on the world from which
things can go well or badly from that perspective?
So our idea of what suffering looks like comes from just watching ourselves
when we're in pain, sort of...
Or when we're experiencing pleasure.
It's not only...
Pleasure and pain.
Yes.
So, and you could push back on this, but I would say that's how we kind of build an intuition about animals: we infer from the similarities between humans and animals that they're suffering or not, that they're conscious or not, based on certain things.
So what if robots... You mentioned Google Maps, and I've done this experiment. I work in robotics, and just for myself I have several Roomba robots, and I play with different speech interaction, voice-based interaction.
And if the Roomba, or the robot, or Google Maps shows any signs of pain, like screaming or moaning, or being displeased by something you've done, in my mind I can't help but immediately upgrade it.
And even when I myself programmed it in, just having another entity that's, for the moment, disjoint from me, showing signs of pain, makes me feel like it is conscious.
I immediately realize that it's not, obviously, but that feeling is there.
So I guess, what do you think about a world where Google Maps and Roombas are pretending to be conscious, and we descendants of apes are not smart enough to realize it, or whatever, and they appear to be conscious, and so you then have to give them rights?
The reason I'm asking is that that kind of capability may be closer than we realize.
Yes, that kind of capability may be closer, but I don't think it follows that we have to give them rights. I suppose the argument for saying that, in those circumstances, we should give them rights, is that if we don't, we'll harden ourselves against other beings who are not robots and who really do suffer. That's a possibility: if we get used to looking at a being suffering and saying, yeah, we don't have to do anything about that, that being doesn't have any rights, maybe we'll feel the same about animals, for instance.
And interestingly, among philosophers and thinkers who denied that we have any direct duties to animals, and this includes people like Thomas Aquinas and Immanuel Kant, they did say, yes, but still it's better not to be cruel to them; not because of the suffering we're inflicting on the animals, but because if we are, we may develop a cruel disposition, and this will be bad for humans, because we would be more likely to be cruel to other humans, and that would be wrong.
But you don't accept that kind of...
I don't accept that as the basis of the argument for why we shouldn't be cruel to animals. I think the basis of the argument for why we shouldn't be cruel to animals is just that we're inflicting suffering on them, and the suffering is a bad thing. But possibly I might accept some sort of parallel of that argument as a reason why you shouldn't be cruel to these robots that mimic the symptoms of pain, if it's going to be harder for us to distinguish.
I would venture to say, I'd like to disagree with you, and with most people, I think. At the risk of sounding crazy, I would like to say that if that Roomba is dedicated to faking the consciousness and the suffering, I think it will be impossible for us to tell the difference. So I would like to apply the same argument as with animals to robots: that they deserve rights, in that sense.
Now, we might outlaw the addition of those kinds of features into Roombas, but once you add them, I'm quite surprised by the upgrade in consciousness that the display of suffering creates. It's a totally open world. But I'd like to just observe that the difference between animals and other humans is that, in the robot case, we've added it in ourselves, and therefore we can say something about how real it is. But I would like to say that the display of it is what makes it real. And I'm not a philosopher, I'm not making that argument, but I'd at least like to add that as a possibility, and that I've been surprised by it, is all I'm trying, poorly, to articulate, I suppose.
So there is a philosophical view that has been held about humans which is rather like what you're talking about, and that's behaviorism. Behaviorism was employed both in psychology, people like B.F. Skinner, who was a famous behaviorist, where it was more a kind of "what is it that makes this a science?" You need to have behavior, because that's what you can observe; you can't observe consciousness.
But in philosophy the view was defended by people like Gilbert Ryle, who was a professor of philosophy at Oxford and wrote a book called The Concept of Mind, in which, in this phase of linguistic philosophy, this is in the 1940s, he said, well, the meaning of a term is its use, and we use terms like "so-and-so is in pain" when we see somebody writhing or screaming or trying to escape some stimulus. That's the meaning of the term, so that's what it is to be in pain; you point to the behavior.
And Norman Malcolm, who was another philosopher in that school, from Cornell, had the view that, you know, what is it to dream? After all, we can't see other people's dreams. Well, when people wake up and say, I've just had a dream of, you know, here I was, undressed, walking down the main street, or whatever it is you've dreamt, that's what it is to have a dream; it's basically to wake up and recall something.
So you could apply this to what you're talking about and say, what it is to be in pain is to exhibit these symptoms of pain behavior, and therefore these robots are in pain; that's what the word means. But nowadays not many people think that Ryle's kind of philosophical behaviorism is really very plausible, so I think they would say the same about your view.
So, yes. I just spoke with Noam Chomsky, who basically was part of dismantling the behaviorist movement, and I'm with that a hundred percent for studying human behavior. But I am one of the few people in the world who has made Roombas scream in pain, and I just don't know what to do with that empirical evidence, because it's hard, sort of, philosophically...
I agree, but the only reason I philosophically agree in that case is because I was the programmer. If somebody else was the programmer, I'm not sure I would be able to interpret that well. So I think it's a new world. I was just curious what your thoughts are. For now, you feel that the display of what we can intellectually say is a fake display of suffering is not suffering?
That's right, that would be my view. But that's consistent, of course, with the idea that it's part of our nature to respond to this display if it's reasonably authentically done, and therefore it's understandable that people would feel this. And maybe, as I said, it's even a good thing that they do feel it, and you wouldn't want to harden yourself against it, because then you might harden yourself against beings who are really suffering.
But there's this line, you know.
So you said, once an artificial general intelligence system, a human-level intelligence system, becomes conscious... I guess, if I could just linger on it: now, I've written really dumb programs that just say things that I told them to say, but how do you know when a system like Alexa, which is sufficiently complex that you can't introspect how it works, starts giving you signs of consciousness through natural language? That there's a feeling, there's another entity there that's self-aware, that has a fear of death, a mortality, that has an awareness of itself that we kind of associate with other living creatures. I guess I'm sort of trying to travel this slippery slope, from the very naive thing where I started, into something that's sufficiently a black box that it's starting to feel like it's conscious. Where's that threshold where you would start getting uncomfortable with the idea of robot suffering, do you think?
I don't know enough about the programming that would go into this, really, to answer this question. But I presume that somebody who does know more about this could look at the program and see whether we can explain the behaviors in a parsimonious way that doesn't require us to suggest that some sort of consciousness has emerged. Or, alternatively, whether you're in a situation where you say, I don't know how this is happening; the program does generate a kind of artificial general intelligence which is autonomous, you know, starts to do things itself and is autonomous of the basic programming that set it up. And so it's quite possible that actually we have achieved consciousness in a system of artificial intelligence.
Sort of, the approach that I work on, that most of the community is really excited about now, is learning methods, so machine learning. And the learning methods are, unfortunately, not capable of revealing how they work, which is why somebody like Noam Chomsky criticizes them: you've created powerful systems that are able to do certain things without understanding the theory, the physics, the science of how they work. And so it's possible, if those are the kinds of methods that succeed, that we won't be able to know exactly, sort of reduce it, find, whether this thing is conscious or not, this thing is intelligent or not. It's simply that when we talk to it, it displays wit and humor and cleverness and emotion and fear, and we won't be able to say where, in the billions of nodes, neurons, in this artificial neural network, the fear is coming from. So in that case, that's a really interesting place where we do now start to return to behaviorism and say, yeah...
That is an interesting issue. I would say that if we have serious doubts and think it might be conscious, then we ought to try to give it the benefit of the doubt, just as I would say with animals. I think we can be highly confident that vertebrates are conscious, and some invertebrates, like the octopus, but with insects it's much harder to be confident of that. I think we should give them the benefit of the doubt where we can, which means, you know, I think it would be wrong to torture an insect, but this doesn't necessarily mean it's wrong to slap a mosquito that's about to bite you and stop you getting to sleep. So I think you try to achieve some balance in these circumstances of uncertainty.
If it's okay with you, if we can go back just briefly: 44 years ago, like you
mentioned, 40-plus years ago, you wrote Animal Liberation, the classic book that started, that launched, that was a foundation of, the animal liberation movement. Can you summarize the key set of ideas that underpin that book?
Certainly. The key idea that underlies that book is the concept of speciesism, which, I did not invent that term, I took it from a man called Richard Ryder, who was in Oxford when I was, and I saw a pamphlet that he'd written about experiments on chimpanzees that used that term. But I think I contributed to making it philosophically more precise and to getting it to a broader audience.
And the idea is that we have a bias or a prejudice against taking seriously the interests of beings who are not members of our species, just as, in the past, Europeans, for example, had a bias against taking seriously the interests of Africans, racism, and men have had a bias against taking seriously the interests of women, sexism.
So I think something analogous, not completely identical but something analogous, goes on, and has gone on for a very long time, with the way humans see themselves vis-a-vis animals. We see ourselves as more important; we see animals as existing to serve our needs in various ways. And you can find this very explicit in earlier philosophers, from Aristotle through to Kant and others: either we don't need to take their interests into account at all, or we can discount them because they're not humans. They count a little bit, but they don't count nearly as much as humans do.
In my book, I argue that that attitude is responsible for a lot of the things that we do to animals that are wrong: confining them indoors in very crowded, cramped conditions, in factory farms, to produce meat or eggs or milk more cheaply; using them in some research that's by no means essential for our survival or well-being; and, you know, some of the sports and things that we do to animals.
So I think that's unjustified, because I think the significance of pain and suffering does not depend on the species of the being who is in pain or suffering, any more than it depends on the race or sex of the being who is in pain or suffering. And I think we ought to rethink our treatment of animals along the lines of saying: if the pain is just as great in an animal, then it's just as bad that it happens as if it were a human.
Maybe if I could ask, and I apologize, hopefully it's not a ridiculous
question, but as far as we know, we cannot communicate with animals through natural language, though we would be able to communicate with robots. So, returning to sort of a small parallel between animals and, perhaps, the future of AI: if we do create an AGI system, or as we approach creating that AGI system, what kind of questions would you ask her, to try to intuit whether there is consciousness, or, more importantly, whether there's a capacity to suffer?
I might ask the AGI what she was feeling, well, whether she has feelings, and if she says yes, to describe those feelings, to describe what they were like, to see what the phenomenal account of consciousness is like. That's one question.
I might also try to find out if the AGI has a sense of itself. So, for example, you know, we often ask people: suppose you were in a car accident and your brain were transplanted into someone else's body, do you think you would survive, or would it be the person whose body was still surviving, your body having been destroyed? And most people say, I think I would survive, you know, if my brain was transplanted along with my memories and so on. So we could ask the AGI those kinds of questions: if they were transferred to a different piece of hardware, would they survive? What would survive?
Got it. Sort of on that line, another perhaps absurd question, but do you think having a body is necessary for consciousness? So, do you think digital beings can suffer?
Presumably digital beings need to be running on some kind of hardware right?
Yes, it ultimately boils down to that, but this is exactly what you just said: moving the brain, right? So you could move it to a different kind of hardware.
You know, they could say, look, your hardware is getting worn out, we're going to transfer you to a fresh piece of hardware, so we're going to shut you down for a time, but don't worry, you'll be running very soon on a nice fresh piece of hardware. And you could imagine this conscious AGI saying, that's fine, I don't mind having a little rest, just make sure you don't lose me.
Yeah. I mean, that's an interesting thought, that even with us humans, the suffering is in the software. We right now don't know how to repair the hardware, but we're getting better and better at it, and, I mean, some people dream about one day being able to transfer certain aspects of the software to another piece of hardware.
What do you think, just on that topic? There's been a lot of exciting innovation in brain-computer interfaces. I don't know if you're familiar with companies like Neuralink, with Elon Musk: communicating both ways, from a computer being able to activate neurons, and being able to read spikes from neurons, with the dream of being able to increase the bandwidth at which your brain can, like, look up articles on Wikipedia, kind of thing, to expand the knowledge capacity of the brain. Do you think that notion is interesting to you, as the expansion of the human mind?
Yes, that's very interesting. I'd love to be able to have that increased bandwidth, and, you know, I want better access to my memory, I have to say, as I get older. I talk to my wife about things that we did 20 years ago or something; her memory is often better about particular events. Where were we? Who was at that event? What did he or she wear, even? She may know, and I have not the faintest idea about this, but perhaps it's somewhere in my memory, and if I had this extended memory, I could search that particular year and rerun those things. I think that would be great.
In some sense we already have that, by storing so much of our data online, like pictures of different...
Yes. Well, Gmail is fantastic for that, because people email me as if they know me well, and I haven't got a clue who they are, but then I search for their name: ah, they emailed me in 2007, and I know who they are now.
Yeah, so we're already taking the first steps.
So, on the flip side of AI, people like Stuart Russell
and others focus on the control problem, value alignment in AI, which is the problem of making sure we build systems that align with our own values, our ethics. Do you think, sort of at a high level, how do we go about building systems, do you think it's possible, that align with our values, align with our human ethics, or living-being ethics?
Presumably it's possible to do that. I know that a lot of people think that there's a real danger that we won't, that we'll more or less accidentally lose control of AGI.
Do you have that fear yourself, personally?
I'm not quite sure what to think. I talk to philosophers like Nick Bostrom and Toby Ord, and they think that this is a real problem we need to worry about. Then I talk to people who work for Microsoft or DeepMind or somebody, and they say, no, we're not really that close to producing AGI, you know, superintelligence.
So if you look at Nick Bostrom's sort of argument, it's very hard to defend. I'm, of course, an engineer of AI systems, so I'm more with the DeepMind folks, where it seems that we're really far away. But then the counter-argument is, is there any fundamental reason that we will never achieve it? And if not, then eventually there'll be a dire existential risk, so we should be concerned about it. Do you find that argument at all appealing, in this domain or in your domain, that eventually this will be a problem, so we should be worried about it?
Yes, I think it's a problem. I think that's a valid point. Of course, when you say eventually, that raises the question, how far off is that, and is there something that we can do about it now? Because if we're talking about this being a hundred years in the future, and you consider how rapidly our knowledge of artificial intelligence has grown in the last 10 or 20 years, it seems unlikely that there's anything much we could do now that would influence whether this is going to happen 100 years in the future. People 80 years in the future will be in a much better position to say, this is what we need to do to prevent this happening, than we are now. So to some extent I find that reassuring, but I'm all in favor of some people doing research into this, to see if indeed it is that far off, or if we are in a position to do something about it sooner.
I'm very much of the view that extinction is a terrible thing, and therefore, even if the risk of extinction is very small, if we can reduce that risk, that's something that we ought to do. My disagreement with some of these people who talk about long-term risks, extinction risk, is only about how much priority that should have as compared to present questions.
No, so if you look at the math of it from a utilitarian perspective: if it's existential risk, so everybody dies, it feels like an infinity in the math equation, and that makes the math with the priorities difficult to do, if we don't know the time scale, and you can legitimately argue that there's a non-zero probability that it'll happen tomorrow. How do you deal with these kinds of existential risks, like from nuclear war, from nuclear weapons, from biological weapons? I'm not sure global warming falls into that category, because global warming is a lot more gradual, and people say it's not an existential risk, because there'll always be possibilities of some humans existing, farming Antarctica or northern Siberia or something of that sort.
Yeah. But you don't find the sort of complete existential risks a fundamental, like an overriding, part of the equations of ethics?
No. You know, certainly if you treat it as an infinity, then it plays havoc with any calculations. But arguably we shouldn't. One of the ethical assumptions that goes into this is that the loss of future lives, that is, of merely possible lives, of beings who may never exist at all, is in some way comparable to the sufferings or deaths of people who do exist at some point. And that's not clear to me. I think there's a case for saying that, but I also think there's a case for taking the other view. So that has some impact on it.
Of course, you might say, ah yes, but still, if there's some uncertainty about this, and the costs of extinction are infinite, then it's still going to overwhelm everything else. But I suppose I'm not convinced of that. I'm not convinced that it's really infinite here, and even Nick Bostrom, in his discussion of this, doesn't claim that there'll be an infinite number of lives lived. What is it, 10 to the 56th or something? It's a vast number that I think he calculates, assuming we can upload consciousness onto these, you know, digital forms, and therefore there'll be much more energy efficiency, and he calculates the amount of energy in the universe, or something like that. So the numbers are vast but non-infinite, which gives you some prospect, maybe, of resisting some of the argument.
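[Editor's note: the point about infinity can be stated compactly; this is the standard observation, added here for clarity. If extinction is assigned infinitely negative utility, then for any probability p > 0,

\[
\mathbb{E}[U] \;=\; p \cdot (-\infty) \;=\; -\infty,
\]

so every act carrying any nonzero extinction risk comes out equally, infinitely bad, and no trade-off can be computed. With a vast but finite stake, such as the 10^56 lives just mentioned, the expected value \( -p \cdot 10^{56} \) stays finite and can be weighed against other finite considerations.]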
The beautiful thing with Nick's arguments is he quickly jumps from the individual scale to the universal scale, which is just awe-inspiring to think about, when you think about the entirety of the span of time of the universe. It's both interesting from a computer science perspective, an AI perspective, and from an ethical perspective: the idea of utilitarianism. So let me ask, what is utilitarianism?
Utilitarianism is the ethical view that the right thing to do is the act that has the greatest expected utility, where what that means is: it's the act that will produce the best consequences, discounted by the odds that you won't be able to produce those consequences, that something will go wrong. But in the simple case, let's assume we have certainty about what the consequences of our actions will be. Then the right action is the action that will produce the best consequences.
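[Editor's note: Singer's definition is the standard expected-utility criterion from decision theory; the notation below is added for clarity and is not from the conversation.]

\[
\mathrm{EU}(a) \;=\; \sum_{o} P(o \mid a)\, U(o), \qquad a^{*} \;=\; \arg\max_{a} \mathrm{EU}(a)
\]

[Here a is an act, o ranges over its possible outcomes, U(o) is the value of an outcome (for a classical utilitarian, happiness minus suffering), and P(o | a) is the probability factor that does the "discounted by the odds" work Singer mentions.]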
Is that always... And by the way, there's a bunch of nuanced stuff that you talked about with Sam Harris on his podcast; people should go listen to it, it's great, it's like two hours of moral philosophy discussion. But is that an easy calculation?
No, it's a difficult calculation. And actually, there's one thing that I need to add, and that is: utilitarians, certainly the classical utilitarians, think that by best consequences we're talking about happiness and the absence of pain and suffering. There are other consequentialists, who are not really utilitarians, who say there are different things that could be good consequences: justice, freedom, you know, human dignity, knowledge. They all count as good consequences too, and that makes the calculations even more difficult, because then you need to know how to balance these things off. If you are just talking about well-being, using that term to express happiness and the absence of suffering, I think the calculation becomes more manageable, in a philosophical sense. Still, in practice, we don't know how to do it. We don't know how to measure quantities of happiness and misery; we don't know how to calculate the probabilities that different actions will produce this or that. So at best we can use it as a rough guide to different actions, and one where we have to focus on the short-term consequences, because we just can't really predict all of the longer-term ramifications.
So what about the extreme suffering of very small groups? Utilitarianism is focused on the overall aggregate, right? Would you say you yourself are a utilitarian?
Yes, I'm a utilitarian.
So what do you make of the difficult, ethical, maybe poetic suffering of very few individuals?
I think it's possible that that gets overridden by benefits to very large numbers of individuals. I think that can be the right answer. But before we conclude that it is the right answer, we have to know how severe the suffering is and how that compares with the benefits. So I tend to think that extreme suffering is worse than, or is further, if you like, below the neutral level than extreme happiness or bliss is above it. So when I think about the worst experiences possible and the best experiences possible, I don't think of them as equidistant from neutral. So, like, on a scale that goes from minus 100, through zero as a neutral level, to plus 100: I know that I would not exchange an hour of my most pleasurable experiences for an hour of my most painful experiences. And I wouldn't have an hour of my most painful experiences even for two hours or 10 hours of my most pleasurable experiences. Did I say that correctly?
Yeah. Maybe 20 hours, then?
Well, what's the exchange rate?
So that's the question: what is the exchange rate? But I think it can be quite high. So that's why you shouldn't just assume that, you know, it's okay to make one person suffer extremely in order to make two people much better off. It might be a much larger number. But at some point I do think you should aggregate, and the result will be, even though it violates our intuitions of justice and fairness, or whatever it might be, giving priority to those who are worse off, at some point I still think that will be the right thing to do.
Yeah, some complicated non-linear function.
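[Editor's note: the asymmetric scale and the "exchange rate" can be sketched in a few lines of code. Everything below, the constant rate of 20 and the sample numbers, is an illustrative assumption, not a figure Singer endorses; as the conversation notes, a more faithful model would make the weight a complicated non-linear function of intensity.]

    # Toy model of the "exchange rate" discussed above: how many hours of the
    # best experience would compensate for one hour of the worst?
    # The rate and sample values are illustrative assumptions only.

    EXCHANGE_RATE = 20.0  # hypothetical: 1 hour at -100 outweighs 20 hours at +100

    def aggregate_welfare(experiences: list[tuple[float, float]]) -> float:
        """Sum (intensity, hours) pairs on a -100..+100 scale, weighting
        negative intensities by the exchange rate, so extreme suffering
        counts for more than equally intense bliss."""
        total = 0.0
        for intensity, hours in experiences:
            weight = EXCHANGE_RATE if intensity < 0 else 1.0
            total += weight * intensity * hours
        return total

    # One hour of agony against ten hours of bliss: still negative overall.
    print(aggregate_welfare([(-100.0, 1.0), (100.0, 10.0)]))  # -1000.0
    # Twenty-five hours of bliss tips the balance.
    print(aggregate_welfare([(-100.0, 1.0), (100.0, 25.0)]))  # 500.0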
Can I ask a sort of out-there question? The more we put our data out there, the more we're able to measure a bunch of factors of each of our individual human lives, and I could foresee the ability to estimate the well-being of whatever we publicly, collectively agree is a good objective function, from a utilitarian perspective. Do you think it'll be possible, and is it a good idea, to push that kind of analysis to then make public decisions, perhaps with the help of AI? That, you know, here's a tax rate at which well-being will be optimized.
Yeah, that would be great, if we really knew
that, if we really could calculate that.
No, but do you think it's possible to converge towards an agreement amongst humans, towards an objective function, or is it just a hopeless pursuit?
I don't think it's hopeless. I think it would be difficult to converge towards agreement, at least at present, because some people would say, you know, I've got different views about justice, and I think you ought to give priority to those who are worse off, even though I acknowledge that the gains that the worse off are making are less than the gains that those who are, sort of, medium badly off could be making. So we still have all of these intuitions that we argue about. So I don't think we would get agreement, but the fact that we wouldn't get agreement doesn't show that there isn't a right answer there.
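[Editor's note: a small sketch of why convergence is hard even with an agreed well-being measure. The income model, deadweight-loss factor, and well-being function below are invented for illustration; the two social-welfare functions, plain summation versus extra weight on the worse off, correspond to the utilitarian and prioritarian views Singer contrasts.]

    # Even given an agreed well-being measure, a utilitarian (sum of
    # well-being) and a prioritarian (concave weighting, favoring the worse
    # off) can prefer different tax rates. All numbers are made up.

    def wellbeing(income: float) -> float:
        return income ** 0.5  # diminishing returns of income (assumed)

    def outcomes(rate: float) -> list[float]:
        incomes = [20.0, 100.0]           # a low earner and a high earner
        shrink = 1.0 - 0.4 * rate         # crude deadweight loss (assumed)
        pot = sum(i * rate for i in incomes) * shrink
        return [i * (1.0 - rate) + pot / 2.0 for i in incomes]

    def utilitarian(ws: list[float]) -> float:
        return sum(ws)

    def prioritarian(ws: list[float]) -> float:
        return sum(w ** 0.5 for w in ws)  # concave: priority to the worse off

    rates = [r / 100.0 for r in range(0, 101, 5)]
    for name, swf in [("utilitarian", utilitarian), ("prioritarian", prioritarian)]:
        best = max(rates, key=lambda r: swf([wellbeing(x) for x in outcomes(r)]))
        print(name, "prefers a tax rate of", best)
    # With these assumptions the two optima differ: 0.25 versus 0.3.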
Do you think... who gets to say what is right and wrong? Do you think there's a place for ethics oversight from the government? I'm thinking, in the case of AI, of overseeing what kind of decisions AI can make or not. But also, if you look at animal rights, or rather not rights, or perhaps rights, the ideas you've explored in Animal Liberation: who gets to decide? You eloquently and beautifully write in your book that we shouldn't do this, but are there some harder rules that should be imposed, or is this a collective thing, where we converge towards them as a society and thereby make better and better ethical decisions?
Politically, I'm still a democrat, despite looking at the flaws in democracy
and the way it doesn't always work very well. So I don't see a better option than allowing the public to vote for governments in accordance with their policies, and I hope that they will vote for policies that reduce the suffering of animals and that reduce the suffering of distant humans, whether geographically distant or distant because they're future humans. But I recognize that democracy isn't really well set up to do that, and in a sense you could imagine a wise and benevolent, you know, omnibenevolent leader who would do that better than democracies could. But in the world in which we live, it's difficult to imagine that this leader isn't going to be corrupted by a variety of influences. You know, we've had so many examples of people who've taken power with good intentions and then have ended up being corrupt and favoring themselves. So that's why, as I say, I don't know that we have a better system than democracy to make these decisions.
Well, you also discuss effective altruism, which is a mechanism for going around government, for putting the power in the hands of the people to donate money towards causes, to, you know, remove the middleman and give it directly to the causes that they care about. Maybe this is a good time to ask: 10 years ago
you wrote The Life You Can Save, which is now, I think, available for free online.
That's right. You can download either the ebook or the audiobook free from thelifeyoucansave.org.
And what are the key ideas that you present in the book?
The main thing I want to do in the book is to make people realize that it's not difficult to help people in extreme poverty; that there are highly effective organizations now that are doing this; that they've been independently assessed and verified by research teams that are expert in this area; and that it's a fulfilling thing to do, for at least part of your life. You know, we can't all be saints, but at least one of your goals should be to really make a positive contribution to the world, and to do something to help people who, through no fault of their own, are in very dire circumstances, living a life that is barely, or perhaps not at all, a decent life for a human being to live.
So you describe a minimum ethical standard of giving. What advice would you give to people who want to be effectively altruistic in their life, to live an effective altruism life?
There are many different ways of living as an effective altruist, and if you're at the point where you're thinking about your long-term career, I'd recommend you take a look at a website called 80,000 Hours, 80000hours.org, which looks at ethical career choices. They range from, for example, going to work on Wall Street, so that you can earn a huge amount of money and then donate most of it to effective charities, to going to work for a really good non-profit organization, so that you can directly use your skills and ability and hard work to further a good cause. Or perhaps going into politics, maybe small chances but big payoffs in politics; or going to work in the public service, where, if you're talented, you might rise to a high level where you can influence decisions; or doing research in an area where the payoffs could be great. There are a lot of different opportunities, but too few people are even thinking about those questions. They're just going along in some sort of pre-ordained rut to particular careers. Maybe they think they'll earn a lot of money and have a comfortable life, but they may not find that as fulfilling as actually knowing that they're making a positive difference in the world.
What about in terms of... So that's the long term, 80,000 Hours. In the shorter term: giving part of, well, actually it's part of that, going to work at Wall Street. If you'd like to give a percentage of your income, you talk about that in The Life You Can Save. I was looking through it; it's quite compelling. I mean, I'm just a dumb engineer, so I like that there are simple rules, there's a nice
percentage.
Okay, so I do actually set out suggested levels of giving, because people often ask me about this. A popular answer is, you know, give 10 percent, the traditional tithe that's recommended in Christianity and also in Judaism. But why should it be the same percentage irrespective of your income? Tax scales reflect the idea that the more income you have, the more tax you can pay, and I think the same is true of what you can give. So I do set out a progressive donor scale, which starts at 1 percent for people on modest incomes and rises to 33 and a third percent for people who are really earning a lot. And my idea is that I don't think any of these amounts really imposes real hardship on people, because they are progressive and geared to income. So I think anybody can do this, and can know that they're doing something significant to play their part in reducing the huge gap between people in extreme poverty in the world and people living affluent lives.
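[Editor's note: the conversation doesn't quote the actual bracket boundaries from the book, so the thresholds below are hypothetical placeholders; only the shape, a marginal scale running from 1 percent up to 33 and a third percent, follows Singer's description. The real figures are at thelifeyoucansave.org.]

    # Sketch of a progressive donor scale of the kind described above,
    # applied marginally, like tax brackets. Threshold values are
    # placeholders, NOT the figures from The Life You Can Save.

    BRACKETS = [          # (income threshold, marginal giving rate above it)
        (0.0,         0.01),
        (100_000.0,   0.05),
        (250_000.0,   0.10),
        (500_000.0,   0.20),
        (1_000_000.0, 1.0 / 3.0),
    ]

    def suggested_gift(income: float) -> float:
        """Apply each bracket's rate only to the income within that bracket."""
        gift = 0.0
        bounds = BRACKETS[1:] + [(float("inf"), 0.0)]
        for (lo, rate), (hi, _) in zip(BRACKETS, bounds):
            if income > lo:
                gift += (min(income, hi) - lo) * rate
        return gift

    print(suggested_gift(50_000))     # modest income: a flat 1% -> 500.0
    print(suggested_gift(2_000_000))  # high income: blended ~23% -> 466833.33...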
And aside from it being an ethical life, it's one you'd find more fulfilling, because, like, there's something about our human nature, or some of our human natures, maybe most of our human nature, that enjoys doing the ethical thing.
Yes, so I make both those arguments: that it is an ethical requirement, in the kind of world we live in today, to help people in great need when we can easily do so, but also that it is a rewarding thing. There's good psychological research showing that people who give more tend to be more satisfied with their lives, and I think this has something to do with having a purpose that's larger than yourself, and therefore never being, if you like, never being bored, sitting around going, oh, you know, what will I do next, I've got nothing to do. In a world like this, there are many good things that you can do and enjoy doing. Plus, you're working with other people in the effective altruism movement, who are forming a community of people with similar ideas, and they tend to be interesting, thoughtful and good people as well, and having friends of that sort is another big contribution to having a good life.
So we talked about big things that are beyond ourselves, but we
are also just human and mortal. Do you ponder your own mortality? Are there insights about your philosophy, the ethics, that you gain from pondering your own mortality?
Clearly, you know, as you get into your 70s, you can't help thinking about your own mortality. But I don't know that I have great insights into that from my philosophy. I don't think there's anything after the death of my body, you know, assuming that we won't be able to upload my mind into anything at the time when I die. So I don't think there's any afterlife, or anything to look forward to, in that sense.
Do you fear death? If you look at Ernest Becker, describing the motivating aspects of our ability to be cognizant of our mortality: do you have any of those
elements driving your motivation in life?
I suppose the fact that you have only a limited time to achieve the things that you want to achieve gives you some sort of motivation to get going and achieving them, and if we thought we were immortal, we might say, ah, you know, I can put that off for another decade or two. So there's that about it. But otherwise, you know, no, I'd rather have more time to do more. I'd also like to be able to see how things go that I'm interested in. You know, is climate change going to turn out to be as dire as a lot of scientists say it is going to be? Will we somehow scrape through with less damage than we thought? I'd really like to know the answers to those questions, but I guess I'm not going to.
Well, you said there's
nothing afterwards, so let me ask the even more absurd question: what do you think is the meaning of it all?
I think the meaning of life is the meaning we give to it. I don't think that we were brought into the universe for any kind of larger purpose. But given that we exist, I think we can recognize that some things are objectively bad, extreme suffering is an example, and other things are objectively good, like having a rich, fulfilling, enjoyable, pleasurable life, and we can try to do our part in reducing the bad things and increasing the good things.
So one way to find the meaning is to do a little bit more of the good things, the objectively good things, and a little bit less of the bad things.
Yes, do as much of the good things as you can, and as little of the bad things.
Peter, beautifully put. I don't think there's a better place to end it. Thank you so much for talking today.
Thanks very much. It's been really interesting talking to you.
Thanks for
listening to this conversation with Peter Singer, and thank you to our sponsors, Cash App and Masterclass. Please consider supporting the podcast by downloading Cash App and using the code LEXPODCAST, and by signing up at masterclass.com slash lex. Click the links, buy all the stuff. It's the best way to support this podcast and the journey I'm on, my research and startup.
If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter at Lex Fridman, spelled without the E, just F-R-I-D-M-A-N.
And now, let me leave you with some words from Peter Singer: What one generation finds ridiculous, the next accepts, and the third shudders when it looks back on what the first did.
Thank you for listening, and hope to see you next time.