Is humanity prepared to handle catastrophic threats? My long-read Q&A with Toby Ord

As technology progresses, we find that our advancements offer many protections against some existential risks while creating new ones. So
what are the odds that we are living in the last century of human history? What
sorts of risks do we face from natural disasters, dystopic technology, and
totalitarian world government? On this episode, Toby Ord joins the Political
Economy podcast to discuss the risks we face and what we have to lose (or gain).

Toby is a senior research fellow at the Future of Humanity Institute at Oxford University, where he studies the long-term future of humanity. He is the author of the recently released “The Precipice: Existential Risk and the Future of Humanity.”

What follows is a lightly edited transcript of our conversation, including brief portions that were cut from the original podcast. You can download the episode here, and don’t forget to subscribe to my podcast on iTunes or Stitcher. Tell your friends, leave a review.

Pethokoukis: Astronomers estimate that there are something like six billion earth-like planets in the Milky Way Galaxy, but we’ve never detected any sign of life beyond Earth. A previous guest, Robin Hanson, explains this with the theory of the “Great Filter” — the idea that at least one of the steps to colonizing space must be really unlikely. It’s unclear if humanity has already passed through this filter or not, and your book is about the possibility that we haven’t passed through it yet — that there are all of these ways that our civilization could still collapse or go extinct. So what’s going to get us, and when’s it going to get us?

Ord: Well, I hope that nothing gets us. Ultimately, my
best guess is that there’s about a 50/50 chance that we just make it through
the long term and we last as long as we could last, within whatever constraint
that is — be that until the earth is no longer habitable or until this entire
part of our galaxy or beyond is no longer habitable. So I hope that we will
make it a very long time. But I am very interested in asking: What are the
challenges that could stop us before we get there? And we’re faced with existential
risks.

There are risks that could cause the extinction of
humanity or otherwise destroy our long-term potential, such as an unrecoverable
collapse of civilization. We’ve faced them for a long time. So the Homo sapiens species is about 200,000 years old — 2000 centuries — during which we have been
subjected to all of these risks from asteroids and comets and supernova
explosions and things like that. And we know, therefore, that those natural
risks must be fairly low per century, or we couldn’t have got through 2000
centuries.

But I’m particularly worried about humanity’s exponentially
escalating power, which I think first reached the point where we could
threaten our own survival with the development of nuclear weapons in the 20th
century. The natural risk each century has to be lower than, I think, something
like one in 2000. But the man-made risk, I think, could well be substantially
higher than that. And I think it is. So it’s these anthropogenic risks I’m most
worried about.
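(A rough editorial sketch of the arithmetic behind that bound — an illustration, not Ord’s own derivation: if the natural extinction risk were a constant, independent r per century, the chance of surviving 2,000 centuries would be

(1 − r)^2000 ≈ e^(−2000r),

which is about 37 percent for r = 1/2,000 but only around 2 × 10⁻⁹ for r = 1/100. A per-century natural risk much above one in 2,000 would therefore make our 200,000-year track record very surprising.)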

So, when you look at both kinds of risks — the natural risks, like an asteroid hitting the earth, and the risks that we’re creating, such as climate change or runaway artificial intelligence — you mentioned a variety of these risks… when you put those together, the odds of something terrible happening over the next 100 years are what?

I think about one in six.

That’s high. A very high number to me.

I think so. I mean, that’s why I wanted to give a number
in the book. A lot of people just would say, “Eh, you can’t really put a
precise number on it. So don’t put a number at all.” But I think that if I
said, “There is a grave risk,” or “The risk is all too
great” or something like that, people wouldn’t know whether I was talking
about something like one in 1000 — which is still a one-in-1000 chance of
losing everything that humanity has ever worked towards over 2000 centuries, so
it’s still a grave risk — or whether I’m talking about 90 percent or something.
So I tried to be a bit more clear about that.

But I don’t want to risk over-precision. I say one in six,
but if someone said one in 60 or one in two, I would think that they’re talking
the same language as me and that we’re in the same ballpark.

And it’s really those man-made risks, which didn’t exist before. In the past, you really only had to worry about the natural risks, whether it’s asteroids or super volcanoes or something like that. But certainly… would you say since 1945, that’s when the man-made risks began? And those are the ones which have become ever more dangerous?

That’s right. With some kind of artificial precision, I
date this current period of heightened risk, which I call “the precipice” —
hence the name of the book — to the exact moment of the Trinity Test of the
atomic bomb.

A view of the Able nuclear explosion at Bikini Atoll, July 1, 1946. Courtesy U.S. National Archives/Handout, Via REUTERS.

And the atomic bomb has sort of been looming over humanity
for over a half century, as a possibility for how humanity could go extinct.
That’s how we could lose it all. But that’s not the one that scares you the
most?

That’s right. Over time, people did really work out that
they should be scared.

Even though in the book, you do highlight some pretty
scary close calls involving nuclear weapons.

Yeah. That’s right. Things have gotten a lot closer to
triggering an all-out nuclear war than many people recognize. But an all-out
nuclear war shouldn’t be equated with the end of humanity. I think there’s
something like a 99 percent chance that we could pull through in some form or
another. And we really don’t know that much about exactly how bad nuclear
winter would be. It seems serious enough that the milder scenarios still may
involve billions of people dying. So it’s certainly nothing to take lightly. But
the risk of human extinction from it, while very plausible, is nothing like a
sure thing.

And then I think that when it comes to climate change,
it’s often talked about as if it’s almost surely an existential risk. But it’s
something where, again, while it could do extremely serious damage, it is
difficult to actually come up with scenarios that scientists think are
plausible that would cause the unrecoverable collapse of civilization or the
extinction of humanity, dire though some of the outcomes could be. So I think
that it, again, could be an existential risk, but it’s not what I’m most
worried about. I’m ultimately most worried about engineered pandemics and about
unaligned artificial intelligence — two things that aren’t going to kill us
this year or next year, but over the next few decades they could well come
online as serious possibilities.

All right. So let’s maybe take a minute or two to address
each of those and sort of explain why you think those are the ones which really
scare you and should scare us.

Yeah. So if you look at the track record of major
catastrophes that have befallen humanity, the pandemics are really right up
there. The two biggest events, in terms of the proportion of the world’s
population who were killed, would appear to be the Black Death, where about a
quarter to a half of all people in Europe were killed and around about one in
10 people in the entire world were killed — so this is far beyond the current
pandemic, maybe 100 times worse — and there could be a similar proportion of
the world’s population that was killed in the so-called Columbian Exchange, when peoples from the Americas and the Old World met and exchanged diseases. And the people in the Americas got by far the worst end of that. And
that could be as many as a tenth of the people in the world being killed.

Even then — and that shows how serious natural pandemics
can be, but it is still difficult for the natural ones to do us in, as partly
seen by the fact that we have survived 2000 centuries. And in fact, most
species survive for something like a million years, despite the risk of being
wiped out by pandemics. But look at the rate of improvement in biotechnologies
and the things that we’re now able to do. And look at the rate of
democratization of biotechnology. So the gap in time between the world’s best
scientists first developing a major new technique, such as gene drives or CRISPR, and it then being used by undergraduate students to win science competitions, is about two years.

So the timespan from the point where only one person can
do something to the point where there are tens of thousands of people in the
world who would be able to do it is just a couple of years. And then you think,
‘What about a decade or two later? How long will it be before a kind of
reasonably bright high school student could do this thing?’ It may not be that
long. And then the proportion of the people who could do it is so large that
there’s a reasonable chance that it could include someone with very bad
motivations.

And what are the examples in the past which suggest to you that this could happen? Because we haven’t… Obviously there have been people who’ve speculated that the current pandemic was some sort of engineered weapon, but in the past, there have been accidents where things have gotten out.

Yeah. So, there’ve been a lot of lab escapes of extremely
serious pathogens. The last case of smallpox in the world — and smallpox is
something that killed hundreds of millions of people in the 20th century — got
out of a lab in the UK. The last foot-and-mouth epidemic among sheep and cattle
in the UK was a lab escape from a UK lab. I think that the most recent case of
SARS was a lab escape from a Beijing lab, and that’s confirmed. It does happen
surprisingly often. And the Soviets had a number of major errors with their
biological weapons program. They accidentally sprayed anthrax over a major
city, and they accidentally released smallpox. They were testing smallpox bombs in a lake and accidentally infected a ship, which then took it back to shore.

And they had to stamp out this outbreak after smallpox had
been eliminated in their country. So this kind of thing can definitely happen.
Biological weapons programs by state actors are another thing that I’m concerned about. There could be some reasons for them to want extremely damaging weapons, even if they don’t have immunity to them — something equivalent to a nuclear deterrent, in terms of a mutually assured destruction policy. And maybe states that can’t afford nuclear weapons would be able to do mutually assured destruction that way.

Stefan Hoermansdorfer, specialized veterinary surgeon for microbiology, displays a smallpox vaccine phial at an institute in Oberschleissheim near Munich, February 27, 2003. Via REUTERS/Michaela Rehle

Well, based on what we’ve seen in the past and the
evolution and the democratization of this technology, what are the odds of
this? What odds would you attach to an engineered pandemic? It seems like they
should be 100 percent, because if it’s out there and people can do it — whether
it’s a bright high school student who makes a terrible mistake or it’s someone
who’s intentionally trying to do it — it seems like the odds for this should be
awfully, awfully high.

Yeah. I should add, we’ve also seen groups like Aum
Shinrikyo in Japan that did the sarin gas attacks. They had scientists on their team, and at least some of their objectives were to destroy all of
humanity. So if they could have had access to technologies that would do that,
it looks like they quite likely would have tried to do it.

This is the 12 Monkeys scenario where you get a scientist — and as you said, maybe it doesn’t really require a bright scientist at some point — with a vision who thinks humanity’s a problem and engineers something.

Yeah.

It just doesn’t seem… So again, it seems like the odds
of something like this happening should be pretty high.

So I think over the next century, the chance that people
try stuff like this is greater than half. But the chance that it actually works
will be notably lower, particularly if there are any kind of warning shots — any attempts people make to do something like this. Suppose someone tries
something and it kills a million people. Then the effort on defensive
technologies or surveillance of people who have the skills to be able to do
things like this or surveillance of the facilities in which they could do it,
and so on, might be very extreme as a response. That might offer us some kind
of protection. Ultimately, my best guess is about a one-in-30 chance over the
next century that a successful attempt at this destroys humanity.

I want to return to this topic because it’s an obvious one
to discuss during the current pandemic. But let’s turn to concerns about
artificial intelligence. It’s a very popular theme in films, and certainly some
AI experts seem concerned about it. You’re saying they’re right to be
concerned?

Yeah. There are actually a lot of leading AI experts who
are concerned about it. I tried to zoom out a bit in the book and really kind
of tell the story of humanity and where we’ve come from and where we might be
headed. You can look at where we came from and ask why humanity is the unique species on the planet that’s in control of its destiny — that has this extreme potential to fashion the future in an amazingly wonderful way — and why it’s us who have that potential and not other species, like chimps or blackbirds, whose fate is fundamentally in our control. Why is it that it’s all in our control and we’re not in someone else’s control?

It’s ultimately because we are the most cognitively
capable species on the planet. Something like intelligence, but perhaps broader, including the ability to cooperate and work together. Leading AI scientists have been surveyed, and the typical researcher thinks there’s a more-than-50-percent
chance that over the next century we will develop systems that exceed the
abilities of humans across all domains.

If we do create more cognitively advanced systems, why
would we remain in control? Or would we just be at their mercy? How would we
survive that transition, fundamentally? And I think that we probably will
survive such a transition, but I think there’s a really non-negligible chance
that we don’t. The ways of surviving it, such as making sure that these systems
are within our control, even though they’re more powerful than us, or making
sure that these systems are motivated to produce the kind of utopia that we
would dream of, are extremely difficult to achieve. And it’s the people who are
trying to work out how to solve those problems who are the leading voices of
concern about this. So, yeah.


I mean, of course there are going to be people, whether
it’s AI or it’s CRISPR, who say these numbers are all just too high, so let’s
stop it. Let’s just put a moratorium on it. Let’s ban these technologies until our ethics and our wisdom become greater — and that may never happen, so until it does, we just need to stop.

If there was a fundamental kind of renunciation of
technological progress, I think that itself would destroy our future potential
— we would achieve only a tiny fraction of what we could have done, if we were to do that. But there could be a more careful version of going slow on the most risky areas until we have shown ourselves ready. For example, maybe once we’ve gone an entire century without a world war, then we can unlock this technology — we’re mature enough to actually start pushing on with it. That wouldn’t seem like an unreasonable thing. That could be a
good approach for a more sane and coordinated world.

In our less sane and less coordinated world, I’m not sure
that having the few people who care about these risks pushing for going slow
would achieve very much, because you really do need a lot of different groups
to be going slow all at the same time. Otherwise, the more responsible groups
are effectively abdicating the control of the technology to the less
responsible ones.

What lessons do you draw from the COVID pandemic, both
regarding our ability to coordinate solutions on a global level and our ability
to anticipate crises? It seems like we were woefully unprepared for this, even
though there had been plenty of less serious pandemics over the past 20 years
that should have served as warnings. Our handling of this pandemic makes me very pessimistic about our ability to do anything about these other problems, particularly ones which we have not yet experienced.

Yeah, I think that’s quite fair. I think that the
preparations were shown to be woefully inadequate. There’s a lot of
conversation assuming this was an unprecedented event. Whereas, in fact, it’s
entirely precedented. It’s been about 100 years since the last pandemic of this
scale. That’s not very long. A once-in-100-years event means there’s about a
one-in-10 chance of it happening in every 10-year planning horizon, which is
pretty huge when it comes to a big risk. So it was just kind of kicked down the
road by lots of administrations, and once it fell outside the length of most
people’s lifespans, then there wasn’t that kind of cultural memory of the last
time. So it’s extremely difficult to get governments and other institutions to
care about things that people don’t vividly remember having happened.
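(As a rough editorial illustration of the arithmetic there — the framing is an assumption, not Ord’s own calculation: treat a once-in-100-years event as having roughly a 1 percent chance of occurring in any given year. Over a 10-year planning horizon, the chance of at least one occurrence is then

1 − (1 − 0.01)^10 ≈ 0.096,

or about one in 10, as he says.)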

So one piece of good news is that it is hopefully providing some kind of inoculation for humanity — preparing us, reminding us that we’re still vulnerable, for all of the improvements we’ve made over this time. Hopefully we learn the more
general lesson, not just to prepare for pandemics, but to prepare for
catastrophic risks. But the track record of learning the appropriately broad
lesson is not great. To some extent, all existential risk is by definition
unprecedented, but just because we haven’t experienced it yet doesn’t mean that
we’re invulnerable to it. It’s an extremely difficult challenge.

Even though we’re soaked in a culture which is focused on
catastrophe — whether it’s AI, nuclear war, pandemics, or zombie apocalypses —
we still have trouble getting people to prevent or prepare for these crises.
What is it about humanity that makes us less likely to look ahead and prepare
for these risks? Are there some psychological shortcomings we’re fighting
against?

The popular culture thing, I think, is a mixed bag. In the
case of asteroid protection, the Deep Impact and Armageddon films
that came out actually seem to have helped a lot in terms of getting the
funding that was needed to scan the skies for asteroids, somewhat surprisingly.
But when it comes to other risks, I wonder whether it actually just makes it
worse by associating it very directly with the kind of comic book plot type
thing. It’s often a super stimulus that people who want to take the lazy way out when writing fiction, often for children, reach for: “The whole world’s at stake. You saved the world.” It’s seen as a kind of gauche device and makes it feel very unreal. At least that’s the kind of reaction that I tend to get. So I’m not actually sure whether having it all around us is actually helping at all.


As for our own psychological shortcomings, there’re a
couple of these. One of them is scope neglect, which is the inability of people to take seriously the fact that if something could affect, say, a thousand or a million times as many people, then it’s a thousand or a million times worse — and to take appropriately scaled measures. That’s a serious problem.

There’s also probability neglect. When probabilities are
very small, they often will either get overemphasized incorrectly or just rounded
off to zero, as opposed to trying to actually multiply them out by the very
large number of people who’d be affected and see what the expected value is.
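(Again, as a rough hypothetical illustration of that expected-value point — the numbers are illustrative, not Ord’s: a 1-in-1,000 risk that would affect 8 billion people works out to

0.001 × 8,000,000,000 = 8,000,000

lives lost in expectation — far too large a number to round off to zero.)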

And there’s also a problem where if something isn’t very
vivid to us, then we have a lot of trouble responding to it. So if something’s
happened very recently, we can see it and feel it, and we make appropriate
responses. But if it’s just someone telling us that it’s important and writing
some numbers and the numbers are big enough that we should be paying attention,
but they’re just kind of marks on a piece of paper, we have trouble acting.

And I think that’s the one that really got us when it
comes to COVID. I was a bit surprised by this, because I would have thought
that the epistemic problems — like the challenges for scientists who really
care about these risks in working out how on earth to even put probabilities on
them — would be one of the biggest challenges. But in the case of COVID, we
actually had pretty good probability estimates and they were basically ignored.
And then part of that was due to the psychological problems. And part of it was
to do with incentive problems, market failures, and political incentives.

What about the risk of a “world in chains” scenario, where
there’s some totalitarian takeover of the world — forever stunting humanity’s
progress. Look at China: 20 years ago we thought technology and the internet
would make it freer, but now they’re putting together a sophisticated
surveillance state. So to me, the scenario of an oppressive state aided by
powerful technology does not seem like a crazy scenario.

Yeah. I don’t think it is, especially looking further into
the future as these technologies get more advanced. In some ways, it’s one of
the earliest scenarios to be contemplated.

Like 1984?

Yeah, exactly.

Yeah.

So if you think of the Stasi as a good real-world example,
it’s not clear that they could have quite scaled up what they had, because they
needed humans to spy on other humans. And you kind of need some trusted humans
to start that off. But with AI techniques and digital surveillance, it may be
possible to build such a kind of surveillance infrastructure that we get
trapped and a dictator could maintain control indefinitely, or through their dynasty.


The way to think about that is we may not yet have the
technologies to do this, but it looks like the kinds of technologies we’re
developing make that easier and easier. And it seems plausible that this may
become possible at some time in the next 100 years. And you don’t need to think
of it as it being locked in for a million years or something. It’s just enough
that it gets locked in, say, for 100 years, during which even more advanced
technologies are developed in order to lock it in for a thousand years, during which more advanced things are developed, and so on.

So yeah, it does concern me.

How do we avoid these catastrophes? How we’ve reacted to
the pandemic and climate change does not make me hopeful, so what do you see in
the world today that makes you confident we can avoid these risks?

Well, look, one thing to say is that I’m not confident that we can do it. I mean, ultimately, I say there’s about a one-in-six chance that
humanity’s long-term potential is lost during this century. And that’s my best
guess, taking into account our efforts to deal with it. I think that our worst attempt — a business-as-usual scenario where we didn’t really make any strong attempts — puts us at more like a one-in-three chance. And if we really got our
act together, I would say it’s like a one-in-100 chance. So I do think that
part of my optimism that we survive the century is just because maybe the risks
don’t come to fruition in this century, rather than us doing a great job of mitigating
them.

But I think that there are a whole lot of different levels at which we could act. We should be thinking about institutional change — in terms of incremental change, such as developing a new body within the UN that
is focused on safeguarding humanity from existential risk, coordinating action
between countries and scientists, and so on. Maybe an IPCC for existential risk
or something like that. That would be an incremental change.

But we should also be open to much more radical changes. It
was only at the end of the Second World War, less than 100 years ago, that most
of our global international institutions were created. Maybe in the next 100
years there’ll be another juncture like that. And we should focus on making
sure that the institutions that respond to the next big warning shot have major
changes to help protect the future.

And then there’s also questions about what individuals can
do. So I think that people who are experts in science, perhaps in some of these
dangerous technologies, can work with professional bodies for those
technologies to help do more responsible research. And also some people who are
good at science and technology should go into government and take up the other
sides of these things. They often point out that people in government don’t really understand the science and technology, but that’s
because the people who do understand it tend not to go into government. So they
need to actually cross over and work from the other side too.

What shouldn’t we do?

You could look at some of what has happened with climate change and work out a bit about what one shouldn’t do, more generally. I think
people who are concerned about this shouldn’t just be monotonous and nag
everyone about it. I think that you’ve got to be careful on that. There have
been suggestions that the solution is world government. This is not fitting in
with the climate change analogy here, but Einstein and others suggested this as
a way out of the nuclear impasse. But that also would produce its own
existential risks. It would increase the chance that, if the world government
went bad in a totalitarian direction, we’d get trapped.

World Health Organization (WHO) Director-General Tedros Adhanom Ghebreyesus attends a news conference organized by Geneva Association of United Nations Correspondents (ACANU) amid the COVID-19 outbreak, caused by the novel coronavirus, at the WHO headquarters in Geneva, Switzerland July 3, 2020. Fabrice Coffrini/Pool via REUTERS

There’d be nowhere to run.

Exactly. Increasing international cooperation or
coordination could well be good, but there’s, perhaps, a limit to how far you
want to increase it before it starts to become bad. I don’t know where the line
is there. And also I think people who really care about this shouldn’t do any
kind of illegal or illegitimate actions. Again, if you become an extremist
about something, in general that’s going to turn everyone off and just set your
cause back, as well as arguably being a terribly wrong thing to do in the
first place.

So there are a few examples of what not to do. I have a
whole section on it in the book.

To wrap up, sometimes I wonder if people don’t focus on
these risks because they just don’t see what we have to lose. There’s so much
negativity in popular media about the future that we just don’t have an optimistic image of the future — one that makes people think we have the chance for something pretty spectacular. So what is that great image of the future that’s
at risk, which should motivate us to overcome these risks?

I think actually you can come at this in a couple of different ways, and I try to do so in the book. One is based on the future. And
I think that if you look at the past and you see this accumulation of
innovations over the hundred billion humans that have come before us and everything
that they built up around us, it’s no surprise that our lives are of higher
quality than lives in the past because we have a hundred billion people who
worked together to build this for us. And if you look in more detail at the
statistics, lifespans have more than doubled over the last 200 years. The country with the lowest life expectancy now has a higher one than the country with the highest life expectancy had 200 years ago.

So we’ve had massive improvements in prosperity, and in
things like literacy and a lot of areas that really matter. And if you look at
the history of pessimism on this, at this kind of continuous progress at some
kind of scale — if you zoom out enough, let’s say every 50 years getting
substantially better than 50 years before, even though there could well be
serious downturns for particular areas — it’s very hard to understand why, seeing
all of this improvement behind us, you’d predict that we’re at the peak now and
it’s going to get worse. So that’s one reason just to kind of see this
continuing quest for a more just and prosperous and free society.

But also you could look back and ask, “What have we
got to lose?” As well as losing the future, we would lose everything from
the past. We would be the first generation out of thousands, 10,000
generations, to break this chain. And if you think about what’s bad if a
culture is destroyed, everything about that would be even worse. In this case,
it would be the final ruin of every language, culture, and tradition. Every
temple and cathedral all destroyed forever. And ultimately, the force in the
universe that was pushing towards what is good or just, in terms of moral
action — the fact that humans, unlike chimpanzees or birds, can actually see
that something would be better for others or it would be just, and that’s a
reason to do it and to push in that direction — that would be gone. As well as
love and appreciation of beauty — all of these things would be forever stripped
from the world.

So I think we’ve got a lot to lose. I can understand why
people in a moment of despair kind of throw up their hands and say these kinds
of things, but I think if they really reflect on it, we have everything to
lose.

My guest today has been Toby Ord. Toby, thanks for coming
on the podcast.

Thank you.
