MICHAEL BIEHN AS KYLE REESE: The terminator is an infiltration unit, part man, part machine. Underneath it's a hyper alloy combat chassis, microprocessor controlled, fully armored, very tough.
BROOKE GLADSTONE: In this clip from the original “Terminator” movie, Kyle Reese is explaining to Sarah Connor who or what that humanlike machine trying to kill her was. The threat of manmade machines rising up against human beings is perfect fodder for science fiction films, but a group of academics in England are looking at the possibility that a scenario like that might one day leave the realm of fantasy and become reality. A new research center at the University of Cambridge plans to look into extinction-level risks to humanity.
MARTIN REES: There are things that could be so catastrophic that we should be scared by even a small chance of them happening.
BROOKE GLADSTONE: Martin Rees, no relation to Kyle Reese, [LAUGHS] is a professor of cosmology and astrophysics at the University of Cambridge and is cofounder of its Center for the Study of Existential Risk.
MARTIN REES: Nuclear war, bio terror, pandemics, cyber terrorism, climate change and perhaps computer networks will develop a mind of their own. That may be science fiction, it may not be, but what we’re trying to do is to get together a group of people to try and think about which are the serious risks.
BROOKE GLADSTONE: Why did you see the need for a center like this? How pressing, really, are these existential risks?
MARTIN REES: People fret rather too much about what are actually rather small risks, like carcinogens in food, train crashes and things like that, but they don't worry enough about these other kinds of risks which are perhaps improbable but which are so serious if they happened that even one occurrence is too many. And just as you still pay for insurance on your house, even if you don't expect it to be burnt down, so we feel it’s worth a bit of investment in trying to understand and trying to protect against these new kinds of risks.
BROOKE GLADSTONE: When you say that people may worry overmuch about carcinogens in food and not enough about these larger existential risks, how much is enough?
MARTIN REES: Well, I think enough is a bit more than they’re doing now -
- because hardly anyone is thinking about them, really. That means they could be taken over by flaky scaremongers who may get them out of proportion, and what we want to do is to try and look at them seriously. These are a growing kind of threat because we are in a more interconnected world –
BROOKE GLADSTONE: Mm-hmm.
MARTIN REES: - we depend on a worldwide distribution of food, worldwide computer networks for the financial system and others, and that makes us more vulnerable to breakdowns and, indeed, to ill-intentioned small groups who can cause huge damage. The way I like to put it is that we are in a global village and the village idiot in a global village can have a global reach. And we’ve got to protect against them and protect against breakdowns. So all these things deserve a bit more attention than they’ve had up ‘til now.
BROOKE GLADSTONE: What’s your lowest probability existential risk?
MARTIN REES: I hope the runaway computer catastrophe is no probability, but I don’t know. When I say I’m an astronomer, people say, are you worried about asteroid impacts? Well, I am a bit, but they’re no more likely now than they were for the Neanderthals. It’s a small risk that we've had throughout the history of life on our planet. But the kinds of risks that worry me more are the kinds that we bring upon ourselves. Although our Earth has existed for 45 million centuries, this is the first century when the main threats are caused by one species, namely us.
BROOKE GLADSTONE: You guys are sitting, let us say, in an ivy-covered environment, thinking big thoughts –
MARTIN REES: Indeed, yes.
BROOKE GLADSTONE: And I’m just wondering how much impact those big thoughts could have on the real world.
MARTIN REES: I hope we can quantify these threats and also perhaps exert a bit of propaganda, so that people take this more seriously. There was a big article in a British newspaper which publicized this, and I’ve been amazed at the generally positive response. And so, we are just getting up steam, and we hope that we will be able to gain sufficient funding for a research center where we will be able to have people who can focus a substantial fraction of their time on these issues, rather than leaving it just to a few professors like myself.
BROOKE GLADSTONE: You know, I don't know if you’re a great student of how the media work, but there is nothing we love more than stories about imminent destruction.
MARTIN REES: Especially when you can have illustrations of robots from science fiction movies and the Terminator and all that. That’s had all the ingredients of a good story.
BROOKE GLADSTONE: Okay, so I fell for it, I admit it. We’re – we’re cheap. But that said, there is an almost direct relationship between the amount of potential mayhem and the amount of media coverage.
MARTIN REES: Yes.
BROOKE GLADSTONE: And the likelihood of the event really doesn't play into it very much.
MARTIN REES: Indeed, that’s true, and that, I think, emphasizes why it is important to get some more academic people who can actually decide which of the risks are serious enough to deserve attention, to try and redress the balance between the most journalistically appealing risks, which may be the least probable ones, and the others which we are still in denial about and aren’t taking seriously enough.
BROOKE GLADSTONE: Martin Rees, thank you very much.
MARTIN REES: Thank you for having me on the program.
BROOKE GLADSTONE: Martin Rees is a professor of cosmology and astrophysics at the University of Cambridge and a cofounder of its Center for the Study of Existential Risk.