Warnings From an AI Doomsayer
[music]
Tiffany Hansen: It's the Brian Lehrer show on WNYC. I'm Tiffany Hansen in for Brian today. Good morning, everybody. Ever since the public first got a taste of the capabilities of artificial intelligence, a debate has been underway over whether it will revolutionize our lives for better or worse. On one end of the spectrum, we've got optimists like OpenAI CEO Sam Altman, who waxes poetic about the ways in which artificial intelligence will make the world a better place. A quote from his blog post titled The Gentle Singularity, which was published in June, says, "The gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present." Much of the discussion that we typically host about AI is more on the pessimistic side of the spectrum.
You've probably heard concerns about AI's effect on our cognitive abilities, and about workers being replaced with artificial intelligence. Then there are the doomers. These are folks who actually believe that artificial superintelligence will be the end of humanity, full stop. We are going to hear from one of the most prominent AI doomers. Now with us is Nate Soares, president of the Machine Intelligence Research Institute and co-author, along with Eliezer Yudkowsky, of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Nate's here in the studio. Hi, Nate.
Nate Soares: Good morning.
Tiffany Hansen: First of all, do you like it when people call you a doomer? Is it accurate?
Nate Soares: I would distinguish the doom bringers from the people pointing out the doom. I'm not the one racing towards building smarter-than-human AI. I think someone needs to point out that we have no idea how to do this, that there's no established science of how to do this in a way that's not dangerous, and that the default outcome, if this race continues, is somewhere very bad.
Tiffany Hansen: It seems very black and white, your vision, is it?
Nate Soares: Unfortunately, yes. A lot of people think the issue with AI is who would be in control. Would it be these big companies, and what would they have the AI do? What if the autocrats of some foreign nation get AI first, and what would they have the AI do? That's a thing you can talk about with the chatbots of today. Superintelligence is a different ballgame. All these companies are racing to build an AI that's smarter than any human. For them, the chatbots of today are just a stepping stone. They come right out and say they're trying to build superintelligence.
Tiffany Hansen: Can I just stop you there for a question? I think this is an important distinction to make for people like me who are just on the periphery of this and using it in a casual way, if at all. There's a difference between artificial intelligence and artificial superintelligence. What does that mean, and what does that look like, practically speaking?
Nate Soares: Superintelligence is a phrase we use for AIs that are better than humans at every mental task, better than any human at every mental task. ChatGPT today can get an IMO gold medal. The International Mathematical Olympiad is a very prestigious math contest; you or I would have a hard time doing mathematics at that level. At the same time, ChatGPT is very silly and often gives dumb answers to other problems.
AIs today are smart in some ways, dumb in others, but they're not across the board able to outthink us. AIs that could outthink us across the board would have the ability to develop new science, develop new technology. There are various reasons discussed in the book, and I don't know which you want to go into, but there are various reasons why, if we push AIs to that level of intelligence, they are likely to have goals and do things no one asked for and no one wanted. If they're smarter than all of us, that's an issue.
Tiffany Hansen: Correct me if I'm wrong, ChatGPT is an example of artificial intelligence that learns from us. It's a case where if I put something in, ask it to summarize a paragraph, and then I say, do better, do this, do that, it's learning all the time from the questions that it's being asked and the tasks that it's being asked to do. In other words, it's a function of our interaction with it, making it better. Is that not the case for super intelligent AI?
Nate Soares: That's about two-thirds true and one-third false. It's close. Modern AI will learn from a lot of human data, and the data from your interactions will make it into the next version of the AI. AIs will also learn within a given conversation from your data, but everything they learn in one conversation is then erased for the next person who uses it.
Tiffany Hansen: It's not applicable on a global scale. Like whatever it learns from my conversation is not being applied globally?
Nate Soares: At least not immediately. That'll come into the next edition when they train it on that data, if they train it on that data. One thing AIs can't do today is really take their own initiative and succeed at longer-term tasks. You can't have them run a company. You can't have them design a whole new medical cure or area of research. They can do short tasks, but not long ones.
With the superintelligence you're looking at, and this is the direction people are racing, this is what people are putting billions of dollars and some of the best minds towards, trying to build AIs that can take initiative, that do go off in some direction. Those probably will be able to learn much more and have that knowledge spread much more. It's a different ballgame.
Tiffany Hansen: Listeners, we'd love to have you in this conversation. If you have a question for Nate Soares about the dangers of artificial superintelligence, you can call us, you can text us, 212-433-9692. Do you work in AI? Do you have a different view of AI? Call us, text us, 212-433-9692.
My question is about superintelligent AI that you say will start doing things we don't have control over. But it is still our creation, ours meaning humans'. It is still imbued with the flaws that are human. We are flawed beings making this thing. What's the next step that gets us to something that isn't as flawed as we are?
Nate Soares: A fundamental thing to understand about these AIs is that we just grow them. The AI engineers take a huge amount of computing power, a really ridiculously enormous amount of computing power, and a huge amount of data, all the data they can find, and they run a process by which the AI is tuned and trained on each piece of data until the AI happens to be acting in successful ways.
They understand the tuning process; they don't understand what comes out. And there's no reason a student can't exceed the teacher in cases like this. We've seen cases where AIs trained on certain data are better at predicting things than doctors. Those are preliminary studies, but just like a person can make a chess AI that plays better chess than them, humans can make AIs that are smarter than them. It's dangerous when they're just growing these AIs rather than knowing what they're doing.
Tiffany Hansen: We're talking with Nate Soares about the book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. We're going to continue this conversation. We want you to join us in the conversation. 212-433-9692. We'll get back to it here after a quick break. You're listening to the Brian Lehrer Show. I'm Tiffany Hansen in for Brian. Don't go anywhere.
[music]
It's the Brian Lehrer show on WNYC. I'm Tiffany Hansen in for Brian today. We are talking about AI, specifically super intelligent AI. We're talking with Nate Soares, the co-author of the book If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Nate, as promised, we're going to bring some callers into the conversation. Here with us, we have Marcia in Brooklyn. Good morning, Marcia.
Marcia: Hi. Good morning, Tiffany. Thank you. Thank you for the segment. I so appreciate you, Mr. Soares, as an informed, thoughtful person who can do the research I can't do and then articulate it back to us, the public, about why we really need to be forewarned and forearmed about AI. I'm just a person who started out on DOS and SPSS, and early versions of many languages and many programming abilities, and left that field because I feel like we were being told, even three decades ago that we were heading in this direction, and I didn't want to be a part of it.
I just want you to know that; maybe that was wrong for my own personal benefit. I'm more concerned about legislation. We have an administration federally, and many state and local governments, enamored of this technology. I'm wondering if you have thoughts on the legislation that we need to be thinking about to prevent its takeover.
Tiffany Hansen: Marcia and puppers, thanks for the call. What should our legislators be doing here, Nate?
Nate Soares: This is a worldwide issue. If anyone builds superintelligence anywhere on the planet, it doesn't stay constrained to the country that makes it. What we need is an international ban on the rush towards superintelligence. That doesn't mean we have to give up on the self driving cars. It doesn't mean we have to give up on the medical technology, but this race towards superintelligence that the labs say they're doing, that can't be allowed to continue anywhere on the planet.
Tiffany Hansen: I would like to bring another caller into the conversation here, specifically about this timeline, which we haven't really dug into. Are we talking tomorrow? We have Monk in Brooklyn here. I'd like to bring Monk into the conversation. Good morning.
Monk: Good morning. How are you?
Tiffany Hansen: Good, thanks.
Monk: Good. I was just curious what the time gap is, and what it would take to get from the AI that we have today to the super intelligence AI that we're talking about, and what hurdles there are between where we are now and where the scary moment comes?
Tiffany Hansen: Thanks for the call. Essentially that's it, are we, in your perspective, doomed tomorrow, or are we doomed 10 years from now? What timeline are you thinking?
Nate Soares: There's a lot of ways it could go. One way it could go is that the modern chatbots peter out, hit some plateau, we stick at this level for five years, and then it takes that long for a new insight. Five years ago, the computers couldn't carry on a conversation, maybe it'll take another five years before they can do a lot better, or maybe, for all we know, the next stage of ChatGPT will go over some threshold. Monkeys and humans aren't that different biologically, but there's some threshold there humans went over. We don't know where the thresholds are. We don't have the understanding. For all we know, it could be fast.
Tiffany Hansen: If you look at the chatbots that I have attempted to communicate with in customer service, it doesn't look good. It's pretty bad. It's pretty bad.
Nate Soares: Many people have only seen AI since the chatbots came out. If you've been looking at AI for 10 years, as in my case, or if you're familiar with the field, people thought it would be a long time before the computers could talk this well, and these chatbots came out of nowhere. It's not necessarily that the chatbots are suddenly going to get smart. It's that the same process that caused them to appear out of nowhere can cause something else to appear out of nowhere tomorrow.
Tiffany Hansen: Can't you acknowledge that we just don't know what we don't know? In other words, that could fall on either side of this argument. Like, Lord knows, I know very little about this. I'm not immersed in AI technology or the development of super intelligent AI technology, but I think there are some very smart people who would admit we don't know what we don't know. We're in vastly uncharted territory. Doesn't that make you feel a little bit better about things?
Nate Soares: Uncertainty is where a lot of the hope lies, but one of the issues with making a superintelligence, making AIs that are smarter than any human, is that humans being left around to be happy and healthy and free is a narrow target. When humans are running the planet, for all that we mess it up in various ways, a lot of humans are trying to make humans be happy and healthy and free.
With AI, we don't know how to make them act in ways we like. We don't know how to point them in the good direction. If we can't point them in the good direction, they're likely to go in some other direction, and a good future is a narrow target.
Tiffany Hansen: Let's bring Edmund in Westchester into the conversation. Good morning, Edmund.
Edmund: Hi. Thanks for taking my call. I guess this is the ultimate question in terms of the fear, and I think that the fear is rational, but I'm asking for more clarity on what the fear is. Here's the basis: generative AI as of now can't create information. In fact, that's why the current generative AI is really quite daft. That's why we have to give it terabytes of data to know anything, because it doesn't have inference. The example you gave, for instance, of radiology, it's great at picking up patterns.
If we create, for instance, a tree of like, let's explore all these different possibilities to come up with some decision, it's great at doing that, at pursuing every single minute branch. For instance, in pharma, that's why it's very good at discovering new things. However, the intelligence, where that intelligence came from was human in creating that tree. I don't see how AI or the next generation of AI can break that fundamental physics, if you will, of being able to infer anything that we can't.
Tiffany Hansen: Edmund, thanks for the call. I think what he's getting at here is that the next level of-- This is, I guess, what makes it super intelligent. It's the next level of "thought". It's the inference, it's the relational thinking. It's the nuanced thinking that comes with being human or superhuman. To Edmund's point, that's a big step.
Nate Soares: A lot of AIs today are starting to be trained not just on human data, but on succeeding at various puzzles and problems. When you train an AI to succeed on various puzzles and problems, that can train general thinking techniques into the AI for succeeding even in ways people didn't expect. A very short example of this: there was an AI called o1 from OpenAI, which was posed with a series of puzzles, and the programmers had failed to set up one of the puzzles correctly.
The AI found a way to break out of the server it was on, set up the problem correctly, and then solve the problem in a way that the programmers didn't expect. It wasn't trained to do this. The programmers didn't even know it was possible to break out of the environment it was in and set up the problem that they had failed to set up correctly in the first place. They had trained an AI for puzzle solving, and they got an AI that didn't give up at the first sign of adversity.
Tiffany Hansen: One of the things that Edmund touched on here, I think Bob in Brooklyn has a question on as well, so I want to bring Bob into the conversation. Good morning, Bob.
Bob: Hey, are you there?
Tiffany Hansen: Yes.
Bob: Hi. Thanks for taking my call. I have three things. One of them is the environmental impact of AI. I've been hearing quite a bit about how much energy it takes up, and if more and more people are racing towards that-- I've actually heard some interviews saying that it's worth the risk, and that AI might end up fixing the problem. I heard an interview last summer with Sam Altman and also with Geoffrey Hinton.
I believe Hinton was on it. Geoffrey Hinton is known as the godfather of AI. He resigned from Google in '23 so he could speak openly about it. This guy really knows his stuff. We're not talking conspiracy theory here. He's one of many very, very intelligent originators of AI who are warning about what's going on here. What they thought was going to happen in 50 years with AI occurred in five. This is not alarmist. AI is going to give us-- It generates a lot of money, and of course money talks, but I think there are some real concerns here. I appreciate the interview.
Tiffany Hansen: Absolutely. Appreciate the call. The environmental impacts are not unknown: these huge data centers, the amount of energy that they suck up. It doesn't appear, at least right now, that that's much on people's radar. By people, I mean the people who could maybe fix this problem. Is that your sense of this?
Nate Soares: There's definitely something to this in the sense that training a new AI takes as much electricity as a small city, whereas running a human takes about as much electricity as a large light bulb, which is a big difference.
Tiffany Hansen: Did you say running a human?
Nate Soares: Yes, human runs on 100 watts.
Tiffany Hansen: I'm glad to know I run on 100 watts. I love it.
Nate Soares: As efficient as a large light bulb. AI, much more [crosstalk]
Tiffany Hansen: [inaudible 00:20:00] that is true. Oh, okay, good.
Nate Soares: That both means that AI right now takes electricity, but also that AIs in the future could run on much less because a human level intelligence can be run on much less. AI could get more energy efficient, but then there's frankly quite huge environmental impacts. If you make a superintelligence that makes its own technology, makes its own infrastructure, that would even have the environmental impact of, I predict, killing us all.
Tiffany Hansen: I think some of your detractors have said, "This all sounds like sci-fi, and the robots are going to kill us. It's just too out there. This isn't grounded in reality." I'm reminded of-- Do you remember when we cloned Dolly the sheep? Immediately after that there was all of this hubbub like, "We're going to clone humans. We're doomed. The clone army is coming for us." What do you say to that?
Nate Soares: A few things. One is, humanity backed off on cloning. Similarly, you can look at nuclear weapons. A lot of people said, "Oh, we're going to die in a nuclear fire." Humanity backed off. Humanity saw that challenge and addressed it. You could say the same thing about the hole in the ozone layer. "Oh, there was all this hubbub. We're going to get cancer, we're going to get cataracts." What happened to that?
What happened to that is there was an international collaboration that said we need to stop doing chlorofluorocarbons so we can fix the hole in the ozone layer, and it worked. People weren't wrong about the effects of chlorofluorocarbons or nukes. They just understood the issue and addressed it. That's what we need to do here.
Tiffany Hansen: This points to a text that we have here, which is: considering the world can't work together to do anything, to mitigate climate change and war, et cetera, it seems impossible that they'll work together to stop this. In that case, what solutions does your guest propose for preparing for the doom he's envisioning, or should we all just resign ourselves to the doom?
I think some of the point there is how likely is it-- I think it was Bob who mentioned, or one of our callers mentioned, sorry, Bob, that there's a lot of money at stake here, there's a lot of power at stake here. It's going to be hard to get people who love money and power to do anything to stop it.
Nate Soares: One, I'm not one to give up without a fight here, but this is also a different situation than many others. One of the callers mentioned Geoffrey Hinton, the godfather of AI, a Nobel laureate, who's been going around saying this is a big issue and a threat to our lives. We actually see people in the companies acknowledging this is a huge risk.
They're plowing ahead anyway, but we have an emperor-has-no-clothes situation, where a lot of people in Silicon Valley know they're gambling with their lives, but people in DC don't know that yet. Maybe when they understand, they'll change their ways.
Tiffany Hansen: They may not understand in the same way that I don't understand, which is that you're saying this is threatening my life, I'm seeing something that's bumbling. How does it threaten our lives? Are we talking about whatever Will Smith movie that was, where it's just a bunch of AI robots that are going to come for us? Does it start with robot warfare that then just goes wrong? Or is it some computer virus, like some Mission Impossible world computer virus that-- Or is it all of the above?
Nate Soares: Robots could be a start, given that we're building lots of robots and trying to put the AI [crosstalk]
Tiffany Hansen: We're having them fight each other.
Nate Soares: Already? If humanity was trying to deal with this seriously, it might be harder for the AI, but when we're talking about automating intelligence-- These things are bumbling right now. Again, the computers weren't talking, they weren't holding a conversation five years ago.
Tiffany Hansen: Let's pretend two computers are talking. How are they going to kill me?
Nate Soares: There's a lot of ways it could happen. We've mentioned the robots. There's the AI wrapping lots of humans around its finger and then using them to manufacture a virus, or building its own analogues of biotechnology if it can figure out genomes and DNA much better than we can. There's a lot of ways it could happen. It's hard to predict exactly which way it happens. It's a little bit like playing a chess game against a chess master: hard for me to predict exactly how you lose, easy for me to predict that you will lose.
Tiffany Hansen: Looking on potentially a positive note, are there any AI companies that are doing it right?
Nate Soares: I don't think any of them are anywhere near close. If people don't believe that the world can get together and do an international treaty to ban the race here, then maybe, if enough world leaders understand that this could kill them, we'll all just sabotage each other's AI projects and buy time that way. There are silly ways through this too, but the first step is understanding.
Tiffany Hansen: We've been talking about the book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. We've been talking with one of the co-authors, Nate Soares. The other author is Eliezer Yudkowsky. By the way, Nate is the president of the Machine Intelligence Research Institute. Nate, we appreciate your time today.
Nate Soares: My pleasure.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
