The Anthropic-Pentagon Standoff
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning, everyone. Let's talk about a standoff in the war that's not between the US and Iran. You may have heard this, you may not have heard this. It's between the US and the artificial intelligence company Anthropic. Now, Anthropic has certain ethical guidelines that it tries to abide by in its AI product, known as Claude. They want to adhere to these ethical standards even though they are a defense contractor voluntarily doing business with the Pentagon. This is a war-specific dispute, yes, but it also raises questions for so much of the growing use of artificial intelligence in so many parts of society, in so many aspects of our lives.
On this specific story, here's where the drama began. Last July, Anthropic became the first AI company to receive a contract with the Department of War, or the Department of Defense, if you still prefer. In this contract, Anthropic was granted national security clearances, which allowed Claude to be used on the military's classified networks, making it the first AI model ever to operate at that level. Baked into that contract was a catch: Anthropic's own terms of service, which prohibit Claude from being used in autonomous weapon systems. We'll explain what that means. Or for the mass surveillance of American citizens. You probably know what that means.
The clause is now at the center of a standoff that has put the company's entire relationship with the federal government at risk and is even raising new questions about possible abuse of power by Trump and Secretary Pete Hegseth. Let's discuss. With me now is Steven Levy, editor at large for WIRED. He wrote an article called "AI Safety and the War Machine" just before the US launched this war. Now it's doubly relevant. Steven, thanks for joining us. Welcome to WNYC.
Steven Levy: Thanks. My pleasure to be here.
Brian Lehrer: To get us into this story, I think it's worth hearing a couple of sound bites of how Anthropic CEO Dario Amodei has framed what his company is actually fighting for. Here he is speaking with CBS reporter Jo Ling Kent last week.
Dario Amodei: I believe that we have to defend our country. I believe we have to defend our country from autocratic adversaries like China and like Russia.
Brian Lehrer: That's on the one hand, but at the same time--
Dario Amodei: I have always believed that as we defend ourselves against our autocratic adversaries, we have to do so in ways that defend our democratic values and preserve our democratic values.
Brian Lehrer: Steven, let's start there. How did an AI company with these self-described values end up in this deeper relationship with the US military in the first place?
Steven Levy: That's a great question. I think that Anthropic was built on the premise that they can do AI safety better than other companies, and that other companies would emulate them and build safety just as strongly into their products. They call this the race to the top. As a matter of fact, Anthropic was founded by people who left OpenAI, which was a leading startup in the AI field, because they felt that OpenAI wasn't being safe enough. They also do believe, as Dario said, that it's important to use these technologies to advance our national interests.
As a matter of fact, Anthropic has been more hawkish than other companies in saying that we have to deny our best chips and technology to China, whereas the Trump administration backed down on its original stance of doing that and licensed chips to China, the Nvidia chips. Anthropic is an outlier saying we shouldn't do that. They put themselves in this position by signing, as you mentioned, the contract, saying, "We can use our technology," and they have among the best AI technology, "to protect our nation, to use it in the national security context, but here's what we won't allow."
One thing is autonomous weapons, in terms of drones that can pick their own targets and exercise lethal force. Anthropic believes that we're not ready for that. The technology is not good enough to do that reliably, and they don't want their technology being used to mess up in targeting. Down the road, there's a specter. If you have autonomous drones out there that make decisions on their own, then this stuff can get out of control. You could send swarms of drones that decide on their own who to kill. That's an alarming concept. Pete Hegseth--
Brian Lehrer: Here, in fact-- Hang on, we'll get-- I want you to talk about Pete Hegseth, but let me play another clip of Amodei, in his own words, making the argument against fully autonomous AI weapons. He doesn't want his AI model used that way. This again was on CBS.
Dario Amodei: The AI systems of today are nowhere near reliable enough to make fully autonomous weapons. Anyone who's worked with AI models understands that there's a basic unpredictability to them that, in a purely technical way, we have not solved. There's an oversight question, too. If you have a large army of drones or robots that can operate without any human oversight, where there aren't human soldiers to make the decisions about who to target, who to shoot at, that presents concerns, and we need to have a conversation about how that's overseen.
Brian Lehrer: That sounds like common sense. What's the argument against it from Pete Hegseth?
Steven Levy: He basically is taking the stance that, in the heat of war, we should use whatever is available to us in the way that will be maximally efficient and lethal, as we need it. He decided about a month ago that Anthropic couldn't be trusted to provide the technology if they weren't going to be on board for anything the Pentagon wanted to use it for, and he wanted to change the terms of the contract. That's where we wound up with this impasse.
Brian Lehrer: You can imagine, listeners, if you've run into AI mistakes or what they call hallucinations in little ways in your life, what the implications might be if AI models are picking out their own targets without human supervision. I'm thinking of a little thing, Steven, where I was looking up wars going on around the world, and the AI answer in my Google search was the first thing that came up.
One of the ones that it listed for Latin America was Armenia versus Azerbaijan. Guess what's not in Latin America. If it can do that, then, I don't know, maybe that girls' school in Iran where 175 people reportedly were killed already in this war, most of them children. Somebody made a mistake. We don't know if it was AI that made a mistake, but you don't want AI to be able to potentially make that mistake without human supervision. Right?
Steven Levy: That's right. Oversight is so important when you use AI. Whether you're doing research or in the process of putting that research into a final product, you've got to check everything very carefully. Even more carefully, as you suggest, if you're targeting something with lethal force, you really want to be sure that you've got it right. Otherwise, you will hit a girls' school instead of a military target.
This is what greatly concerns Anthropic, and down the road, I think it should concern all of us in terms of how AI is used in general. The big question is that there's no ceiling for the capabilities of AI as it's being developed. We're trying to figure this out. Companies like Anthropic and OpenAI and Google and the rest of them are developing this technology and trying to make it safe, but they're in a race with each other over who's going to get to artificial general intelligence first. That's higher in their priorities, I think, than doing it safely. We see through the military lens how dangerous this can be.
Brian Lehrer: Again, the two ethics clauses in Anthropic's terms of service prohibit the AI model, Claude, from being used in autonomous weapon systems, that's the one we've been talking about, and from being used for the mass surveillance of American citizens. Here's CEO Amodei on that.
Dario Amodei: I am concerned that AI may be uniquely well suited to autocracy and to deepening the repression that we see in autocracies. We already see it in the kind of surveillance state that is possible with today's technology. If you think of the extent to which AI can make individualized propaganda, can break into any computer system in the world, can surveil everyone in a population, detect dissent everywhere and suppress it, make a huge army of drones that could go after each individual person. It's really scary.
Brian Lehrer: Steven, is Amodei describing there anything that the Pentagon was specifically asking for, or is that a more hypothetical concern at this point?
Steven Levy: I think it really isn't a remote possibility. You can see it in the nature of the argument, if you go way down into the weeds of the contract Anthropic and the Pentagon wanted to do and the contract the Pentagon did execute with OpenAI, which rushed to take Anthropic's place. You want to watch out for what they won't promise. What the government seems reluctant to grant the AI companies is a promise that it won't take all kinds of information that's available in the public sector or from data brokers and then use it in how it treats citizens.
It turns out what's legal to do could be much more powerful if you use AI, and much scarier in terms of a surveillance state. I think the AI companies, particularly Anthropic, are very sensitive about that. They don't want to be the trigger that turns things into a 1984 dystopia. They're trying to say, "Don't use us for that."
Brian Lehrer: Now we get to the retaliation piece, if retaliation isn't too strong a word. How did the Pentagon react to Amodei and his company Anthropic wanting these two ethics clauses in their contract with the Pentagon?
Steven Levy: This is the really striking part of the story in terms of how the administration is acting. You would think that they would say, "You know what, Anthropic? You've got these red lines. We can't accept these. We're going to go to another company and use their technology. We'll have a transition period where we'll phase out your Claude AI model and do something else, and we'll go our separate ways." Instead, what the Pentagon did, and the administration has embraced, is to label Anthropic what's called a supply chain risk. This is something typically used against foreign companies-
Brian Lehrer: Countries.
Steven Levy: -which are seen as our enemies. They've gone even farther and said that any company that does business with Anthropic, the government will not do business with. That's a potentially crippling blow to a corporation.
Brian Lehrer: Are they going through with it?
Steven Levy: We've seen several agencies, Treasury, Health and Human Services, say, "We're not going to use Anthropic, and we're not going to do business with other companies that do business with Anthropic." Now, so far, Donald Trump has only put this in writing in a tweet. They actually haven't formally invoked it, but the agencies are saying they're doing it anyway. Anthropic has indicated they're going to sue to fight that.
Brian Lehrer: The Financial Times reports that as of this morning, talks between Anthropic and the Pentagon have quietly reopened. Is there a version of this that ends well from an ethical standpoint, or does it seem like Amodei will need to relinquish his ideals for the sake of his company's profits or even existence if the retaliation is too strong?
Steven Levy: Amodei put out an internal memo that was very strong. He said that the reason they're being punished is that they don't cater to Donald Trump. They don't show what he called dictator-like obedience to Trump. They don't give money the way, for example, one of the top executives of OpenAI did, who gave $25 million to a Trump PAC. He said that's what's behind all this. It's a little surprising that they're going back to the table now.
I suspect if that's happening, it's because the Pentagon is realizing it's maybe tougher than they thought to disengage Claude from their activities, especially during a war, and they need Anthropic's help. To your larger question, there actually isn't a way to spin this as something we're all happy about. A couple of years ago, we were in a position where all the AI companies and most of the people in the government and Congress were saying, "You know what? We have to come up with some sort of regulation, some sort of plan to figure out how we're going to deal with AI's very scary powers."
There are great opportunities, but also very scary powers. They talked about regulations, they talked about international bodies that might keep AI out of certain kinds of warfare. That's all gone now. We're all about competition with China in AI. The companies are all about competition with each other. I see a rather bleak outcome, and this flap is a symbol of it.
Brian Lehrer: I said in the intro that this standoff between Anthropic and the Pentagon is obviously specific to that company and that branch of government, but it might have overtones for the safe deployment of AI in all kinds of sectors of American life and society around the world. Do you see that?
Steven Levy: Absolutely. As I just said, this is an indicator that what we thought, even very recently, was an important effort, to work internationally to figure out how this technology could be kept from being used to the detriment of humankind, has gone by the wayside. Now it's accelerate, accelerate, accelerate, until AI is beyond the point where we can control it.
Brian Lehrer: Later in the show, listeners, we're going to talk about risks and benefits, according to a doctor, of using AI in any way with respect to your medical life or your health. That'll come up in about an hour. For now, we thank Steven Levy, editor at large for WIRED. Among his recent articles, "AI Safety and the War Machine." Thank you very much for joining and explaining this to our listeners. I think this one's flying a little below the radar with the actual military strikes that are going on, but it's so important. Thank you.
Steven Levy: Thank you, Brian.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
