The AI-Powered War Machines Are Here
Brooke Gladstone: The US Military is using AI tools to identify targets in its strikes on Iran, even as the government feuds with the maker of that same technology.
Alan Rozenshtein: President Trump, in his Truth Social post, criticized Anthropic for being far left and for being woke. They're trying to impose essentially a sanctions regime.
Brooke Gladstone: When disagreeing with the government can put you out of business. From WNYC in New York, I'm Brooke Gladstone. Want to see modern AI-powered weaponry at work? Check out Ukraine, where both sides are using these.
Siva Vaidhyanathan: A hovering drone that stays far enough up in the atmosphere that it's almost impossible to even sense from the ground, and it's poised to strike.
Brooke Gladstone: Plus, nations that have recently struggled to preserve their democracies offer some possible scenarios for the US.
Zack Beauchamp: Ping ponging between authoritarianism and democracy, which is frightening to think about, but it could very well be the future.
Brooke Gladstone: All coming up after this. From WNYC in New York, this is On the Media. Micah Loewinger's out this week. I'm Brooke Gladstone. If, like much of America, you're puzzled by the who, what, where, forget about the why of our nation's war with Iran; never fear, the administration's brain trust has you covered.
News clip: The White House is using video game footage to promote its war strategy.
News clip: This is a video that was posted on X and other platforms that mixes Call of Duty video game footage with actual clips of actual American missile strikes from inside Iran.
Brooke Gladstone: Of course, that doesn't cover everything, like the fact that the US and Israel have launched well over 1,000 strikes so far from drones and missiles, killing over 1,000 people, including more than 160, mostly children, at an Iranian girls' school.
News clip: New video released by Iranian state media shows the grim work in Minab has already begun digging tiny graves for some of the littlest victims of this three-day-old war.
Brooke Gladstone: Also noteworthy, the Pentagon has used an advanced AI system called Claude in its attacks, even as it has condemned its maker, Anthropic, as a security risk. Puzzling indeed.
Siva Vaidhyanathan: For more than a decade, the US Military has been integrating AI tools into a whole lot of what it does.
Brooke Gladstone: Siva Vaidhyanathan is a professor of media studies at the University of Virginia, currently working on a book about AI and democracy.
Siva Vaidhyanathan: In its later years in Afghanistan, it was using tools from Palantir and other firms to help vet intelligence. The stuff that people had posted on social media, YouTube videos, anything to help identify a potential target or potential threat, or get a real sense of the political conditions on the ground. That practice now has reached a higher level.
Brooke Gladstone: Right. You mentioned Palantir, which has been providing intelligence analysis and military targeting to the US government and its allies, including Israel. It has AI technologies embedded in Israeli and US weapons systems. I'm still not clear about Palantir's role. What does it do?
Siva Vaidhyanathan: Well, Palantir supplies what you might think of as the operating system of a whole lot of military intelligence projects. They've done this largely through the army, also through the Marines, and now increasingly through all the areas of armed forces through a set of independent contracts. There was a lot of resistance during the early years of the Biden administration to having Palantir deploy an intelligence system that would replace one that soldiers had been using all the way back to the Iraq war.
Palantir managed, through its lobbying efforts and publicity efforts, to convince enough people in Congress and then ultimately in the Pentagon to become the dominant supplier of a platform that soldiers in the field and officers in headquarters around the world use to pick out targets and analyze potential intelligence and help them make judgments, if not make judgments itself.
Brooke Gladstone: Yes. The owner of Palantir is a great, good supporter of Trump. Gives him tons of money.
Siva Vaidhyanathan: Well, Palantir is a publicly traded company, so we actually probably own a piece of Palantir, like if you have a retirement fund. The chairman of the board is Peter Thiel, who is, of course, notorious for being the first leading Silicon Valley billionaire to openly support Donald Trump in 2016. When I look at Silicon Valley now, I see all of this stuff that is shaking.
The very premise of NATO, of the international order that has allowed the United States to be so dominant in military and economic matters, is now in doubt. I can't help but think it's largely under the influence of this handful of billionaires who want to see a multipolar world where the United States is operating as a rogue nation.
Brooke Gladstone: We'll be covering the really interesting contract battle between Anthropic and the Defense Department later in the show. Meanwhile, the Wall Street Journal reported that our military used Anthropic's Claude AI tool in its attacks on Iran, mere hours after Trump ordered government agencies on Friday to stop working with Anthropic because it posed a threat. This is interesting timing, no?
Siva Vaidhyanathan: It looked to me, and it looked to a lot of people who follow the AI industry, that the Pentagon and Trump were taking advantage of the moment and taking advantage of the fact that Anthropic has these principles built into their contracts. Two major principles. One, Anthropic does not want its technology used for mass surveillance of US citizens. Two, Anthropic does not want its technology used to develop autonomous weapons, weapons that will decide to deploy themselves, decide to kill people in the absence of human judgment.
Brooke Gladstone: The head of Anthropic said that their AI is not yet capable of making good decisions in this regard.
Siva Vaidhyanathan: Every one of my students who's ever come up with a fake citation in a paper knows this as well as anybody. You do not trust even the most high-level large language models to guide crucial decisions.
Brooke Gladstone: Claude isn't just a large language model.
Siva Vaidhyanathan: Claude is basically a large language model. Like many large language models at the enterprise level, it also does things that are really complicated. This particular version of Claude happens to do so with enough quality control for security and privacy that it can work with the government in this way. Which is why this sudden decision to abandon Claude, to shun Anthropic as a company, was so shocking, especially that it occurred just 48 hours before an attack on a major country and the beginning of a major war. It's just wacky.
Brooke Gladstone: We'll hear more from Siva later in the show, but first, let's dig into just how wacky the Pentagon's pronouncements about Anthropic are. Alan Rozenshtein, research director and senior editor at Lawfare, notes that Defense Secretary Pete Hegseth, demanding unrestricted use for all 'lawful purposes,' is using a rarely applied statute to crush Anthropic altogether by declaring it a 'supply chain risk.'
Alan Rozenshtein: This allows the Secretary of Defense, largely on his own authority, to designate a company a supply chain risk based on concerns that that company would engage in subterfuge or sabotage. The statute is written reasonably broadly. It talks about any company, any entity, any person. When you look into why Congress enacted it, when you look into the debates that were happening in Congress at the time, it's clear that Congress was concerned about the fact that a lot of high-end technological services and goods were being made by countries like China, like Russia.
That would be very concerning to have those in the defense supply chain. That's why Congress gave the Secretary of Defense the power to identify individual companies, block them from being part of defense contracts, and also block defense contractors from using those companies as part of defense contracts. Because you don't necessarily want Chinese technology as your prime contractor, but you don't also want to discover that it's being used as a subcontractor or some subcontractor for one of your projects. The marquee examples here are the Chinese companies Huawei and ZTE. These are makers of high-end telecommunications equipment. There's long been concern that-
Brooke Gladstone: -they're putting bugs in their equipment.
Alan Rozenshtein: Essentially. Then, for Russia, Russia doesn't produce so much hardware, but there is a famous cybersecurity company called Kaspersky, which produces a lot of software. Similar concerns that that cybersecurity software might have bugs, backdoors, all sorts of things like that.
Brooke Gladstone: Now, Hegseth is trying to impose this supply chain risk designation on Anthropic in an unusually punitive way, right? Or is this how it's always been?
Alan Rozenshtein: We don't actually know how it's always been because these are relatively new statutes, and they've only been publicly used once. There's certainly no evidence that they've ever been used against a domestic company.
Brooke Gladstone: This sounds to me a little bit like what Trump was trying to do to the big law firms and universities he deemed overly woke. Isolate them from income.
Alan Rozenshtein: I think that's actually exactly the right comparison to draw. President Trump, in fact, in his Truth Social post, criticized Anthropic literally for being far left and for being woke. This president, this administration, is singling out entities that do not do business on the terms that the administration would prefer. Instead of simply saying, "Okay, well, we're not going to do business with you," they're trying to impose essentially a sanctions regime.
Brooke Gladstone: Explain how that works.
Alan Rozenshtein: Well, sure. The law permits the Secretary of Defense to, again, not just ban a supply chain risk from doing direct business with the government, but from indirectly doing business with the government through another defense contractor. What Hegseth is purporting to do, and this is beyond the scope of the statute and almost certainly illegal, he's purporting to ban anyone who does business with the Defense Department from having any commercial relationship with Anthropic. Now, that's breathtakingly wide.
Not only would that potentially cut off a lot of Anthropic's enterprise customers, many of whom also happen to be defense contractors, even if they're not using Claude for any defense projects. It would also potentially ban companies like Amazon and Google, which are massive defense contractors, not to mention Nvidia, the chip maker, another defense contractor, from, for example, selling cloud compute or computer chips to Anthropic, which would effectively destroy the company.
Brooke Gladstone: Despite the fact that Claude may be the most advanced of these large language model AIs, the government really could put Anthropic out of business.
Alan Rozenshtein: Well, I think what the government is doing is pretty clearly illegal. Of course, there's always a chance that courts rule the other way. Also, even if Anthropic wins the legal case, if the government can scare away some of its clients because those clients understand that the government doesn't like Anthropic, and therefore might not like companies that themselves do business with Anthropic, that itself could be a big problem.
Let me back up just for a second and try to separate two issues that are often being conflated by all sides in this current situation. The first question is, what should the relationship be between the military and the private sector? Or to put it another way, if we're going to have a military industrial complex, should the military run that, or should the industrial side of that hyphen run that?
Brooke Gladstone: That's where the government has a point in its favor.
Alan Rozenshtein: Absolutely. In a democracy where you have civilian control over the military, you do fundamentally want the military to be able to use services and goods according to its judgment, as run by the civilian leadership and according to the laws. Now, I think there are reasons to be very skeptical that this administration, in particular, even can be trusted to abide by the laws. If we just abstract for a moment, there's a very strong argument that the military should push back against attempts by AI companies to restrict the use of those services.
The second question is, if the military cannot get an AI company to agree to do business on the terms the military wants, what should the military do? Should it say, okay, no, thank you, we're going to take our business elsewhere, shake hands, and leave? Or should the military try to burn the company to the ground? This second question does not seem to me to be a difficult one. I would have thought we could get bipartisan support for not burning the company to the ground, but here we are.
I still think, though, that it is useful, because however this resolves itself, the broader policy question is not going to go away. In an ideal world, what should the relationship between the military and private industry be? There, I think, it's a tricky question about which we could have an adult conversation. Unfortunately, the people running the Pentagon are not the adults we need to have that conversation.
Brooke Gladstone: Just to clarify, there hasn't been any formal announcement of the Pentagon's decision, nor has it issued official certifications and notifications to Congress. All we have is this X post from Hegseth and Trump's Truth Social post announcing, and I quote, and when my voice goes up, that's when it's all caps.
"I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them. Again, there will be a six-month phase-out period for agencies like the Department of War who are using Anthropic's products at various levels." I'm assuming that's why Anthropic hasn't taken any legal action.
Alan Rozenshtein: That is my understanding as well. Though first, I should say, Brooke, I think you may have missed your calling as a voice actress. My understanding is that, in fact, Anthropic has not received a formal designation. Given this administration, I don't think it's so surprising that they're not doing a great job dotting the I's and crossing the T's. Presumably at some point, they will get around to doing a designation.
Though I should also say it's possible that the same TACO principle that applies to the President might apply here to the Secretary of Defense. That either because they're currently slightly occupied with the war in Iran, or because they just realize that what they're doing is both silly and legally unsupportable, this all goes away. I think for the moment we have to take them at their word and assume that a designation is coming, at which point Anthropic will sue, as they've said that they will.
Brooke Gladstone: What legal arguments could Anthropic use against it?
Alan Rozenshtein: This is, as they say in the military, a target-rich environment for Anthropic. This is about as legally unsupportable as I think it gets. That's a big deal, because if there's any place where the government gets maximal deference, where judges do not want to be in the business of second-guessing executive branch determinations, it's national security. So that's saying quite a bit. There are basically three reasons why this designation is so flawed.
The first reason is that it's not at all obvious that the law at issue even applies to an entity like Anthropic, a US company that is transparently describing the limitations that it wants. The second reason is, whatever the ability of the government to exclude a company from government contracts, that in no way extends to the extremely broad secondary boycott of all commercial relations. Then the third reason is that just on its face, the government's declaration that Anthropic is a supply chain risk is absurd.
For a simple reason. Before the designation, Secretary Hegseth was threatening to invoke the Defense Production Act, which would require Anthropic to sell Claude to the government. Still, there's reporting that people within the administration are considering invoking the Defense Production Act. In addition, hours after the designation was made, we went to war with Iran, and Anthropic's services were used in that operation and, as far as I know, continue to be used.
The government is simultaneously arguing that the services are so vital that Anthropic might be compelled to provide them to the government, so safe that they can be used in ongoing military operations, and so dangerous that Anthropic must not only be excluded from government contracts but basically razed to the ground. The math doesn't math, as they say.
Brooke Gladstone: Coming up, more from Alan Rozenshtein on the wild, wild world of military supply chains. This is On the Media.
[music]
This is On the Media. I'm Brooke Gladstone, picking up on my interview with Lawfare's Alan Rozenshtein. Right after Anthropic was kicked to the curb, its main rival, OpenAI, was waiting in the wings to pick up the defense contract for itself. OpenAI says there's language in its contract that includes even stricter provisions than Anthropic's.
Alan Rozenshtein: Well, so the question of what OpenAI agreed to is very unclear. You are correct. OpenAI, just as Anthropic was being designated a supply chain risk, announced that it had reached an agreement to use its systems on Pentagon classified networks. OpenAI CEO Sam Altman has insisted that OpenAI had succeeded in implementing its own red lines, which were very similar to Anthropic's and, in fact, had gone even beyond what Anthropic wanted. OpenAI then released a few paragraphs from the contract publicly.
The problem for OpenAI is that the contract terms that they released did not support their claim about redlines. Then, in recent days, OpenAI has said, "Well, actually, some of the critics had a point. We've gone back to the Department of Defense. We've gotten them to agree to stronger protections for surveillance specifically." I would say that there are three possibilities here. Three doors, if you will. It's to me completely unclear which door the car is behind.
Door number one is that OpenAI has succeeded where Anthropic failed. It has succeeded in getting the government to agree to meaningful red lines that limit the government's use beyond 'all lawful uses.' That would, of course, raise a question and give Anthropic strong litigating ammunition: if the government is okay with OpenAI's red lines, why isn't it okay with Anthropic's red lines? That's door number one.
Door number two is that OpenAI did not, in fact, negotiate meaningful red lines, but it has presented itself as having done so, presumably for PR purposes. That would be very bad. OpenAI, as an institution, has a lot of skepticism around it for not always being candid. If it is door number two, that would add to that skepticism.
Door number three, and if I had to pick a door, I would suspect it's door number three. That OpenAI thinks that it got real concessions from the Pentagon, but the Pentagon disagrees, and that there is not what we in the law call a meeting of the minds between contract parties. This happens all the time with complicated contracts.
You think you have an agreement, but each side reads the contract slightly differently. That's a big problem for OpenAI because the government tends to already have some structural advantages when it comes to contracts with private entities. That's a big problem if OpenAI finds out down the line that the government is not upholding what it thinks is the government side of the bargain.
Brooke Gladstone: Now, one of the reasons for one of the red lines that Anthropic imposed is that Anthropic chief Dario Amodei doesn't think Claude is ready to handle autonomous weapons operation, that it's just not there.
Alan Rozenshtein: That is correct. Anthropic's position is often, I think, mischaracterized as being against autonomous weapons. I actually don't think that's Anthropic's position. I think they think that there may be an appropriate role for autonomous weapons in military actions, that those weapons actually can be more protective of civilians, for example, than human beings. I tend to agree that that is a possibility and should be seriously considered, that we should not simply rule out the possibility of autonomous weapons.
My understanding is that Anthropic's main concern is that its systems are not currently ready for autonomous weapons. I'm not even sure the government disagrees with that. I think the dispute, and here I think both sides have reasonable arguments, is who should decide when the autonomous weapon that uses Claude as its brain is ready for primetime.
Brooke Gladstone: I would assume that would be the person who knows Claude's brain best.
Alan Rozenshtein: I'm not sure about that, actually. Anthropic probably understands the abstract capabilities of its model the best, but the military quite possibly understands how that integrates into military strategy and doctrine best. Also, it's the military, not Anthropic, that ultimately has the final say of what counts as a sufficiently accurate system. Under both military guidelines and under the international laws of war, a system does not have to be foolproof to be lawful. It has to be sufficiently foolproof relative to military objectives.
Obviously, this question is a combination of technical considerations, where I think Anthropic will have a lot of expertise, but also military and military law questions. The substantive dispute is who should have the veto. I think that's a fair question to ask.
Brooke Gladstone: I just fear a lack of expertise. I see what you're saying. Ultimately, and as a legal matter, we don't want corporations running our foreign policy or our wars. I get that. Right now, it seems like OpenAI has won that $200 million Pentagon contract, and Anthropic might be struggling with their designation as a supply chain risk, at least in the next few months. You're proposing that in the long run, this conflict could actually strengthen the brand?
Alan Rozenshtein: Oh, very much so. Anthropic might lose some money from this contract, but even more valuable for all these companies than computational resources is human talent. AI is a field where literally tens of billions of dollars of value can be locked in the head of a singularly talented engineer. A lot of these engineers have already worked for these companies. They're already multimillionaires. They don't need more money. What they want is to work for companies that are at the frontier of research, and also companies where they can feel good about working for them.
I think what Anthropic has done by really standing up for principles, you can agree with the principles, disagree with the principles, but you can't question that they have stood up for their principles, is that they have uniquely marked themselves out as the only company in Silicon Valley that has really put its money where its mouth is. It's not to say that other companies wouldn't necessarily do that, but we don't know.
Brooke Gladstone: Well, they've had chances in the past, and they haven't.
Alan Rozenshtein: That's also true. I would expect that this will hugely, hugely, hugely benefit Anthropic, both in terms of its ability to recruit the top talent, but also more generally in its brand among normal people. One thing I'll note is that the musician Katy Perry had a post on X saying how she was signing up for Claude, which is to say, I think this has broken through to normies, as they say. Reputationally, this is a huge windfall for Anthropic, and I think it's a well-deserved one.
Brooke Gladstone: In your Lawfare podcast, you said you saw this moment as the opening shot in what will be the most important conversation about AI regulation: how much the industry will or should be nationalized in the future.
Alan Rozenshtein: Let me back up just for a second and say what I mean when I say the word nationalized. When I'm talking about whether AI will be nationalized, I'm not limiting myself to Cuba-style nationalization.
Brooke Gladstone: Taking over an oil well.
Alan Rozenshtein: Exactly, where the government comes in and expropriates private property. That's conceivable, but I don't think that's what's going to happen in the United States. There are various constitutional barriers. What I mean is that the most important questions in AI, who will develop the most advanced models? Who will get access to the most advanced models? Will the public get access, or will the best stuff be reserved for the government? Those kinds of questions in a world where the government is making those decisions, I consider that a nationalization.
Of course, we've had this before. The whole history of the military industrial complex, from World War II through the modern day, is of figuring out the parameters of control between the government and private industry. There's a wide range of plausible answers that analysts can advocate for in good faith. We obviously have been talking about AI now for several years. We have been debating various forms of regulation, whether it's state-level bans on this or that, whether it's questions about copyright, all of those are obviously very important.
The more fundamental question of just how much authority will the government exercise over the core building and operating of these models, that to me is the most important question. I think this dispute between Anthropic and the Pentagon, and also this side question of what is the agreement between OpenAI and the Pentagon? This is the first time that we as a society have truly grappled with this fundamental question.
Now, because of how frankly, silly this Pentagon is behaving, we're having this debate in the dumbest way possible. The fact that the current debate is being done on such foolish grounds does not change the fact that this is a debate that desperately needs to happen, and where the outcome is far from clear.
Brooke Gladstone: Even if Anthropic does okay, the decision in this case simply opens that debate up; it doesn't close it down.
Alan Rozenshtein: This may be a silver lining of the situation that, despite the shambolic nature of what the administration is doing, it's at least forcing a debate. The Democrats are getting involved in this now. This is becoming something that not just nerds like me who are obsessed with AI talk about. I think people are realizing that there are real rule of law implications to this. Again, this is not how I would want to have the debate, but it's better to have the debate than not have the debate.
Brooke Gladstone: Alan, thank you so much.
Alan Rozenshtein: Thanks for having me.
Brooke Gladstone: Alan Rozenshtein is an associate professor of law at the University of Minnesota Law School and a research director and senior editor at Lawfare.
Now back to Siva Vaidhyanathan with background on the evolving use of AI-powered weapons in Ukraine and Gaza. Starting with Ukraine.
Siva Vaidhyanathan: Ukraine instantly turned itself into a laboratory of technological innovation. The Ukrainian military, which has done a remarkable job against a much better-funded and staffed military, has done so by outsourcing so many important things to private companies. This experimentation has resulted in one of the most, I think, important military weapons developed in the last couple of decades, and that is a hovering drone, a drone that stays far enough up in the atmosphere that it's almost impossible to even sense from the ground, and it's poised to strike.
That means Russian troops in the field are under constant possible attack. Now, of course, the Russian military developed the same technology, and so troops and civilians in Ukraine are under the same constant threat. We can move forward to Gaza, where Israel has had its own programs of mass surveillance using artificial intelligence for more than a decade to try to identify hostile parties in the West Bank, in Gaza, and, frankly, in Lebanon as well.
For the last three years, Israel has been trying to pick out whom to attack in a densely populated urban area like Gaza, which is really hard. What they've done is outsource it to machines that they hope are quite precise. Now, what we do know, given the large number of civilian casualties, the medical facilities and human rights organizations that have been disrupted, bombed, et cetera, and the number of journalists killed, is that either Israel is being duplicitous about its targeting intentions, or the systems just aren't as good as they hope.
Brooke Gladstone: You write that what was normalized in Gaza will not stay in Gaza. You say this isn't just about an export of technology. It's an export of a habit of mind.
Siva Vaidhyanathan: That basically says we can do everything faster, we can do everything easier, we can take human beings out of the loop as much as possible.
Brooke Gladstone: We're living in an age where accountability seems an impossible dream. You call autonomous weapons 'accountability-dissolving machines.'
Siva Vaidhyanathan: That's right. If stuff is happening so fast and so furiously, it's a lot harder for inspectors general, for judge advocates general, for attorneys general, and for courts to make individual, discrete judgments in a timely fashion, to actually affect anything in the world. The more we raise the metabolism of warfare, the harder it is to oversee the proper execution of warfare, the moral execution of warfare, if there is such a thing, the humanity that might be embedded in warfare.
Look, we've been struggling definitely since World War I, with increased attention after Vietnam War, with all of its excesses of civilian casualties and human rights violations, to embed within the US Military a sense of honor, a sense of duty, a sense of legality, a sense that lethality should be deployed only under the most extreme circumstances. The US Military has done an outstanding job of that since the end of the Vietnam War.
They've done it while certainly under pressure from Congress and the public. The US Military did a remarkable job of policing itself right up through the Afghan war. All of that is out the window now. AI being introduced at this level, at this moment, only makes it more dangerous.
Brooke Gladstone: I'm going to raise this experiment at New College London, which tested out three leading AI systems from OpenAI, Anthropic, and Google in war simulations. They found that the AI didn't have nearly the same reservations as humans and used nuclear weapons in 95% of cases. Dean Ball, who helped develop President Trump's AI policy in a senior post at the White House last year, posted on X, "If near medium future AI systems can be used by the executive branch to arbitrary ends with zero restrictions, the US will functionally cease to be a republic." Where do you think all this is going?
Siva Vaidhyanathan: To Dr. Strangelove. I think we're in a moment where, if you remember Dr. Strangelove, and I know you do, the whole film ends with a moment at which the momentum of decision-making for nuclear war was unstoppable. This imposition and installation of AI decision-making systems that minimize human influence over what weapons we're going to use, and when we will use them, is like satisfying the dream of the extremists among the nuclear strategists of the 1950s and '60s.
Look, we've also had, to the great credit of the United States, an actual debate for 70 years over when and how to use the most lethal weapons. We actually had debate at the highest levels. The fact that we trusted a president, not just a president, like eight presidents, with the lives of the entire world, understanding that most human beings would make the right decision when offered the right information. That level of trust that we put into presidents of both parties, many of whom were not really great people, says a lot about the fact that we've managed to live this long and we had a system that mostly worked.
Are we in that situation? Do we have an executive we can trust with the future of the world who actually thinks about the children in a girls' school in a remote corner of Iran? Do we have a president who thinks about the long-term repercussions of the use of certain weapons and the deployment of certain technologies? I think it's safe to say we do not. All of that difficult work that has been done in the military, in the RAND Corporation, in the halls of Congress, in White House after White House after White House for decades has been stripped out. We are in a very dangerous moment in which to rely on these systems.
Brooke Gladstone: Siva, thank you so much.
Siva Vaidhyanathan: [laughs] I wish I could say talking about this was a pleasure, Brooke, but it is always a pleasure to speak to you and be part of this really important program.
Brooke Gladstone: Siva Vaidhyanathan is a professor of media studies at the University of Virginia. Wondering why some would-be dictators succeed and others fail? Coming up, a history lesson with news you can actually use. This is On the Media.
[music]
This is On the Media. I'm Brooke Gladstone. This week, as the war widened and the death toll rose in the Middle East, House and Senate Democrats proposed resolutions to curb our president's ability to wage more war without legislative approval. Those resolutions failed, adding to the heap of headlines marking what many scholars have called America's democratic backsliding.
Those same headlines spurred Vox senior correspondent Zack Beauchamp to delve into previous power grabs by would-be despots to see who won or lost or how some nations managed to climb back up that slide. Then he put all he learned in a book, The Reactionary Spirit. Zack, welcome to the show.
Zack Beauchamp: Nice to be here, Brooke.
Brooke Gladstone: You decided to study democratic resilience because you were getting sick of all those articles about Trump being bad for democracy. Although weirdly, your conclusion is that we need those articles about Trump being bad for democracy.
Zack Beauchamp: I know, it was really funny. This project was born out of a crisis of confidence in the journalism profession in Trump's second term. We had spent roughly 10 years doing our best to try to illustrate that this man was an exceptional danger to American democracy. I don't mean that in the sense that reporters were out to get him. I started to wonder, what's the point of all those warnings if we're in the middle of a crisis? So I went to go study not "okay, things are bad," but "what do we do to make them better?"
Brooke Gladstone: You used recent history as a guide, starting with research done by Laura Gamboa, political scientist at the University of Notre Dame, who compared two democracies directly. Colombia resisted. Venezuela didn't. Tell me what you learned.
Zack Beauchamp: Basically, the situation in Venezuela and Colombia under Presidents Hugo Chávez and Álvaro Uribe was that you had an elected executive who was doing the sorts of things that are very familiar: trying to concentrate power in their own hands, limit checks on their own authority. Where they diverged was in the opposition strategies. In Venezuela, the opposition pushed really hard to go outside of the system and try to topple the president in one fell swoop. We're talking things like a military coup.
That ended up backfiring. It gave the Chávez government the justification that it needed to say, "The opposition is not trying to play within the rules of the democratic game. They're enemies of the state. We need to stand up for ourselves, and we need to go after these traitors." The end result was that Chávez developed a level of domestic political legitimacy to crack down on democracy that he didn't have otherwise.
The Colombian opposition adopted a very different strategy. What they wanted to do was slow down Uribe's agenda in the legislature, to prevent him from passing laws that could concentrate power in his own hands until the next elections, which they gambled they could win. They did. What might have been a democratic emergency in Colombia ended up being a near miss.
Brooke Gladstone: You found that people actually like democracy if they can be shaken out of their complacency to see that it is actually under threat. The example that you like to give is Peru.
Zack Beauchamp: We're talking 2022 under President Pedro Castillo.
News clip: Pedro Castillo has announced the dissolution of Congress and called for legislative elections to draft a new constitution. That comes hours before an impeachment debate. Now, members of the Constitutional Court described the move as a coup.
Zack Beauchamp: There's immediate outcry, and he backs down really, really quickly. He was impeached on the very same day, which is absolutely wild to think of in the American context. That illustrates that when you're too flagrant, you can incite opposition.
Brooke Gladstone: You argue that aspiring dictators for this very reason are particularly wary of damaging their self-image.
Zack Beauchamp: I've spent a lot of time studying Hungary and Israel, which are two of the most notable examples of democratic backsliding right now. One thing that those leaders share is a deep commitment to using plausible-sounding democratic justifications for their power grabs. For example, when Viktor Orbán works to limit the rights of civil society groups that might criticize him, he says, "Well, it's because they're supporting the invasion of our country by foreign hordes, or they're acting on behalf of Brussels and these people who will control us and destroy our sovereignty."
In Israel, it's the Supreme Court. When the justices try to limit Netanyahu's power and preserve the foundations of the system, the argument is that the court is unelected bureaucrats, really dictators who are running Israel from behind the scenes, and we need to restore power to the voters.
Brooke Gladstone: That is familiar rhetoric.
Zack Beauchamp: You can hear Stephen Miller say the same thing on an almost daily basis when they get an adverse ruling from a lower court. Classic populist argument used to maintain this democratic facade.
Brooke Gladstone: You say that political scientists and democratic activists typically focus on structural factors, like economic development or institutional design, a presidential versus a parliamentary system, to explain why authoritarians succeed or fail. You think it's the legibility of the threat to democracy that matters most?
Zack Beauchamp: Very much so. What happened in Minneapolis was a determined level of civilian resistance to what residents saw as an authoritarian policy by the government, one that was abusing their neighbors, disrupting their communities, and ultimately claimed the lives of two American citizens. That motivated significant resistance.
Brooke Gladstone: Overreach by the would-be autocrat.
Zack Beauchamp: That's right. One common pattern I found in countries where an authoritarian's bid to consolidate power in a democracy fails is that they end up doing something that's just so obviously authoritarian that it sets off massive levels of resistance that can't be overcome. This didn't obviously bring down the Trump presidency, but it's really important for listeners to understand this isn't like a movie where there's one big fight. It's a war of attrition. Each little struggle matters.
Brooke Gladstone: Talk about another example of democratic resilience in Brazil, and Trump's great good friend, whom he tried to get pardoned.
Zack Beauchamp: All right. That's Jair Bolsonaro, who was elected to the presidency of Brazil in 2018. I went to Brazil as part of this reporting project because I thought it was a country that had the most similarities with the US in a lot of important ways. Bolsonaro was frequently called the Trump of the tropics when he was running for office, and he lived up to that reputation. He'd been an open fan of Brazil's military dictatorship that ended back in 1985.
When Bolsonaro won for reasons mostly related to corruption in the rest of the political system, there was a real awareness that this guy might be dangerous. The place that you saw that most clearly was in Brazil's Supreme Court. Some of what they did, I want to be clear, was not great. At the same time, the justices were instrumental in stopping some of Bolsonaro's most authoritarian moves. I'll give you one striking example.
When he was up for reelection, which he was widely expected to lose, he sent the national highway police to block the buses taking voters to the polls in Brazil's northeast, the opposition stronghold. Within hours, the Supreme Court said no, and people got to the polls. They voted. Bolsonaro's opponent, the current president, Luiz Inácio Lula da Silva, called Lula, won, but by a very narrow margin.
Brooke Gladstone: If Bolsonaro was the Trump of the tropics, the South Korean president Yoon Suk Yeol was the Bolsonaro of East Asia.
Zack Beauchamp: That's right. I wrote that in the piece a little bit tongue-in-cheek, but it's accurate. Yoon, like Bolsonaro, was an open fan of a military dictatorship, one that in Korea's case lasted into the 1980s. When Yoon was elected, largely due to problems people had with the left-wing opposition party, he was not very popular, and he was someone who was very concerned about communist infiltration. The Korean left is always concerned about the right wanting to bring back the dictatorship.
The right is always concerned about the left trying to open the door to North Korean influence on politics, so it can make the stakes really high. Yoon was, even by the standards of this high-stakes politics, extreme and paranoid. Then, after he loses the 2024 midterm elections, he decides to declare a state of emergency one night in early December. It's very dramatic. There's a proclamation of martial law that bans the media from publishing, that declares opposition political activity temporarily unlawful. This immediately, immediately prompts pushback.
The most famous example of this is the legislators who jumped the fence of the legislative building in order to go in and vote to end the state of emergency. The head of the opposition livestreamed himself doing it. One thing that I think gets a little bit less attention is the way in which ordinary Koreans reacted. Thousands of Koreans showed up on the streets of Seoul. Yoon had, as part of the martial law order, ordered the arrest of leading opposition figures. The protesters put themselves in the way, blocking the security services' access to the broader city, to opposition lawmakers, and so on.
Brooke Gladstone: They just walked directly in front of trucks and stood there.
Zack Beauchamp: That may have prevented Yoon from successfully executing on his blueprint for seizing control.
Brooke Gladstone: Your last example that we'll be examining here is Poland versus Hungary. What Trump has been doing has been likened again and again to what Orbán did in Hungary. The thing that was most noticeable to us was the regular attacks on the press.
Zack Beauchamp: Yes. What he did was coerce various different Hungarian media outlets into selling to conglomerates that were owned by allies of the government. Then the tycoons who owed their position specifically to the prime minister would fire the journalists not willing to toe the government line, and would remake the entire institution.
Brooke Gladstone: That is how Hungary succeeded in defeating democracy. How did Poland wrest it back?
Zack Beauchamp: The Polish government that was in power until recently tried to follow a similar playbook. They hollowed out the state broadcaster. It was much more subtle than what you saw in South Korea or Brazil, and much more like this Hungarian model. It worked for them for a long time. They even won reelection in 2019, after first being elected in 2015.
They really created a deep sense of threat among one really important group, which is the political opposition. Prior to 2015, Poland had a multi-party system. These opposition parties spanned the ideological gamut. They were not inclined to cooperate. Once the threat became clear in 2019, they came to an agreement not to run against each other in Poland's election.
Brooke Gladstone: Just mind-boggling.
Zack Beauchamp: The alliance that they made really spanned ideological divides. It would be as if the Mitt Romney faction of the Republican Party, back when it existed in meaningful numbers, broke with Trump and started campaigning with the Democrats. We're talking large numbers of representatives and senators, and they said it was because of democracy. Explicitly. They knew what the stakes were, and they took back the Senate, which is less important than the lower house in Poland, but still, it allowed them to obstruct legislation.
Then, in 2023, they worked really hard to campaign against the then-governing party as a change bloc. The opposition was pushed to do that by the government targeting the judiciary and trying to take it over, but also leading an incitement campaign against Donald Tusk, who was the most visible opposition figure. They tried to paint him as a foreign interloper due to his work with the European Union. It didn't work very well. All it did was convince the opposition that the government really was out to get them. They pushed a law that would have disqualified him from running for office. If they can do that to one person, well, they could do that to anybody, right?
Brooke Gladstone: Isn't that stuff happening here? Trump is saying, "Will no one rid me of this meddlesome priest?" over and over and over again, about all his enemies. What can the average person who feels alarmed about the state of their democracy take from this history that you've assembled?
Zack Beauchamp: The big lesson here is that people actually do care about democracy. It's not the canard that has come up since the 2024 election that democracy isn't a motivating factor.
Brooke Gladstone: Just talk about the economy, they say.
Zack Beauchamp: Look, that might be true if you're, let's say, a Senate candidate in Ohio or Texas, that your message should be different, because democracy's motivating power depends on what I've called the legibility of democratic threat. If you can point out that democracy is under threat, if you can make that case to people in a way that resonates with their experience, you can galvanize and motivate politically significant actions.
Brooke Gladstone: How do you make that case?
Zack Beauchamp: You talk about it. That's why we're right back where I started at the beginning, when I said I was frustrated with saying democracy is under threat all the time. Now I'm convinced you need to say democracy is under threat all the time. Don't be annoying about it. When your friends are like, "What should we do this weekend? Should we, I don't know, go to a bar?" don't say, "No, no, democracy is under threat." Don't be that guy. Operate within bounds. But you have to talk about this stuff. Treat people as if they are your fellow citizens.
Brooke Gladstone: As if they have power. You believe they do.
Zack Beauchamp: They do. It's not just that we pull the lever in the voting booth once every two or four years. It's that we have the ability to speak, to convince our fellow citizens that something is happening.
Brooke Gladstone: Jefferson talked about eternal vigilance. What evidence do we have that countries can resist these attacks on democracy for more than just one election cycle?
Zack Beauchamp: It's honestly too soon to say. Part of the reason that we don't know a lot about how democracies save themselves is that the very notion of democratic backsliding is relatively modern. It used to be the case that democracies were understood as "consolidated," which is the political science term for: you've had a democracy for so long, and it seems so stable, that surely it'll just stay that way. Now it turns out even old, very well-established democracies can come under existential threat. That's a very modern phenomenon. It's mostly a 21st-century thing.
Brooke Gladstone: Poland's far right is making headway again. You noted that Bolsonaro's son is running in Brazil.
Zack Beauchamp: That's right. One poll I saw recently had him dead even with Lula. It's not impossible to imagine that one of the norms in countries like the United States, where democracy has come under threat, is ping-ponging between authoritarianism and democracy, which is frightening to think about, but it could very well be the future. The truth is, we don't know.
Brooke Gladstone: Good times, Zack.
Zack Beauchamp: Good times. Good times indeed.
Brooke Gladstone: Zack, thank you very much.
Zack Beauchamp: I always love doing the show. It's a pleasure.
Brooke Gladstone: Zack Beauchamp is senior correspondent at Vox and author of The Reactionary Spirit. That's the show. On the Media is produced by Molly Rosen, Rebecca Clark-Callender, and Candice Wang. Travis Mannon is our video producer. Our technical director is Jennifer Munson, with engineering from Jared Paul. Eloise Blondiau is our senior producer, and our executive producer is Katya Rogers. On the Media is produced by WNYC. Micah Loewinger will be back next week. I'm Brooke Gladstone.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
