Cybersecurity Concerns Over Anthropic's Mythos Model
[MUSIC - Marden Hill: Hijack]
Kousha Navidar: It's The Brian Lehrer Show on WNYC. Good morning again, everybody. I'm Kousha Navidar, filling in for Brian today. It seems like every other day, there is some big news in the AI world, usually in the context of whether or not AI functions will replace human jobs. This latest story feels different, at least to me. On April 7th, Anthropic announced they would not be releasing their new AI model, Claude Mythos Preview, to the public because of cybersecurity concerns. One of Anthropic's researchers wrote in a Mythos assessment, "This time the threat is not hypothetical. Advanced language models are here."
As part of The Brian Lehrer Show's ongoing coverage of cybersecurity and digital safety in our lives, let's hear more about this from Miranda Nazzaro, The Hill's senior technology reporter who covers artificial intelligence, online misinformation, data privacy, and other topics in the overlapping spheres of tech and politics. She also co-authors The Hill's Technology newsletter. Miranda, hi. Welcome to the show.
Miranda Nazzaro: Thanks so much, Kousha. Happy to be here.
Kousha Navidar: Happy to have you. Help us unpack this. What exactly is Mythos?
Miranda Nazzaro: Sure. Mythos is not like the AI models that you and I may see, like a ChatGPT or even a Claude, which is another one of Anthropic's chatbots. It's actually a model whose purpose is to spot cybersecurity vulnerabilities. Basically, what this means is it takes on the job that, say, a cybersecurity vendor would have done, trying to find different areas where an internet web browser might be vulnerable to attacks. At the outset, this looks like it could be a very useful product. We are seeing it can basically work 10 times faster than one human alone trying to spot vulnerabilities in any given software.
On the flip side, it carries a lot of risk given that, if it's in the wrong hands, it makes it much easier for hackers to spot these vulnerabilities and then hack software infrastructure, that sort of thing. As you mentioned, it does feel different because this is very security-based. These are the sorts of things we've been hearing about for months, if not years, about both the pros and cons when it comes to artificial intelligence.
Kousha Navidar: Let me summarize there. A Claude or a ChatGPT might help you write a better email, or, like, "Hey, plan my trip to New York City," for instance. You're saying Mythos is about cybersecurity. It is a model that is probably used by IT professionals, companies that are worried about cybersecurity. Is that a fair summary of the distinction?
Miranda Nazzaro: Absolutely. You're not going to see it necessarily. Obviously, it's not even being released to the public yet. Even if it was, you're probably not going to be seeing your average everyday consumer going to use Mythos, and you probably never will. I don't want to make assumptions. AI is pretty unpredictable. I think sometimes when the general public thinks of AI, they just assume it is that chatbot function, but something like this, it can run on its own. It doesn't necessarily need human oversight all the time. It's more of a tool than a chatbot, which most of us are familiar with.
Kousha Navidar: You brought up something that I think really hits the nail on the head when you said the public is probably never going to see this because what really grabbed people's attention with Mythos, my attention at least, was the fact that Anthropic chose to not release their new AI to the public but instead warn of its power. How unusual is that?
Miranda Nazzaro: It's interesting because we actually do see limited rollouts a lot with technology products. It's just that this one, given the very broad statements that Anthropic is making about its capabilities, and frankly scary statements, I think, is what grabbed people's attention. I spoke with a lot of technologists who actually are usually involved in these sorts of rollouts. It is not all that uncommon. The risk of having it in the hands of the public, I think, is what is freaking people out. Now it's not clear if Anthropic will ever actually release it to the public.
I think it would be pretty wild to think that this could be limited for years and years. I do think in the interim, they are taking a very cautious approach, giving it to other technologists to basically test out and see if they can spot vulnerabilities in their own products as well. We're also seeing the government potentially going to be using this. It's not that uncommon, but I think the dangers of what we don't know are what's freaking people out.
Kousha Navidar: Terms like "freaking people out" and "scary" are big words. I want to make sure listeners understand this, make sure that I understand it, really. What is so scary that Anthropic has said?
Miranda Nazzaro: As you mentioned, the quote that you read in the beginning, about how they say the threat is not hypothetical, it is here. Anthropic is suggesting that this tool could make it much easier and much quicker for our foreign adversaries, and hackers don't necessarily have to be foreign adversaries, to take down entire infrastructure. What I mean by that is it could be web browsers, it could be the infrastructure of the big banks on Wall Street. Essentially, that raises concerns about data. It raises concerns about financial problems, about hacking in and transferring money out.
These are all hypothetical examples, or at least I think so. These are the sorts of things that Anthropic is suggesting. There's a reason why Treasury Secretary Scott Bessent convened a bunch of Wall Street leaders just days after Anthropic spoke with the government about this, because I think the banks specifically are a great example of the risks that come with it if their software were to be hacked. I know that "freaked out" and "scary" can be these broad terms, but think of it as a cybersecurity hacker being given the keys to do their job 10 times faster, essentially.
Kousha Navidar: That's a good way of phrasing it, I think. Listeners, we can take some calls about this for Miranda Nazzaro, The Hill's senior technology reporter. Are you listening right now? Do you have a question about Mythos? Call or text us at 212-433-WNYC. That's 212-433-9692. If we have some IT folks out there who work on cybersecurity, what are you making of the news about Mythos? Call or text us. We're at 212-433-WNYC. That's 212-433-9692. Miranda, throughout all of this, there has been some skepticism of Anthropic's true intentions.
I'm thinking Katie Miller, a former top advisor at DOGE and aide to Elon Musk, wrote in a post on X, "It sounds like Anthropic is running a giant public relations scheme to manipulate industry fears, a playbook Dario has used in the past." She's referring there to Anthropic's chief executive, Dario Amodei. Is that the prevalent reaction to this, that it's like a marketing ploy, PR stunt?
Miranda Nazzaro: I would say that there is a contingent of folks in DC who do believe it's a PR stunt. You mentioned Katie Miller. David Sacks, the White House's now former AI and cryptocurrency czar, also expressed this skepticism. I don't think that it's as widespread as individuals like Miller and Sacks might be suggesting. I was curious about this myself and asked people within the administration, as well as those who have recently departed. It seems like the White House is actually taking this threat pretty seriously. I think Anthropic is in a really weird position right now where they're caught in a culture war.
They have an ongoing court battle with the Pentagon over even allowing their technology to be used by the military, as well as the entire federal government. I think that Miller and Sacks are really clinging to this culture war because there seems to be quite the distrust of Dario and his intentions. I don't think that that's as much of a widespread sentiment. I do think it's something that's important to recognize. Axios reported this morning that Dario is meeting with the White House chief of staff, Susie Wiles, today, and that is very significant because it's felt like Anthropic has been sort of blacklisted in DC, but it's clear that the White House still wants to engage with them.
I think it goes both ways. I think in this particular instance, you're actually seeing the majority of people in the White House's orbit want to engage on this and at least want to make sure that it's contained.
Kousha Navidar: You literally read my mind because I was going to bring that up an hour before the show started. I read about the scoop from Axios. Listeners, just to put a point on it that Miranda mentioned there, there's news that Amodei is meeting with the White House today, as Axios put it, "to make peace," because a month ago Anthropic was in the news because of a dispute with the Pentagon over what their AI can be used for, especially in a military setting.
Miranda, I want to hear more about that. What do you think that meeting means, because at the same time, there is this huge culture war among AI companies as well, where Anthropic is making a line in the sand? It appears to say, "We care about the ethics behind AI in a way that would limit the way that we want government to be able to use it for war-making, basically." Talk to me about that.
Miranda Nazzaro: Anthropic has really tried to set itself apart from the other AI firms by putting its safety-first mission front and center. We've seen various technology leaders visit the White House to meet with Trump and other staff over the past year. You'll notice that officials from Anthropic show up a little less often. They are still present often, but it does not feel like they are on the same page ideologically.
Anthropic is unlike a lot of the AI firms in that they actually do support some type of regulation when it comes to AI development, whereas we see a lot of tech firms, as well as the administration, promote more of this light-touch regulation, basically letting developers innovate as they choose, because they believe it's a matter of national security, of keeping up with China in the race. I think today's meeting is a very, very big sign. You can't get much higher than Susie Wiles, short of meeting the president face-to-face.
Wiles is the brain behind a lot of the administration's actions. I don't know at this point who invited or who initiated this, but the fact that they are taking this meeting, it seems now that the dust has settled on this Anthropic-Pentagon battle, that maybe there is room for some sort of reconciliation here. I don't know what this means for the court case itself. It seems like the federal government, not necessarily the defense agency, is feeling like it is important to have Anthropic involved in the conversation, whether that's because of the threat or they think that the Pentagon deal went too far. I do think it's a major step and potentially the first of what could be a rekindling. I don't want to speak too soon, but at least more of a cordial understanding between the White House and Anthropic.
Kousha Navidar: At least optically, it appears to be a big deal. Listeners, this is The Brian Lehrer Show on WNYC. I'm Kousha Navidar, filling in for Brian today. We're speaking to Miranda Nazzaro, senior technology reporter at The Hill, about Anthropic's Mythos model. Miranda, we're starting to get some texts and calls in. I'm going to read a text first-
Miranda Nazzaro: Awesome.
Kousha Navidar: -then go to a call. The text says, "I'm totally freaked out about Mythos. People compare it to a nuclear arms race. It's only a few weeks before there's a DeepSeek version of Mythos, and then what? Everyone's home router is hacked. The best defense is to keep updating, but not everything can be updated quickly and often." We thank the texter for sharing that anxiety that they're feeling. I want to pivot from there to a caller. David in Kingston, New York. David, hi. Welcome to the show.
David: Hi, can you hear me?
Kousha Navidar: Yes. Hi. Welcome.
David: Hi. Great. Thanks for taking my call. My comment is simply that if this thing is so dangerous, it takes no imagination to get the idea that it's going to get into the wrong hands. There's plenty of people working with this at Anthropic and all over the place. Any one of these people are just going to do it as a blackmail or ransom, but they ultimately take the thing and put it into the wrong hands, and then it's out there everywhere, and then the stuff really hits the fan.
With regard to what you guys were recently talking about, this is going to Trump and Washington. That's probably the worst place for it to go, because there they're going to make totally the wrong decisions, bad decisions with it. Then, like I say, it's going to be out there, and the stuff is really going to hit the fan. I hope you get what I'm trying to say.
Kousha Navidar: Yes, David, I appreciate what you're trying to say. What I hear you saying is that there is not just risk, there's almost a certainty that it could get into the wrong hands because it's so easy. Miranda, I guess hearing the text first, hearing David, it makes me wonder when it would ever be safe enough for any of these models, as they're getting more powerful, to be released to the public. How is that conversation going within the walls of these companies?
Miranda Nazzaro: Yes, for sure. I appreciate both questions there. I think they do highlight the anxieties around this veiled messaging that comes out from these AI companies, where they're talking about these hypothetical scenarios. It's important to note, I don't think, at least from my perspective, that these AI firms, Anthropic included, are suggesting that they are going to be able to control this sort of technology from being developed in other forms, in other countries, or falling into foreign adversaries' or bad actors' hands.
I think that they are trying to give this technology right now to some of the biggest companies, some of the government, and banks because they want them to use it as a defensive mechanism, basically, "Prepare your infrastructure," because, yes, the inevitable is that our foreign adversaries and bad actors are probably already working on similar technology. Somebody mentioned DeepSeek earlier. The reality is that China is the closest to the US when it comes to how far along they are in their development, and they probably are working on something like this.
In terms of releasing it to the public, I think there is a chance they will not release something like this, or if they do, it might not be until, say, they get evidence that our adversaries or bad actors already have these capabilities. I guess what I'm trying to say is I think that these companies are also admitting it's inevitable, but they're trying to get ahead on the defensive side of things, if that makes any sense.
Kousha Navidar: They're trying to play what cards they have, knowing that this technology is going to improve inevitably, and the best that maybe they feel is put it in the hands of the big institutions that affect a lot of people to help on the defensive play proactively. I think that's what I heard from you. Is that a fair summary?
Miranda Nazzaro: Yes, for sure. I don't even exactly know how the general public would use this model, I guess is what I'm also trying to say. It feels very structured for corporations, for companies, for governments. Even if they were to release it, I don't think there is any benefit until they know it's not going to be giving bad actors a competitive advantage.
Kousha Navidar: Got it. Miranda, you had mentioned before that we had seen a strong response from the government, with Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell calling an emergency meeting with Wall Street executives. What exactly were they discussing?
Miranda Nazzaro: As far as we know from multiple reports at this point, Bessent and Powell were basically telling these Wall Street companies what they've heard about Mythos. We know now that the White House, including Bessent and Vice President Vance, was briefed days before this actually went public. They were basically telling these banks, "There is this new cyber threat coming, whether it's Mythos directly and/or just other bad actors developing similar technology. You need to essentially be amping up your own security."
This includes giving them access to a model like Mythos that allows them to pinpoint where those vulnerabilities are. I believe it was Bloomberg who reported that Bessent actually encouraged the use of Mythos for this. Again, they're really trying, I think, to hone in on that defensive side of it, using it as a tool before it becomes a weapon, if that makes sense.
Kousha Navidar: It does. When I think of Wall Street, I think of power. If federal leaders are going to talk to such a big source of power about, "Hey, you need to prepare for this," does that change how you view the urgency of this? Or for you, is it more like, "Yes, that makes sense. Eventually, this flag was going to be raised towards Wall Street"? Tell me about that.
Miranda Nazzaro: I think that it is expected. I also think, given we've been hearing warnings about AI and its security risks for years now, at this point, it feels like a long time coming. At the same time, I think it's unique given the stance of the administration. I spoke with Dean Ball earlier this week. Dean Ball was one of the authors of Trump's White House AI action plan. He was in the White House until last August. He really got a first look at the administration's policies.
Something he told me is that the administration actually, or at least people in the administration, not all of the administration, believed that AI would soon plateau, that its development has basically peaked. Ball did not agree with this assessment. He said we are nowhere near the peak of AI development. He believes that Mythos was a wake-up call. He said it is notable given how much action the White House has already taken in the past two weeks over this, and they feel, like, "Okay, we need to jump in here, get involved because this is not being handled."
He said the administration "was not prepared to deal with this threat." I think it is expected for a lot of people who watch technology closely. I also think it speaks to the urgency, just given the administration hadn't done a whole lot on this before this moment.
Kousha Navidar: I'm looking at the clock. There are a bunch more questions I want to go through. This is so interesting, and you brought so much good information. I want to talk about Project Glasswing, too, because this is an important part of the puzzle, even with the strong government response. The biggest response seems to be coming from Anthropic themselves with Project Glasswing. That's an initiative that brings together all the huge AI companies. We're talking Amazon, Apple, Google, Microsoft, Nvidia, just to name a few. Tell us quickly about Project Glasswing. What's the goal?
Miranda Nazzaro: Sure. Anthropic announced Project Glasswing on the same day that they made the public statements about Mythos. Essentially, it's supposed to be a collaborative project between different companies, as you mentioned, Apple, Microsoft, that are going to be using Mythos for the defense of their own infrastructure. For example, Microsoft is going to use it to see if they can spot vulnerabilities in their own cybersecurity space, and they're going to bring those findings to Anthropic. Then Anthropic says that they will share this with the public. It's essentially one big collaboration. It's a lot of AI firms, but it's also some cybersecurity firms like CrowdStrike, as well as JPMorgan Chase, the one bank that they've publicly listed there.
It's basically saying, "Hey, we need all of you to test this out. Tell us how well it works. Tell us what could be better. We can improve the model, and then we'll tell the public about it." Again, maybe this is the first iteration before they release it to the public. That's not very clear yet. This is also pretty typical. Tech companies will often collaborate. I know they seem like competitors, but they do often want to share their product, want to improve it, because the reality is that they all need each other in one way or another. An AI company is not an AI company without the cloud, without the software and chips, and so on.
Kousha Navidar: It doesn't exist in a vacuum. Yes, I totally hear that. Let's go to Liron in Long Island City. Hi, welcome to the show.
Liron: Hey, thank you. Thank you for taking my call. Just a bit of background, I work as an artificial intelligence management systems implementer, which is-
Kousha Navidar: Oh, wonderful.
Liron: -basically creating the policy around implementing artificial intelligence so that organizations certify against an international standard. The idea of Mythos creating this concern, I think, is really great. It obviously is a concern in and of itself, and it's a very special concern. I think that it also represents a chance, especially for the current administration of the United States, to start thinking about guardrails because they have been very non-guardrail-oriented about artificial intelligence, in fact, changing some executive orders from the previous administration to open up and create a more relaxed environment.
I really think it's good. One of the big issues that I would say if you were just to erase Mythos out of the current conversation is the fact that agentic AI is already being used for large-scale attacks.
Kousha Navidar: Agentic AI, that is AI that has the agency basically to make choices on its own, to take actions on its own, right?
Liron: That's correct. It can work autonomously. It can work when you are not there, unlike when you're interacting with a GPT, where you are feeding it information and waiting for a response. With agentic AI, you give it a goal, and it breaks that goal into tasks and then can work on it autonomously. Bad actors are actually already using such tools to work at orders of magnitude, millions of times, faster than they were able to before.
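[Editor's note: for readers who want the concept Liron describes in concrete terms, the goal-to-tasks loop can be sketched in a few lines of Python. This is a toy illustration only, not any real agent framework or Anthropic's implementation; every name here (decompose, execute, run_agent) is hypothetical.]

```python
# Toy sketch of an "agentic" loop: a goal is decomposed into tasks,
# then worked through autonomously, with no human prompting each step.
# Contrast with a chatbot, which waits for user input every turn.

def decompose(goal: str) -> list[str]:
    """Stand-in for a planner that splits a goal into concrete tasks."""
    return [f"{goal}: step {i}" for i in range(1, 4)]

def execute(task: str) -> str:
    """Stand-in for a worker model or tool call performing one task."""
    return f"done ({task})"

def run_agent(goal: str) -> list[str]:
    """Autonomous loop: plan once, then drain the task queue
    without any human in the loop."""
    queue = decompose(goal)
    results = []
    while queue:
        results.append(execute(queue.pop(0)))
    return results

print(run_agent("audit login page"))
```

The point of the sketch is the control flow, not the stubbed-out functions: once the goal is handed over, the loop runs to completion on its own, which is what distinguishes an agent from a prompt-and-respond chatbot.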
Kousha Navidar: Liron, let me pause you there. I so appreciate your call and bringing that up. I think guardrails is something that really comes out to me when I listen to you. We're running out of time, so I want to make sure that we give Miranda a chance to respond to this. We've got a lot of things moving here. Miranda, we've got Scott Bessent, Jerome Powell, warning Wall Street. We have Project Glasswing, and guardrails really sticks out to me that Liron just mentioned AI regulation. Do you think this all could spur action at the federal level? I know it's a long answer, but what's your 30-second version of it? [laughs]
Miranda Nazzaro: I get it. I think that this has certainly raised the issue or brought it to the forefront for both the White House and Congress. I think they've been wanting to create a federal framework for a while now. I think that this presents a really good opportunity to say, "Hey, we have this threat right now. We really need to control this technology before it gets into the wrong hands." I do think it could spur momentum. I also am realistic. As somebody who works in the Capitol most days and speaks to lawmakers, it is a very long process, and there are a lot of political arguments right now when it comes to how we should do this.
I think many agree there needs to be some sort of regulation. I do think it's a long road. I don't want to be pessimistic.
Kousha Navidar: It's a long road.
Miranda Nazzaro: Congress isn't even able to fund a government agency right now. There's just so much disagreement. We could see more action from the White House. Congress, I'm not quite sure yet. It's not going to be soon.
Kousha Navidar: Definitely not a 30-second answer for them. I can tell you that. I appreciate you trying to wrap that all up. Liron, thank you so much for bringing that up. Sorry that we had to cut off your call, but we have to leave it there for today. Thank you, Miranda Nazzaro, The Hill's senior technology reporter. Thank you.
Miranda Nazzaro: Thank you so much.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
