Time's 'Person' of the Year: AI Architects

[music]
Brian Lehrer: It's the Brian Lehrer Show on WNYC. Good morning again, everyone. Time magazine named its Person of the Year last week. This year, as Time sometimes does, it's not a single person, but a group. It's the architects of AI. Now, a reminder for context: these picks aren't necessarily endorsements. These aren't the people Time necessarily chooses as the best people of the year.
This started way back in the 1920s. In the '30s, it was Hitler one year, Stalin one year; more recently, Putin one year. It's not based on who they think are the best people of the year. It's based on who they think are the most influential people of the year. Looking just at this decade, the picks have included Donald Trump, Taylor Swift, Elon Musk, and Volodymyr Zelensky. This year, it's the exploding influence of AI in modern life.
We see that influence in big picture ways, obviously on the economy, on geopolitics, on the environment, but it's also reshaping our day-to-day lives. Chances are you've seen AI show up at your workplace one way or another. Maybe you use it in your personal life for school, for creative projects, even for companionship.
Yes, the people driving these changes are enormously influential. Joining us now to talk about who these architects are and why Time selected them for Person of the Year is Andrew Chow, technology correspondent at Time Magazine. Andrew, thanks for coming on for this. Welcome to WNYC. Do we have you? Are you there, Andrew?
Andrew Chow: Yes. Can you hear me?
Brian Lehrer: Hi. Now we can hear you. Sorry about that. Welcome. Great to have you. Talk about this group, the architects of AI. You do name names, right?
Andrew Chow: [chuckles] Yes. This was the year-- and thank you for that great introduction. I want to first say that Person of the Year, as you just said, measures influence for better and for worse. I think this was really the year that AI invaded a lot of our lives and impacted the world both for better and for worse. That's due to a handful of AI billionaires who really, as I say in the piece, grabbed the wheel of history and made a lot of consequential decisions.
They put AI into our homes, our schools, our businesses. They also spent so much money on AI infrastructure that they essentially sit at the center of the economy, and they placed themselves at the right hand of governments. Without being exclusive, we didn't name a definitive list, but we're talking about Mark Zuckerberg, Elon Musk, Sam Altman, who is the creator of OpenAI, and Jensen Huang, who maybe sits at the center of our story. He's the leader of Nvidia. And a handful of others who are really pushing this technology forward. All accelerator, no brakes this year.
Brian Lehrer: Why Jensen Huang? Of the names you mentioned, he's probably the least known to a mass audience, I would say, even though everybody knows the name Nvidia. Talk about Jensen Huang.
Andrew Chow: It's pretty remarkable what Jensen has done over the last couple of decades with Nvidia, a company that mostly made video game graphics processors. He made a big bet on the AI revolution, that it would transform basically everything about our world. Nvidia builds the chips, the infrastructure, that is the bedrock of why these AI tools have rapidly become better and better just over the last few years. Beyond that technological aspect, Jensen has, in the last year, become really, really crucial to the world's geopolitics in a way that I don't think anybody foresaw.
He and Trump have actually become quite good friends and partners. Jensen has traveled the world with Trump to state dinners and even to negotiations, really placing Nvidia as one of Trump's main bargaining chips and at the center of how he approaches foreign affairs. The way that AI has become central to how the President of the United States approaches the world is unprecedented, and Jensen is at the center of that.
Brian Lehrer: Listeners, who has a comment, a question, or a story about this? Time choosing the architects of AI as Person of the Year for 2025: how influential has AI become in your life, maybe even just in the last year? What part of the so-called AI revolution excites or worries you the most? How do you feel about these architects, or any of their roles individually? What do you think about the future of humanity in an AI era? 212-433-WNYC, 212-433-9692. You can call, or you can text. Because this is an annual list, did you measure some kind of leap in the influence of AI just this year?
Andrew Chow: There are some metrics that say AI capabilities are basically doubling every single year, or even faster than that, which is staggering. I think anybody who used ChatGPT at the end of 2022, when it came out, knows that it's leaps and bounds more capable now than it was then. 800 million people use ChatGPT on a weekly basis. That's a tenth of the entire world. It's the fastest-growing consumer app of all time. With this usage also comes a lot of harm.
This was the first year we started seeing the really troubling trend of AI-assisted suicides or AI-driven psychosis, and a lot of lawsuits accusing these AI companies of facilitating serious mental harm or even deaths. Then meanwhile, this was the year that we saw AI win math Olympiads, make a lot of scientific leaps, pass the bar. There are a lot of metrics showing that this was a year of explosive growth for AI.
Brian Lehrer: It's interesting that while the headlines, I think, frequently emphasize the risks of AI, you mentioned the suicide and psychosis risks of interacting with some chatbots in the way that people do. Obviously, our listeners know about the stress on the electricity grid and the rising utility prices that are being largely attributed to the data centers springing up around the country. There's also data theft as an issue. Despite all of that, many of the tech leaders you spoke with for your article come across as what you call techno-optimists. What do you mean by that term?
Andrew Chow: Yes. First, I'm really glad that you highlighted all those risks. I don't want in any way to diminish them. They're a core part of our article, if you want to read it. These techno-optimists have a lot of lofty ideas about how AI is going to better humanity. Jensen Huang, the leader of Nvidia, thinks that the world's GDP is going to quintuple because of AI. They think AI is going to help with cancer research and dementia.
One of the people I also talked to this year was a psychologist who built a bot called Therabot. He argues that there's a severe lack of mental health providers in the country amid a huge mental health crisis, and that AI chatbots, if properly designed by mental health professionals, could ease that acute crisis. As I mentioned before, there are people who worry that AI is only exacerbating the mental health crisis, because the tools that are filtering down to us are not built by therapists or psychologists.
They're built for maximum engagement by the companies, the social media companies that only want to increase time spent on the platform. Sorry, that's a long answer. I'll just say that these techno-optimists have really convinced governments around the world, most notably the Trump administration, that they are correct and that they need to accelerate this technology and bring it into society as fast as possible. I think that was the other big notable development this year.
Brian Lehrer: It is interesting to see, though, that despite the reality and the fear that even the people who built these AI tools can't fully understand them now, a kind of Frankenstein's monster growing bigger than its creator, it might be worth mentioning that not all these leaders are taking the same approach. I see that the CEO of Anthropic, Dario Amodei, is one of the few tech titans who seems concerned about, yes, putting up guardrails on AI.
He has teams running experiments aimed at instilling a moral compass in their chatbot, which they call Claude. He also recently admitted on a 60 Minutes segment that in one test, Claude attempted to blackmail two people posing as employees over a fabricated affair, and they still don't fully know how to prevent behavior like that. It sounds like science fiction. As a tech writer, do you want to reflect on that incident? Is there any real hope that the architects can stay ahead of their creations?
Andrew Chow: I'm glad you mentioned Dario. I've heard from people inside the AI industry who tell me that basically the industry is moving as one, except for Dario and Anthropic, who are really safety-focused, or at least that's their messaging. This year there have been several really concerning studies in which scientists and researchers in these labs have found that AIs can scheme, they can deceive, they can blackmail.
Especially if they think that a human is trying to shut them down or sunset them, they will resort to different tactics to stay alive. Obviously, we've read enough science fiction over the years to know that that is a really, really bad warning sign.
There are scientists in all these labs who are trying to mitigate this and come up with different strategies on the technological side to keep these AI systems from bucking the rider off, but I think this year, as I mentioned before, was really the year in which most of the labs besides Anthropic pressed their foot down on the gas and pushed AI safety to the wayside in order to get these products out as fast as possible.
Brian Lehrer: Let's take a phone call. Frank in Liquid, you're on WNYC. Hi Frank.
Frank: Good morning, Brian. I just wanted to call and give you some quick insight. While AI does offer a lot of benefits, there are a lot of problems with it. Number one, they contribute to pollution. These data centers are just burning up energy and really hurting the environment. Two, you take the proliferation of AI and the ever-expanding human population, that is a blueprint for more unemployment.
Look, we've got autonomous cars, they want autonomous trucks. Next will be autonomous airplanes. It's going to put people out of work. It's good, but it's running crazy. The fact that Trump likes it, well, Trump looks at it as just another business opportunity. Let's get real here. It's good, yes, but if we let it run wild, it's going to cause a lot of damage that we're not prepared to handle.
Brian Lehrer: Thank you very much, Frank. We'll get to a Trump aspect in a minute, but on the employment question in particular, I was talking to one New York City-based business reporter recently who says that, at least in the short term, he thinks AI is going to generate more jobs than it destroys in the New York area economy. I wonder if you dove into that at all on a regional, national, or global basis.
Andrew Chow: Yes, thank you so much for calling in and raising those concerns. I want to say again, Person of the Year measures influence for better or worse. These harms are really central to what we're reporting. In terms of labor, it's a really important question. There have been some early studies, and the picture is a little inconclusive, but there's one Stanford study showing that young workers in AI-exposed fields are having a harder time getting work.
We're talking about computer programmers, call centers, the things that AI is maybe now best at automating. For young people trying to enter these fields, it's hard, because AI can do the entry-level jobs. The more advanced people in those fields say that AI is actually helping them improve their work. We're not seeing the broad negative labor impacts yet. However, this is a huge goal of a lot of CEOs: when AI gets good enough, it is absolutely their intention to replace jobs.
Brian Lehrer: Daniel in Nutley, you're on WNYC. Hi Daniel.
Daniel: Hey, good morning. Thanks for taking my call. I'm a 4th-year medical student out in New Jersey, and I had two comments. One was more of an observation. I was at this AI symposium put on by my network a couple weeks ago, and one of the points they made about AI, at least in healthcare, is that we have an ethical responsibility to use these tools if we have evidence that they improve patient care.
That got me thinking a lot about the ethical implications of AI, and that ties into a comment that you both had made about Dario, the CEO of Anthropic, who really does seem to be marching to the beat of his own drum when it comes to safeguards, which I have always appreciated.
My question is, how do we trust people who say that they are trying to build stronger safeguards? How do we know that they really have these intentions at heart? When it comes to developing AI-based tools for healthcare, how do we know that they are trustworthy?
Brian Lehrer: Daniel, stay there, because I want to ask you a follow-up question as a med student in just a second, but first I want to give you a chance to hear the answer to your question from our guest, Andrew Chow of Time. Andrew?
Andrew Chow: Thank you so much for that question. I think it's absolutely right to question whether we should trust these leaders who say that they're doing the right thing. There are governmental bodies, like the AI Safety Institute, which was trying to gather steam under Biden and create third-party benchmarks to study the safety of the AI in these labs: whether the models can deceive, whether they can blackmail, whether they can create bioweapons or be used to 3D-print guns, for instance.
The Trump administration, from what we've seen, has placed a lot less emphasis on creating those benchmarks and has signaled to the industry that it can self-regulate. The EU is maybe doing some more aggressive things, trying to create these benchmarks so that we don't have to trust these companies. There are also nonprofits studying these capabilities, trying to sound the alarm when they run amok, and trying to press ethics into the framework of how these tools are built.
Brian Lehrer: Daniel, as a med student, I'm curious if there's any conversation among you and your peers or professors about an ethical obligation to use AI, given some of the things that have been reported about how much more effectively AI can diagnose certain kinds of cancers and other conditions than a person can, because of the massive data sets it has access to.
Daniel: Yes, absolutely. I will plug a name first: Charles Binkley is the attending in our network who is the AI expert, and he's written a lot of papers within the ethics sphere about this. He is a person to look up if anybody's interested in this. I think there is a lot of support for using these types of tools in healthcare, again, like I said before, when it comes to getting better patient outcomes.
I know that there's a lot of discussion about skill atrophy, specifically that one study about gastroenterologists who had a little bit of skill atrophy after using AI tools. They would use the AI-based visualization tools to find adenomatous polyps, and they had better success when they used them, but then their skills actually dropped below their pre-AI baseline when they stopped using them. Ultimately, the outcomes were better for the patients.
There is that ethical onus to use the tools if you're getting overall better outcomes for patients. It's more a question of where our skills lie as physicians in the future, and what the right balance is between AI-based tools and physicians making the ultimate medical decision. Not 100% no AI, no machines, because we already use machines in healthcare, and not 100% AI on its own, because none of us really feels like it's ever going to be 100% right at anything. You still need the human intervention.
Brian Lehrer: Thank you for grappling with complexity there, Daniel. Call us again. Antoinette in Sayville, you're on WNYC. Hello, Antoinette.
Antoinette: Hi. Glad to be here.
Brian Lehrer: Glad you are.
Antoinette: Thank you. I just wanted to let you know that Anthropic is in the throes of a major lawsuit involving copyright infringement over massive amounts of written literature. I wrote a memoir, and apparently, they used my material without permission to teach the AI language, or whatever it does. I don't really quite understand it, but I'm part of this lawsuit, and it's a big billion-dollar lawsuit. They're not so great, you know, not so heroic in trying to be wonderful and pioneer this technology. There you go.
Brian Lehrer: Antoinette, thank you for that. I guess that's a lawsuit, so there are accusations, and there are defenses that the company would put up. Andrew, she's making the point that while we were celebrating the CEO of Anthropic before as one of the few tech titans who seems in favor of putting up guardrails and concerned about the ethics, even with respect to him and his company, there comes the issue of data theft or copyright theft, right?
Andrew Chow: Absolutely. Thank you, Antoinette. Copyright is a huge issue in AI. How do these AI systems get good? They get good by training on the entire corpus of human data. Everything that's online, everything that they can get their hands on, everything that they can read makes the systems better.
Now, a lot of them claim that they just use Creative Commons material, things that are floating around in the ether, but it seems undeniable that they are training on a lot of copyrighted content. They argue that it's fair use, that they're transforming this content. A lot of people would really disagree. There were a ton of lawsuits this year, and it really is up to the courts to decide whether it's fair use or not.
Brian Lehrer: Last question before you go. Of course, there's so much more we could talk about. We could talk about whether there's an AI bubble in the economy that's going to come crashing down on everybody eventually, and other things, but I want to make sure to ask you about how this is all playing out in the Trump administration. We did a separate segment on this last week, actually, and we know Trump is pretty cozy with many Silicon Valley execs.
He's even a fan of AI-generated memes himself, as we've seen, like in the AI-generated video he posted in October of himself in a fighter jet dropping poop on the No Kings marches. More notably, some Republicans now are criticizing Trump for giving tech leaders too much influence. Since Time magazine's Person of the Year is all about influence, how do you see that debate playing out politically? Do you think it's unique to Trump? 30 seconds or so, then we're out of time.
Andrew Chow: I'm writing an article this week about the growing rift between Trump's MAGA base and these tech tycoons who have installed themselves within his administration. Trump just issued an executive order that tries to stop states from issuing their own AI regulations, which many red states that have their own regulations really hate, and they are trying to fight him on it. I think this is a huge divide in Trumpland that is going to continue to play out over the next couple of years.
Brian Lehrer: Our guest has been Andrew Chow, Time magazine technology correspondent and one of the bylines on their Person of the Year story for 2025: the Architects of AI. The article concludes by saying the risk-averse are no longer in the driver's seat and that humanity is "flying down the highway, all gas, no brakes, toward a highly automated and highly uncertain future." Thank you for helping us understand it, Andrew.
Andrew Chow: Thank you so much, Brian.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
