The Future of AI in Journalism
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. Now, we're turning to a question that's really relevant to the work we do here every day at The Brian Lehrer Show and elsewhere at WNYC. How is artificial intelligence changing journalism? We recently did a segment on AI in fiction. Some of you heard it, where the boundaries maybe feel a little clearer because it's a purely human creation, right? A work of fiction.
Journalism is a little different. It's so dependent on gathering, sorting, and synthesizing information that it's not always obvious to everybody where AI fits in or where it crosses a line. Because when we talk to journalists on this show, there's an assumption baked in that their reporting, the writing, the judgment, is all fundamentally human work. What happens if that starts to shift?
Now, to be clear, there are parts of journalism that AI can't really replicate, like going out into the world, knocking on doors, building trust with sources. Let's assume for the moment that journalists ultimately write their own stories. A lot of the rest, like first drafts of articles, summarizing documents, editing copy, even suggesting angles, can now be done or at least assisted by AI tools.
A recent piece in The Wall Street Journal by media reporter Isabella Simonetti profiles a Fortune magazine editor, Nick Lichtenberg, who has published more than 600 stories in about six months. How? Mostly using AI to generate drafts. Those AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of last year. Now, that's a relatively extreme case. At the same time, there has been backlash.
The New York Times recently cut ties with a freelance reviewer who used AI to help write a book review in a way that was considered plagiarism. There's been scrutiny around possible undisclosed AI use in opinion pieces and personal essays, even at top-tier outlets. This isn't just happening at fringe blogs or struggling local papers. It's across the industry. According to a 2025 study led by the University of Maryland, some 9% of newly published newspaper articles were either partially or fully AI-generated.
Journalists themselves seem pretty divided about how to handle it. There's also a business layer here. Publishers are cutting deals with AI companies. News Corp, which owns The Wall Street Journal, Fox News, et cetera, struck a deal with OpenAI, reportedly worth more than $250 million over five years, allowing its content to be used in AI systems. That's News Corp content contributing to AI, not AI being used for News Corp. Of course, then other organizations will use it. At the same time, other outlets like The New York Times are suing over copyright concerns. It's a complicated conversation. It's got layers. Let's discuss how AI is being used in journalism and how newsrooms are trying to draw the line.
We're joined by Isabella Simonetti of The Wall Street Journal, who did that profile of the Fortune editor who's gone all in. We've also got Margaret Sullivan, columnist at The Guardian US and author of the Substack American Crisis, where she recently wrote a piece called In Praise of an Utterly Human Non-AI Voice. Some of you know her as the former public editor, the internal critic who had the right to publish her thoughts about The New York Times in The New York Times. She is very critical of how some news organizations and journalists are using AI in that Substack. Good to have you both on the show. Margaret, welcome back. Isabella, welcome to WNYC.
Isabella Simonetti: Thank you.
Margaret Sullivan: Thanks, Brian.
Brian Lehrer: Isabella, let's start with your reporting. Maybe tell us a little bit more about Nick Lichtenberg at Fortune. How exactly is he using AI in his day-to-day work? What will he use it for? What won't he?
Isabella Simonetti: Sure, so I was really interested in writing about what Fortune is doing, having AI help write stories, because it's a contrarian narrative in this very complicated and layered topic. While media executives and reporters are rightfully anxious about AI, this one publication and this one editor, Nick Lichtenberg, is going all in in an unpretentious way. I spent the morning with Nick in February, and I watched how he does his work.
A lot of his stories are generated from press releases or analyst notes, and he'll upload them into Perplexity or NotebookLM, which are AI tools, and prompt them with a headline that he comes up with and say, "Write a 600-word story framed around this headline," and then he takes that draft, edits it, will sometimes add in his own reporting from conversations he's had. It quickly can become a publishable story on Fortune's website.
Brian Lehrer: You're here to make a journalism distinction. You're here as a reporter. Margaret, you're here as an opinion columnist. As I mentioned in the intro, you argued pretty strongly in your recent piece against the use of AI in journalism. Where do you draw the line?
Margaret Sullivan: Well, thank you, Brian, and thanks for having me. Isabella's story was fascinating and got a lot of attention for good reason. It's a great conversation to have. I am not saying AI has no place in journalism, so I want to be clear about that. Speaking personally, as someone who writes a column for The Guardian and someone who writes a weekly post on Substack under my American Crisis flag, I have chosen not to ask AI to write my work for me. It's important to me personally that what appears under my byline comes entirely from me. Now, that doesn't mean I don't do a Google search and read what comes up. We're all using AI to some extent.
I've also taken a hard look at this at Columbia Journalism School, where I was running a center for a while. One of the things that my colleague and I determined was that this is all very much in flux in newsrooms. Standards and practices editors are struggling with it. There's a push and pull between rank-and-file reporters and ownership that is interested in maximizing profit. There's a lot of tension around this. I guess the one place I would draw the line is if a person's byline, an actual human being's byline, is on a story, I think that person should have written that story.
Brian Lehrer: What counts as written?
Margaret Sullivan: [laughs] Well, yes, you've come up with the words and typed them in. I guess you can call that writing. It doesn't mean that you don't do research that can be informed by AI. I want to make the point that there have been Pulitzer Prizes won by news teams that have used AI in a very good way as a tool to crunch data, to analyze data. Many journalists use it to transcribe or to translate. I think those things are fine. AI is an incredibly powerful tool. I just don't think that it should be presented as this is the original work of a human being.
Brian Lehrer: Let's open up the phones and our text thread for producers and consumers of journalism, which means everybody's invited on one side or another of that. If you work in journalism, how, if at all, do you use AI? How conflicted do you feel about any way that you are using AI? What are the standards of your news organization? If you work for a news organization, what do you think they should be?
Anybody else, how do you feel about AI in journalism? As a consumer of journalism, when you read or listen to or watch the news, do you expect that everything was reported and written and researched entirely by a human, or are you okay with journalists using AI tools, maybe in some of the ways that Margaret was just describing, or where do you draw the line or become queasy or uncomfortable or judgmental? 212-433-WNYC, 212-433-9692. You can call, you can text.
Isabella, we talked about that relatively extreme example of that one person at Fortune. You also spoke with other newsroom leaders, including local outlets like Cleveland.com that are using AI to draft stories as well as process information so that, they say, reporters can focus more on reporting. What was the thinking there?
Isabella Simonetti: I think there's been a lot of fear and anxiety around AI in newsrooms and how it's used. There are some leaders of newsrooms, like the editor of Cleveland.com, who have said, "Let's take a step back. This is obviously changing our industry, so let's see how we can use it to our benefit." His argument was that AI can turn a bad draft into something usable in seconds and can give reporters its undivided attention. He was also very much all in on AI and said that it's helped his publications cover counties in Ohio that had previously gone uncovered. That's one side of the coin.
Brian Lehrer: How does it do that? If you don't have a reporter out in the hinterlands outside downtown Cleveland, how does AI help you cover those areas?
Isabella Simonetti: You can set up different AI tools to scrape local websites and news reports and try to find information about what's happening in local counties, and then use that to feed tips to reporters who then can take the tip and report it out and do what we do as journalists.
Brian Lehrer: Margaret, have a problem with that?
Margaret Sullivan: I think I know the situation in Cleveland. Chris Quinn has been very outspoken about using AI and, in fact, even wrote a letter to readers about a case in which a young journalist who had wanted a fellowship at Cleveland.com withdrew because she'd been told there was going to be heavy use of AI in the work she would be doing. There's a lot of emotion around this. I think that AI is a really valuable tool, and it's a really slippery slope as well.
Brian, you mentioned this case at The New York Times of their cutting ties with a book reviewer who reviewed a book, incorporated some material that had come from, presumably, a prompt on AI, on a bot, or something. It returned to him a chunk of copy that had actually been in The Guardian, in a review of the same book. That's one way that things can go wrong. I also think and know that AI is wrong a lot.
That's why my colleague and I at Columbia had this informal rule: there must be a human in the loop before anything goes out to the public. I would say that's a bare minimum. I wouldn't like to see stories that are produced by AI just shoved out there, and believe me, that has absolutely happened. It's really early. It doesn't feel like it, because it now feels like I've been talking about AI constantly for quite a while, but it is really early in this whole landscape. I think the rules and practices and all of that are being formed as we speak.
There's also a big conflict between the unions that represent reporters and some editors in newsrooms, and management. That's happening right now at the chain known as McClatchy, which publishes the Miami Herald and the Sacramento Bee. The union is really rebelling against the idea that reporters' previous work would be repurposed under their own bylines through the use of AI. It's really tricky and really complicated.
Brian Lehrer: Here's an interesting text from a listener who writes, "Disgraceful for anyone to use AI instead of doing their own work. For those of us who don't want to read AI-generated anything, it should be required that anything printed or published utilizing AI is disclosed." Isabella, are there practices that are springing up, if you've reported on this, at news organizations to disclose to readers, listeners, viewers, clickers, in what ways AI was used in generating a story?
Isabella Simonetti: If we go back to the example of Nick Lichtenberg, the editor at Fortune I wrote about who's all in on AI, he told me that he initially included disclosures at the bottom of his stories and often shared a byline with Fortune Intelligence, which indicated that the story was assisted by AI. He has slowly phased that out and uses the disclosure less frequently because he feels the work is his own.
To your point, and I think you brought this up earlier, the University of Maryland study that shows there's a good chunk of content on the internet that is generated by AI, and the people who are reading it don't know. I think it's a real, legitimate question to be asking: What is the obligation of the journalist or the creator of the content when they're using AI to disclose that to readers?
Brian Lehrer: Joe in Middle Village, Queens, you're on WNYC. Hi, Joe.
Joe: Hi, Brian, longtime listener.
Brian Lehrer: Glad you're on.
Joe: I'm a journalist. I do book reviews, movie reviews, et cetera, and I've been doing it for quite a few years. How I use AI is more as an editor. I will write the entire review first, and then I will use one of the programs to offer suggestions. Then either I might like some of the phrasing and the words and edit my text accordingly, or it might even point me in another direction, or possibly suggest just a single word that I like better than the way I did it, but I would never just copy and paste something in that was totally AI.
Brian Lehrer: Tell me if I understand you correctly, you use AI to edit, but not to write drafts, or sometimes to write drafts?
Joe: No, never to write drafts. I have to write the draft myself first. Over the years, from writing and working for different outlets, it's great to have an editor to overlook your work before you post something or send something out. That's basically the way I use it as an editor to offer possible suggestions.
Brian Lehrer: Explain a little more of that. How does AI function as an editor on something you've written a draft of?
Joe: Maybe it might move my words around in a particular paragraph or rephrase it, and maybe change one or two words so that I would then maybe look at a thesaurus or something to come up with another word that I like even more, so I think it--
Brian Lehrer: You might take its suggestions or you might not.
Joe: If I take their suggestions, I will go back and tweak it and rephrase it then myself, changing whatever words I want. Maybe on occasion, I'll use one or two words that they suggest.
Brian Lehrer: Sure. I'm just curious how you get it to be an editor. Do you give a command, edit this draft for clarity and accuracy, or what do you actually ask your AI tool to do?
Joe: Basically, I do all my own research and double-check everything. Basically, I'm asking it to rephrase it, and either to shorten it or to offer any other kind of suggestions in the phraseology or--
Brian Lehrer: Right, those are the specifics, but what do you actually enter as your request to the AI? Edit this document?
Joe: Well, I use the Google Form. Basically, I can highlight the text and tell it to rephrase it.
Brian Lehrer: Rephrase it. I see. Joe, thank you very much. Margaret, any problem with what Joe is doing?
Margaret Sullivan: That works for Joe, and he sounds like he really is very moderate about what he actually would then present as one of his reviews. That probably works fine for him. Some of this is personal. I don't want to do that. I want to write my own work. I am lucky to have a terrific editor at The Guardian and someone who reads behind me for my Substack. I think I used this somewhat overused expression before, but I think that's all a slippery slope.
I like the idea for myself of drawing the line and saying, "My writing and my work is my work." I've said that to my readers on Substack. You can take this to the bank: when you read my stuff, it's mine. It's not AI. I think people appreciate that. At the same time, I just want to mention a workshop I participated in at Columbia in which we had a bunch of facts, quotes, and other things.
We fed it into AI, I can't remember exactly what the tool was, and said, "Write this in the style of the Associated Press. Write this in the style of The New York Times. Write this in the style of NPR." It came back, and it was remarkably fast and remarkably true to those voices. As I say, it is an amazing tool. I think my feeling is it has to be used with a lot of care and with some sort of guardrails.
Brian Lehrer: Dorian in Minneapolis has an interesting story to tell as a teacher, I think. Dorian, you're on WNYC. Hello.
Dorian: Hi, Brian. Hi, Margaret. It's funny. The example I was going to give actually resonates with the one you just gave, Margaret, where we've had the pleasure of meeting at Columbia J School, which is one of the places I teach a business of media course. I assigned my students as an exercise to do a business analysis, have AI do it, and then for the students to analyze and critique that AI draft.
I witnessed two students in front of me in one of my classes arguing about whether it was appropriate for me to do that or not. One student took the position of, "How dare you? We're journalism students. How dare you assign us AI?" Another student said, "No, no, the point was to teach us. We're going into the workplace where AI is going to be used. It's to teach us what it does and how we might need to work with it and understand where it does and doesn't work."
Brian Lehrer: Dorian, thank you very much. Isabella, beyond the factual errors and so-called hallucinations, which are basically factual errors, there's also a concern that AI writing can be very convincing. Does that seem like something people are excited about or concerned about? Is this anything that's come up in your reporting on journalists' use of AI?
Isabella Simonetti: What do you mean by convincing, Brian?
Brian Lehrer: The way I've seen it described is it can be wrong, but it looks very much like it's right.
Isabella Simonetti: Interesting. Yes, obviously, I spoke with Nick during the reporting for my story on Fortune about hallucinations. There was actually a piece in The New York Times a couple of weeks ago that compared literary passages with passages written by AI. It was a quiz where you went through and picked, "Do you prefer this passage or this one?" A lot of the ones that people picked ended up being the ones written by AI. There is this question of whether AI can be a great writer at times, depending on how you feel and what you think of it. I think that's a legitimate concern. You can read something, and it sounds very authoritative. If you don't really know your facts, you can't catch it making a mistake.
Brian Lehrer: Let's see. We're getting a few phone calls in reaction to the earlier caller who described using AI as an editor. Jennifer in Jersey City, you're on WNYC. Hi, Jennifer.
Jennifer: Hi, Brian, how are you?
Brian Lehrer: Good, thanks.
Jennifer: My example is I'm not a journalist per se, but I work for a scientific conference that is very specific, high-level biotech. I do a lot of writing for that. I took a course once called Ethical AI. It taught me how to feed it examples of my writing and then use it as an editor to say, "Can you rewrite this in my tone?" which, again, I don't use per se; I just use it as another editor who's looking at my work. I will use pieces of it. I will use examples, but I found that really helpful.
Brian Lehrer: All right, thank you very much. Another one using it a little bit like that. There's also, Isabella, the question of bias, right? AI systems are trained on massive data sets. They can reflect certain cultural or even political assumptions. Is there concern that relying on these tools, even for research, could introduce hidden biases into news coverage?
Isabella Simonetti: Of course, that's a concern, just like it would be for a human. I think Margaret talked a little bit about this. Most newsrooms that are using AI have a human behind them vetting the content and checking for hallucinations and biases. The bigger question, which I think we discussed, is whether or not they disclose that they're using AI in the first place. I think it's important that there's a human layer to check against bias that is totally possible to come from these AI tools.
Brian Lehrer: Yes, and just let me zoom in on the business side for a second before you go, because as I mentioned in the intro, the parent company of The Wall Street Journal, News Corp, signed a major deal with OpenAI, reportedly worth more than $250 million. As I understand it, that gives OpenAI access to both current and archived articles from publications like The Journal to help train its models and even surface that content in AI-generated answers. Actually, Margaret, let me get a take on that from you, because that deal might have implications for Wall Street Journal copy and Wall Street Journal voice and politics to the extent that it has politics showing up in other news organizations' content.
Margaret Sullivan: Right, you make such a great point when you say AI is only as good as what it incorporates. It's kind of garbage-in, garbage-out, potentially. If the material that it's being fed, or that it's feeding on, is full of cultural biases, then the product is going to be full of cultural biases. I think we have to put this whole thing, too, in the context of news companies and news organizations and the media writ large being in a really tumultuous moment financially.
Anything that looks like it's going to help them stay afloat or bring in so-called efficiencies, which could mean eliminating reporter positions or editing positions, is going to seem very attractive. You can't really just look at it as, "Is this a good idea from the perspective of journalism?" but rather, "What are the other motivations that are happening behind the scenes?" I think that's a very potent part of this whole discussion.
Brian Lehrer: Respond, Margaret, to this one more text with your opinion, and then we're out of time. "If you train your AI agent to write in your own personal style and then utilize what you have trained it to sound like the authentic you and use AI as a tool rather than a 'cheap way out,' then is that not expanding your creative palette?" What would you say to that listener as a last thought?
Margaret Sullivan: I would say maybe for somebody else, but not for me. I don't want to do it that way. I want to actually use my human capability to create and not rely on what amounts to machine learning.
Brian Lehrer: There we leave it with Margaret Sullivan, columnist at The Guardian US and author of the Substack American Crisis, where she recently wrote a piece called In Praise of an Utterly Human Non-AI Voice, and Isabella Simonetti of The Wall Street Journal, who did that profile of the Fortune editor who's gone all in on AI. Thanks so much for having a complicated conversation with us. We really appreciate it.
Margaret Sullivan: [chuckles] Thanks for having us, Brian. Thanks, Isabella.
Isabella Simonetti: Thank you.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
