Europe's Plan to Regulate AI, and Other News

[music]
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. Now, various developments in the news recently about artificial intelligence. Did you know The New York Times is suing the makers of ChatGPT, accusing them of allowing their AI robots to plagiarize from the Times? Did you know Michael Cohen, the former Trump attorney, now says he inadvertently filed false statements in a legal case because the falsehoods were generated by AI and he didn't catch them at first? I didn't know lawyers used robots to write official legal things.
Europe has jumped ahead of the United States with a new set of rules for how AI can and cannot be used. We'll talk about those in what I think will be a very interesting AI catch-up, now made by me and at least one other human, Cat Zakrzewski, who covers AI policy and other tech-related policy for the Washington Post. Cat, thanks for coming on with us. Welcome to WNYC.
Cat Zakrzewski: Thank you so much for having me on the show.
Brian Lehrer: Can we start briefly with the lawsuit by The New York Times against OpenAI, the maker of ChatGPT? The suit alleges "mass copyright infringement" because the system is designed to "exploit and in many cases retain large portions of the copyrighted expression contained in those works." It refers to "unlawful copying and use of The Times' uniquely valuable works." Is this suit the first of its kind in the AI or ChatGPT era as far as you know, and do you understand the legal basis for the suit?
Cat Zakrzewski: With this suit, The Times joins a growing group of artists, authors, musicians, and filmmakers who want credit and compensation from these companies that are using their work to train these AI systems. Although this is the first time we've seen a news organization like The Times sue OpenAI, we have seen big writers like George R.R. Martin, Jodi Picoult, and others also bring lawsuits against the company.
It speaks to this growing question that the courts are going to face this year over what ownership do creators have over their work in this age of ChatGPT and other generative AI tools.
Brian Lehrer: Do you know who was generating what kinds of materials using AI as a tool that wound up copying from The Times in a way they claim is a copyright infringement?
Cat Zakrzewski: One of the things that The Times showed in their lawsuit was that there were instances where ChatGPT was prompted and then effectively spit out New York Times articles word for word. The Times argues that this is a violation of copyright law. It'll be interesting to see what kind of defense we see OpenAI come back with.
In general, OpenAI has been trying to partner and form deals with publishers. We've seen them reach that with some other news organizations and it seems that those talks with The Times fell apart and The Times took this step. Legal experts say that there's a high bar to prove that there's this infringement occurring and I think there will be a lot of scrutiny of what prompts have to be entered into ChatGPT in order to have it regurgitate articles in this way, but it's certainly an issue that's coming to the fore as more and more media organizations are looking to incorporate these products into their news gathering.
Brian Lehrer: Or gee, your college application essay looks a lot like a Maureen Dowd column I once read. [laughter] All right. The Michael Cohen case, here's reporting from NPR last week. It says, "Michael Cohen, Donald Trump's one-time personal lawyer and fixer, says he unwittingly passed along to his attorney bogus artificial intelligence-generated legal case citations he got online before they were submitted to a judge. Cohen made the admission in a court filing unsealed Friday in Manhattan federal court after a judge earlier this month asked a lawyer to explain how court rulings that do not exist were cited in a motion submitted in court on Cohen's behalf."
If we give Cohen the benefit of the doubt of telling the truth here, if we assume that this was inadvertent, do you have a take on how AI makes up non-existent legal cases to include in a legal filing?
Cat Zakrzewski: This is one of the things that we see come up again and again with these generative AI systems. They are able to spit out this human-like text that makes them appear very advanced, but really what they're doing is just scraping data from across the internet and then making predictions about what the most likely next word would be. That can result in these systems hallucinating at times. I've played around with ChatGPT a little bit and asked it about my own biography, and at times it has come back saying that I've won Pulitzer Prizes and have a degree from Harvard University. Unfortunately, none of that is true.
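To make that next-word mechanism concrete, here is a minimal Python sketch of the kind of prediction loop Zakrzewski is describing. The vocabulary and probabilities are invented purely for illustration; real systems like ChatGPT learn these statistics at vastly larger scale from scraped text.

```python
import random

# A toy "language model": for each word, a hand-written distribution
# over plausible next words. The words and probabilities below are
# made up for this example, not taken from any real model.
NEXT_WORD_PROBS = {
    "she": {"won": 0.4, "has": 0.3, "wrote": 0.3},
    "won": {"a": 0.6, "the": 0.4},
    "a": {"Pulitzer": 0.5, "prize": 0.3, "degree": 0.2},
}

def generate(word, steps=3):
    """Repeatedly sample a plausible next word, as a chatbot does."""
    output = [word]
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(word)
        if dist is None:
            break  # no statistics for this word; stop generating
        # The model picks a *plausible* next word, not a *true* one,
        # which is exactly why it can fluently assert prizes and
        # degrees that don't exist -- the "hallucination" problem.
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("she"))  # e.g. "she won a Pulitzer"
```

The point of the sketch is that nothing in the loop consults facts; fluency comes from the statistics alone, which is why the output can read confidently while being false.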
[laughter] We've seen how these systems can often just make things up. As people use them more and more in legal cases and in their work, we're going to see more mistakes like this come up. Michael Cohen isn't the first lawyer who's been caught up in this. In a Manhattan court last year, two lawyers were fined $5,000 in another case because they used ChatGPT and it, too, created bogus case citations. As more and more law firms look at this technology, they might want to think twice before they replace their attorneys or paralegals with a generative AI system, due to this problem.
Brian Lehrer: Funny you just used the word hallucinate to refer to AI making mistakes. That came up on the show last week when we were talking about the various dictionaries and their words of the year for 2023, and one of them was hallucinate because its contemporary use is not just about what happens if you take LSD or something, it's about AI. This is a common term now referring to AI mistakes. People use the word hallucinate, right?
Cat Zakrzewski: Exactly. I think it's something we're going to have to really watch in 2024. I was at a hearing recently on Capitol Hill where lawmakers were talking about ChatGPT hallucinating when asked questions about how to vote, even making up polling locations that don't exist. Cases like this are just an important reminder of the limitations of this technology, even amid all of the hype it's getting.
Brian Lehrer: Just before we leave the Michael Cohen story, do lawyers who obviously need to be very precise in their language, use AI for certain things routinely or increasingly now?
Cat Zakrzewski: Increasingly, we are seeing this happen, and it's really interesting because ChatGPT can be an incredibly powerful analytical tool. Michael Cohen talked about using Bard, a ChatGPT competitor created by Google, as a supercharged search engine, and not really understanding that it could have these types of hallucinations, he said. We are seeing lawyers and a lot of professions increasingly experimenting with these tools to analyze large volumes of data and to write technical documents. You can see how that could have a lot of value in the field of law, but obviously, there are major problems too.
Brian Lehrer: We're talking about AI in the news with Cat Zakrzewski, who covers AI policy and other tech-related policy for the Washington Post. I'm going to take a call or two for you that look interesting before we go on to the main course of this conversation, which is Europe's new rules of the road for AI, passed last month, which could have implications for the US and globally. But 212-433-WNYC is our phone number as always, 212-433-9692. Janaya in Harlem is calling in, says she's an AI consultant. Janaya, you're on WNYC. Hello.
Kanani: Hi, good morning Brian, and happy New Year. Thank you for--
Brian Lehrer: Oh, it's Kanani. Is this Kanani?
Kanani: Yes, it is Kanani. How are you today? You recognized.
Brian Lehrer: I'm sorry, I got your name wrong, but I recognized your voice. Hey there.
Kanani: [chuckles] Thank you so much. Thrilled to just share and kick off 2024 with AI New Year's resolutions, right? There's a lot going on in terms of The New York Times and the implications that will have, broadly, not only for corporations but also for people. The New York Times is a great example: it's not just the company, it's also the industry. The industry is powered by advertising. When you think about how these articles are going to be used, manipulated, and also monetized, how will that impact our ability to actually read and fund the news?
The news is funded based on its reliability and its accuracy. As artificial intelligence rapidly advances, it can also rapidly distort and destroy information as we know it. With AI companies, and the other companies that are going to be eagerly using AI to improve profits, we also have to think about being proactive about the data. Tomorrow, I'm actually really excited, I'm going to share it out right now on your Twitter so that people can join in. The Female Quotient is an amazing organization that is empowering women to occupy the C-suite of Fortune 500 companies.
They're going to be doing their Algorithms for Equality event, and I will be one of the speakers for it, along with other amazing AI experts. We're going to be talking about AI best practices at the beginning of the year so that we can be very clear, as 2024 continues, about what AI looks like when it is inclusive, when it is responsibly utilized, and when it is mindful of marginalized communities. It can be, because the AI model that I manage is called DEI GPT.
We're constantly getting a lot of interest in it because it is an AI tool that is also a solution for diversity, equity, and inclusion issues. As we see what's happening with Claudine Gay at Harvard University on down, we also have to be thinking about how AI can be utilized in a way that is proactive, to help us scale the impact that diversity, equity, and inclusion professionals can have, because we're burnt out. I tweeted it out to you on the Brian Lehrer-- your tweet, but I'm really excited about the Female Quotient. People can join in that conversation tomorrow at 12:00 PM Eastern Standard Time. We'll be engaging in questions and answers and best practices, and then publishing the manifesto. Thank you so much, Brian. Happy New Year. AI resolution, hoo-hoo.
Brian Lehrer: Happy New Year to you. Kanani, thank you very much. Nick in Corning, New York wants to talk about the word, hallucinate, as it pertains to AI. Hi, Nick.
Nick: Hey, Brian. Sorry, I missed your guest's name. I just jumped in, but I wanted to just highlight that comment that you guys had about the hallucinations term because I was listening to the podcast, The Daily. They had something from Sam Altman actually on there, and it sounded like AI is benefiting from this kind of--
Brian Lehrer: [crosstalk]. Go ahead.
Nick: Oh, yes. It sounds like AI is benefiting from this hallucination phenomenon because, on one hand, yes, if you're irresponsible and you just take it at face value without checking your source, you're not doing anyone a service. But if you're allowing it to hallucinate, the same mechanism it's using there is allowing creativity and helping it formulate an answer that the user might not otherwise have come to. I think that when you say you want to leverage it for equality, I'm really excited about those prospects, as well as just having it be available to everyone.
If you're teaching or you're presenting and helping people understand how to write a prompt and how to interact with AI in a ubiquitous but useful way, I'm really happy to hear that that's where you're working from.
Brian Lehrer: Nick, thank you very much. All right. Let's talk about Europe's new rules of the road for AI, passed last month. You reported in the Washington Post, Cat, that they set out to classify risk, enforce transparency, and financially penalize companies for non-compliance. Can you start by explaining what it means to classify risk from artificial intelligence?
Cat Zakrzewski: Sure. The way the EU is thinking about regulating artificial intelligence is almost like a sliding scale, where there are different requirements for the companies based on how risky regulators think an application is. For instance, if you look at something like the social scoring system in China, where you're ingesting tons of data about people's practices and that impacts their ability to gain credit and participate in society generally, that's something that Europe looked at and said, "You know what? That's just far too risky. We want to ban all systems like that."
They also ban things like AI-powered toys that might encourage children to do risky things. Then they look at other systems where they can say, "Hey, there might be some really positive use cases for this, but we need to have special transparency measures around it." That's where the EU created greater regulation when you're using AI for things like making decisions about hiring, or when law enforcement is using AI to monitor potential crimes, or even, as we were just talking about with the Michael Cohen case, they created specific requirements for using AI in the application of the law in the legal system.
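As a rough way to picture that sliding scale, here is a toy Python sketch of a tiered risk classification. The tier names loosely follow the EU AI Act's broad structure, but the specific mapping below is an invented illustration built from the examples in this conversation (plus "spam filters" as an assumed minimal-risk case), not the legal text.

```python
from enum import Enum

# Simplified risk tiers, loosely mirroring the EU's sliding-scale
# structure as described above. Illustrative only.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict product-safety and oversight requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Mapping drawn from the examples in this conversation, not from
# the Act's actual annexes.
EXAMPLE_USES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "toys that encourage children to do risky things": RiskTier.UNACCEPTABLE,
    "AI-assisted hiring decisions": RiskTier.HIGH,
    "law enforcement crime monitoring": RiskTier.HIGH,
    "self-driving cars and medical devices": RiskTier.HIGH,
    "general-purpose chatbots": RiskTier.LIMITED,
    "spam filters": RiskTier.MINIMAL,  # assumed example, not from the transcript
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case's tier, defaulting to minimal risk."""
    return EXAMPLE_USES.get(use_case, RiskTier.MINIMAL)

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.value}")
```

The design point is the default: most uses fall through to light regulation, and obligations ratchet up only as the stakes do.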
Brian Lehrer: Yes. Two of the examples that you gave in your article of AI's highest-risk uses were self-driving cars and medical equipment. Are they banning self-driving cars in Europe under these new rules? Because we keep hearing that this future is coming for trucking in the United States and other things.
Cat Zakrzewski: They are not outright banning it. That's the line that the EU is trying to walk here between making sure that they don't squash technologies that could be really beneficial to society, like a self-driving car or like an advanced medical device, but there are special product safety requirements those AI-powered systems would have to comply with.
Brian Lehrer: You wrote that France, Germany, and Italy had sought late-stage changes in their negotiations to water these AI rules of the road down before they got passed last month for all of Europe. For the EU anyway. In what ways were those three countries trying to water down this new set of rules and why?
Cat Zakrzewski: One of the areas that became most controversial was the requirements that the EU was trying to put on so-called foundation models. These are the AI systems that underlie the chatbots we've been talking about, like ChatGPT. There was real concern among these countries, particularly France, which has some big AI companies within its own borders, that if you were to adopt stringent requirements on these models, you could have a system where Europe is really falling behind the United States and China when it comes to developing new technologies.
They were able to reach a deal that included some major exceptions; in particular, what are called open-source systems were exempted from some of the transparency requirements that other generative AI systems like ChatGPT have to meet. That was seen as a compromise that would allow these companies, particularly in France and Germany, to continue to flourish and grow without having to abide by some of the stricter requirements, like reporting any major cybersecurity incidents to the European Commission and having special evaluations of the models before they're released to the public.
Brian Lehrer: The EU, as you remind us in the article, has more than 400 million people. That's more than the United States, which has about 330 million. Do the rules there have implications for the use of AI here?
Cat Zakrzewski: They likely will. The United States, first of all, is still in the early stages of developing its own AI legislation, and it's likely that we'll see ideas from the European regulation make their way into the debate here and potentially be copied in any legislation that is considered in the US Congress. Then there's the other way we've seen this play out: Europe has moved faster than the US when it comes to regulating digital privacy and other issues around social media and competition, and sometimes we see these companies just decide it's too difficult to maintain different ways of operating in every country.
If the EU is moving first and setting a certain standard for privacy or setting a certain standard for social media, you sometimes see the companies adopt those practices globally. A lot of experts say that this could happen once again with artificial intelligence.
Brian Lehrer: Here are some of the scary implications of AI, at least potentially being brought up by some listeners who are writing us text messages. We talked about the medical equipment. A listener writes, "AI has generated false medical information. Very scary." Another one writes, "Haven't heard much about cultural implications of ChatGPT pushing us toward a monoculture." Another one on the word, hallucinate, says, "Please stop calling it hallucinate. It's BS. AI has learned how," and then that text is cut off. That text, Cat, seems to be saying that AI has learned how to fool us.
The things that are false are purposely false. I think maybe our earlier caller was saying, don't worry about the hallucinations or the false things that AI comes up with because it's part of its learning process on the way to creative contributions. Though, maybe I misunderstood that, but then this notion of generating false medical information, the listener says, very scary. Do you know if that's happened?
Cat Zakrzewski: I have not personally reported on instances where that's happened, but I know that's a major concern. I think the companies have taken steps with systems like ChatGPT to try to avoid that since we know that medical misinformation was such an issue for tech companies during the pandemic. Certainly, you see how it could happen when you're using things like Reddit or other unvetted sources to create the data that are training these systems. If you put incorrect medical information into the system, it's very possible that it will spit it out. I think that is a major challenge in an area where there could be some key implications.
Brian Lehrer: There we will leave it for today on various developments in the world of AI and AI regulation with Cat Zakrzewski, who covers AI policy and other tech-related policy for the Washington Post. Obviously, a lot more to come this year on that in 2024. Cat, thank you so much.
Cat Zakrzewski: Thank you so much for having me on the show.
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.