A Call for a Humanist Movement to Balance AI

(Michael Dwyer, File/AP Photo)
[music]
Brian Lehrer: It's the Brian Lehrer Show on WNYC. Good morning again, everyone. We'll get a take on artificial intelligence now and a possible humanist response to it from the executive editor of The Atlantic, Adrienne LaFrance. The context is that AI is becoming a part of everyday life now for many. Real estate agents are using it to write listing descriptions and perform core marketing functions. The health-conscious are using it in some cases for meal and workout planning. We know all about students using it in all kinds of ways. Everyone is debating how dangerous all of this is.
We've seen what a mess unregulated technology can create through the rise of social media and its addictive, amoral algorithms. How do we prevent AI from creating dystopia, as powerful as it is turning out to be? What principles should guide artificial intelligence? In her article, The Coming Humanist Renaissance, the executive editor of The Atlantic, Adrienne LaFrance, shares a philosophy and some proposals for how we should move forward with AI, and we'll hear about her ideas right now. Big thinking on how to interact with, regulate, and possibly resist, out of humanism, artificial intelligence. Adrienne, thanks so much for joining us. Do we have Adrienne? Adrienne, you there?
Adrienne LaFrance: Can you hear me? Hi.
Brian Lehrer: Hi. Now I can hear you. Hi there.
Adrienne LaFrance: Thanks for having me.
Brian Lehrer: You start your piece by comparing our current situation to the time of the Industrial Revolution, comparing Google's search engine to a modern-day Library of Alexandria, and new AI to a mercurial prophet. Why those particular historical references first? The Industrial Revolution and the Library of Alexandria?
Adrienne LaFrance: I think there's lots to see today that is an echo of the period around the Industrial Revolution, just in terms of how rapidly technology is advancing, AI in particular. We're on this continuum of still trying to understand what it means to live with the triple revolution of the internet, smartphones, and the social web. It's, similarly, just a period of really rapid technological change that, as history can show us, can be very disruptive and disorienting, and I think that's what we're going through today.
Brian Lehrer: You warn that artificial intelligence may be the most consequential technology in all of human history. That's a big statement considering other inventions that we live with: electricity, the internet, plumbing, hot and cold running water, even the atom bomb, like you mentioned.
Adrienne LaFrance: Sure.
Brian Lehrer: Why do you think this might rise to the most consequential technology in all of human history?
Adrienne LaFrance: We're not there yet, but it's advancing extraordinarily quickly. If it does meet the outcome that many people believe will happen, which is that artificial intelligence will eclipse human intelligence, not just in some tasks but generally, then I do think that would be the most consequential thing that humans have ever invented. That's not to dismiss, of course, penicillin; you could name a whole list of things that have changed our species and the world for good or for bad, but in this case, creating a machine that is smarter than our species, I think, would beat them all.
Brian Lehrer: We'll get to specific points of restriction or resistance that you get to in your piece, and we'll invite some listener phone calls for you. I want to read, listeners, just the headline and subhead of your article in The Atlantic. The headline is, The Coming Humanist Renaissance, and the subhead is, We need a cultural and philosophical movement to meet the rise of artificial superintelligence. One is a should statement, something that we need in your opinion. The other, the real headline (I realize you probably don't write your own headlines), is a prediction: The Coming Humanist Renaissance.
Adrienne LaFrance: Right.
Brian Lehrer: Do you think something like that is coming?
Adrienne LaFrance: It's me being a bit optimistic and hoping that it's coming. [chuckles] The other headline we used for this piece, just in case it gives you a flavor of my thinking is, In Defense of Humanity, and that sounded a little bit dark also. Yes, I think we have a choice in this moment. One of the things that's so compelling to me in this moment is that it doesn't take long for new norms around new technologies to solidify. Right now we're still in a space with regard to AI where the norms aren't fully established yet. I think that's a really important and exciting time to be in, where we have agency here.
We don't have to just let this new technology wash over us. We are in a position to say how we think it should be built, and what it should and shouldn't do. I think that's incredibly powerful and important to take stock of in this moment, what our values are, as this new technology advances.
Brian Lehrer: You open the piece with an epiphany that Ralph Waldo Emerson had in 1833 visiting Paris. Tell us that story, and why you think it's relevant to this moment.
Adrienne LaFrance: Right. I was drawn to this. Emerson visited the Jardin des Plantes in 1833 in Paris and was looking sort of at the different specimens, seashells, and butterflies. You can picture the way a 19th-century museum would've curated its artifacts. They did so in a way that showed the connection between different items from the natural world. As Emerson is walking around and taking it all in, he gets flooded by this sense of interconnectedness, not just of nature, and how different species relate to one another, but in particular humanity's role and interconnectedness within nature.
He has this naturalist epiphany and can't stop thinking about it. This experience is part of what led him to write Nature, and some related lectures that preceded that very famous essay of his, and it really became the seed for his transcendentalist thinking. What is exciting to me about thinking in those terms today is that it wasn't really that Emerson was fully rejecting technology and just saying we should all be out in the woods admiring trees, although he loved to do that as well. He was very animated by the technological change of his time. The railroads were the big one that loomed culturally large for everyone.
To me, it offers the beginnings of a blueprint for us to claim human agency and be reminded of the power of the individual at a moment when huge shifts are taking place around us and the ground feels as though it's shifting underneath us with regard to what's changing technologically. That's why I was drawn to Emerson's experience, though it was quite long ago.
Brian Lehrer: Listeners, who wants to talk to Adrienne LaFrance, executive editor of The Atlantic with this very provocative piece, The Coming Humanist Renaissance? Does anybody else think that we're on the verge of a humanist renaissance or beyond that? What should a humanist renaissance look like as generative artificial intelligence grows in power and influence? 212-433-WNYC, 212-433-9692, call or text that number. She says, as I mentioned in the subhead, we need a cultural and philosophical movement to meet the rise of artificial superintelligence.
What should that look like to meet this moment, in your opinion? 212-433-9692, call or text. As some calls are coming in, Adrienne, let's go through some of your proposed restrictions on AI to defend humanity. You start off with transparency, writing that people ought to disclose whenever artificial intelligence is present or has been used in communication. Why start there?
Adrienne LaFrance: I'm thinking probably disproportionately often, for obvious reasons, about the arts and writers and musicians. In terms of what one creates, I think AI could be an amazing tool or an amazing creative partner, in some cases, depending on the art form. I want to be careful not to-- You think back to the rise of photography or film, and there were definitely people at the time, when those technologies were new, who would be purists saying, "Oh, it's not really art if it's not painted." This even happened with early books.
People thought that they were automatically lower-brow just because it was a new format. I think it's important not to miss the chance to use this technology in interesting ways for the making of art, but I do believe strongly that, at least in this early stage where we're all still figuring out what it is, we should be transparent about when it's present. That goes, too, not just for the arts, but in the workplace. You can imagine, and this is already happening, how AI is used to track people's keystrokes on their computers, and whatever other ways a big corporation might use AI to track worker performance. I think there are real questions of privacy and maintaining human dignity that have to be asked and answered before this technology becomes more widespread.
Brian Lehrer: A really interesting one to me in your piece is that you mentioned that a computer scientist you spoke with is planning a secret word to share with her parents so they know if it's actually her if they ever hear her voice pleading for help or money. On the one hand, this can seem extreme, but I'll tell you, this is just a tiny little interaction that I had with customer service for a company that I bought something from over the weekend. I had a question about the product, and I went to their customer service page, and it disclosed a response coming from a bot.
Adrienne LaFrance: Interesting.
Brian Lehrer: At a certain point, the bot couldn't answer my questions anymore, and it said, "Hang on." Then suddenly, there was another response that said, "Hi, this is Eric. How can I help you?" My first response, which I never thought of writing before in a chat with a company, was, "Are you a human?"
Adrienne LaFrance: Wow. The AI-generated voices now are so unbelievably convincing. You can take less than an hour of a real-life sample of someone's voice and make a really convincing duplicate, basically. In addition to the scam I mentioned in my piece, where someone might try to trick someone into giving them money, and we'll certainly see that sort of thing, there are implications for voice security in banking, or any number of other examples. Really, a person's voice used to be something that only they had, and that's no longer the case.
[music]
Brian Lehrer: Brian Lehrer on WNYC. I think we had a little technical glitch there. Sorry about that, but we are back, and we continue with Adrienne LaFrance, executive editor of The Atlantic, on her article, The Coming Humanist Renaissance. We need a cultural and philosophical movement to meet the rise of artificial superintelligence. I was just starting to ask you about a line in your piece that said, "We must recommit to making deeper connections with other people." Why include that and in what ways?
Adrienne LaFrance: I see this moment as an opportunity, not just to think about AI and the near and distant future, but also to course-correct some of the norms and standards we've adopted around more recent new tech, and I'm thinking in particular about the social web. This has been thrown into relief for me after having gone through the pandemic: it's simply not the same to be apart from people. We, I believe, as a species, benefit from being together in person. Especially in the realm of friendships, it's not the same to comment on someone's Instagram, or like their Facebook picture, or whatever the case is; you have to do the work to see people and spend time with them and be face-to-face.
It feels like an obvious thing to say, but in this moment, especially as things that we think of as what makes us human are being called into question by this new technology, I think we have to just go back to the basics.
Brian Lehrer: Let's take a phone call. Daniel in Queens, you're on WNYC. Hi, Daniel.
Daniel: Hi, there. First-time caller, long-time listener. Actually, as I was listening, I was thinking not only about the rise of AI use in so many different applications, but I just went to see the new "live-action" Little Mermaid movie, and throughout the entire thing, all I could think was there was so little human in it and so much computer-generated imagery, or CGI. I can only imagine how much AI is going to influence the production of video. There's so much that's already produced by AI, in images and videos, and I'm sure that's going to quickly find its way into more large-scale production.
I would think, at least for myself, that there would be a strong appeal for a return to, I guess, more real images, having movies that are advertised saying, "We filmed this production on real sets with real actors and real props and no computer special effects were used."
Adrienne LaFrance: Yes.
Brian Lehrer: Yes.
Adrienne LaFrance: Oh, I'm sorry, Brian.
Brian Lehrer: Oh, go ahead, Adrienne. No, you go.
Adrienne LaFrance: I was just going to agree with you. I think you're right that in this moment, real things made by real humans, in maybe old-fashioned ways, suddenly become a differentiating factor in a world that's going to be swimming with AI-generated junk. I think you're totally onto something.
Brian Lehrer: You insist, and thanks for your call, Daniel, that we "should trust human ingenuity and creative intuition and resist overreliance on tools that dull the wisdom of our own aesthetics and intellect." I wonder if you think that applies to the Little Mermaid movie, [chuckles] or where is that line? You did, earlier in our conversation, I think, endorse the idea of using AI to some degree because it's an interesting tool to enhance our creativity.
Adrienne LaFrance: Absolutely. I haven't seen the new Little Mermaid yet, so I can't speak to that, but I think the line is going to shift, honestly, and I think it's on us as humans to determine where it ought to be. What I mean by that is you might have a purist who only makes films using old-school film, but even just the digitization of the film industry has changed what people expect. I don't know where the line is now. This is why I gravitate toward radical transparency from people who are experimenting with this technology, so that its use is not covered up, skewed, or misleading people in some way. I imagine the line will change over time as we become more comfortable and better understand this technology.
Brian Lehrer: Let's take another call. Renee in Manhattan, you're on WNYC. Hi, Renee.
Renee: Yes. Hi, Brian. So nice to talk to you and your guest. I just gave a talk over the weekend on this topic, and so I'm full of ideas. I very much agree that the potential for a new humanist renaissance is here, but it has to happen in a much wider context. I think, a philosophical context. I think we have to look at almost the ancient things, ethics, epistemology, the ways in which we build a community, and also a kind of an honor system. In my talk, if I can tell you very quickly, I raised five points. I said that we need to have new futurist methodologies.
We need to have a way of visioning and building scenarios and understanding this. Right now, we're getting a lot of warnings, but we need a broad way to vision this. We need to look at the centers of innovation where the most up-to-the-minute research is occurring. Like the Oracle of Delphi, we have to put together the great minds who are studying this, not just any one company. I also think we need to go into fields as wide as astrophysics, bioengineering, and neuroscience so that we understand the research on a much broader level.
Media literacy is a field I've been in for a long time, and it's not enough now. Finally, I'll just say we need to really conjure a global sense of the common good. At the Congressional hearings on AI a couple of weeks ago, Sam Altman, the OpenAI head, said, "Regulate me, regulate me," but he's also making the tool. We need broad conversations like this one and so many others. I'm very excited that Adrienne LaFrance is writing on this because everybody is writing on it, and we need some cohesion. Today's New York Times is a perfect example: Kevin Roose's article about everybody flocking now, again, to San Francisco for the next new thing.
Brian Lehrer: Renee, thank you for all of that. Yes, it's so hard to come to a general human consensus on anything. We were just talking in our last segment about how the Democratic Party supermajority in the state legislature in Albany couldn't come to consensus on things like housing, but you two humanists talk to each other for a minute. Adrienne, Renee is thinking some [laughter] big thoughts.
Adrienne LaFrance: Yes. No, and I love hearing you talk about it. I think one thing, in particular, I appreciate is the proactive stance you're taking. I agree with so much of what you said. I think it can be very easy for people to identify problems or see around the corners in the future about what could go wrong, but we have to also, in parallel, be figuring out, okay, what systems do we build? What people do we bring together to create the future that we want to live in? I think you're thinking about it in a really important and smart way.
Renee: Thank you.
Brian Lehrer: Renee.
Renee: If there are more conversations, I'd love to participate in them because I think we need to reconvene this conversation. I don't think it will naturally happen. I think it has to be organized and brought publicly to the fore.
Brian Lehrer: Thank you.
Renee: Thank you. Thank you so much for listening.
Brian Lehrer: We're having a piece of it anyway here. There's a caller who we won't have time to put on the air but who reminds us of a quote from your piece about utopians wanting us to use AI to "outsource busy work to machines for the higher purpose of human self-actualization," which I guess you label as utopian because the other way of looking at that is, yes, that's going to create widespread unemployment.
Adrienne LaFrance: Not just that. I think in some cases, certainly, people will use tools that help with their mundane tasks, whether it's organizing email or whatever it may be.
Brian Lehrer: The vacuum cleaner.
Adrienne LaFrance: Sure, exactly. I think you also have to take this in the context of knowing that if, all of a sudden, a doctor, for example, has an AI writing their patient notes at the end of the day, that doesn't mean the doctor is going to have his or her day freed up to just go lounge. The hospital administrator is going to say, "See twice as many, or four times as many, patients." We have to be realistic: when you're not your own boss and your time is suddenly freed up by AI, and you're still lucky enough to have a job, it doesn't mean less work. It just means more, different kinds of work.
Brian Lehrer: I actually want to end this conversation, as we run out of time, off-topic, because back in March you wrote another article that I read on extremist violence in the United States, potentially leading up to the 2024 election. Recalling the events leading up to the 2020 election, particularly the brawls between extremists on the right and on the left on the streets of Portland, I wonder what you think those events were a manifestation of that could recur. And with the indictment of former President Trump that we spent a lot of the early part of the show talking about, as he runs for president again, I wonder whether you think there's a heightened risk.
Adrienne LaFrance: To answer your first question about Portland and the chaos there in the summer of 2020, my view is that those events were not isolated to just that summer. Sometimes people who weren't there say it was just an outgrowth of the Black Lives Matter protests. Those who were there on the ground say that's simply not the case, that the protests were hijacked by more extremist individuals and egged on by right-wing radicals, basically. I think the larger point, the reason I looked to that example in the first place, is that I don't think the incidents of political violence we've seen in the country in the past many years are isolated.
I think they're part of this larger thing that we're experiencing as a society, really a society on the brink in a lot of ways. Through my reporting in that piece, the thing I heard again and again from people who look most closely at these trends is that there's real concern, leading up to 2024, of more violence. In particular, when you see the most recent indictment against former President Trump, you absolutely see his base really rallying around him, and in many cases, including members of Congress, using charged language, either explicitly or implicitly calling for violence, for, really, war. I was concerned when I wrote that piece, and I remain concerned.
Brian Lehrer: Do you ever wonder, "Gee, humans have made such a mess of this world in so many ways, maybe artificial intelligence could actually lead us to something better"?
Adrienne LaFrance: I wouldn't frame it like that. What I'd say is most humans are good. I think we tend to focus on the things that are broken because we want to fix them. It can seem really dark, but in fact, focusing on the biggest, seemingly most intractable problems in society, I think can bring out the best in humans. I'm an optimist. I actually think it's all going to be okay in the end but not without a lot of hard work.
Brian Lehrer: Adrienne LaFrance is the executive editor of the amazing Atlantic magazine, which just continues to be incredibly deep and thoughtful-
Adrienne LaFrance: Thank you for that.
Brian Lehrer: -issue after issue and day after day online. Her article, just out, is called The Coming Humanist Renaissance. We need a cultural and philosophical movement to meet the rise of artificial superintelligence. It's in the July-August issue. Thank you so much.
Adrienne LaFrance: Thanks for having me.
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.