Tech Leaders Say It's Time to Hit 'Pause' on AI

[music]
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning, everyone. Most of you probably heard the headline story yesterday about AI, artificial intelligence, and I'm guessing that many of you were misled by the way some of the headlines framed it. For example, this is from the New York Times, "Elon Musk and others call for pause on AI, citing 'profound risks to society.'" That's one example. There was also this headline in The Guardian, "Elon Musk joins call for a pause in creation of giant AI 'digital minds,'" or this from NBC News, "Elon Musk and several other technologists call for a pause on training of AI systems."
I could go on with similar headlines from Reuters and the Financial Times and other places, but here's why I think all those headlines may have misled you and tricked your human brain, your natural rather than artificial intelligence into having the wrong first reaction to the story. All those headlines make it sound like the story is about Elon Musk. Given what you probably think of Elon Musk these days, maybe you rolled your eyes. The first few people I mentioned the story to when it was just breaking yesterday were like, "Oh yes. There go the big tech companies trying to get everyone else to stop developing new stuff so they can dominate the market," but I think that's probably the wrong way to look at it.
In broader context, more than 1,100 people signed an open letter from a think tank called the Future of Life Institute, which studies existential risks to humanity. How's that for a job? The letter calls for a six-month pause. Almost all of the signatories are less well known than Elon Musk, obviously, including many who have very different politics and less general weirdness than he does.
They want the pause to assess the impact of and develop policies to stay in control of machines that can outthink us, take our jobs, and potentially change civilization for the worse. Now, in a minute, I'm going to read some excerpts from the open letter, more than just the two sentences or so that are making the media soundbites for the most part, but let me bring in our guest first. It's Sigal Samuel, senior reporter for Vox's Future Perfect and co-host of the Future Perfect Podcast. Her bio page says, "She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience, and their staggering ethical implications."
I don't think she's Elon Musk. In fact, her headline on this story yesterday was more cheeky than those other ones. It said, "AI leaders," then in parentheses, "(and Elon Musk) urge AI labs to press pause on powerful AI." Sigal, thanks for coming on. Welcome to WNYC.
Sigal Samuel: Thanks so much for having me.
Brian Lehrer: There's a famous spoof edition of the New York Post from the 1980s. It's satire, it's a joke, just so everyone's clear on this. Like The Onion before The Onion, a fake New York Post, and the screaming headline on the cover of this fake New York Post is, "Nuclear War: Michael Jackson, 80 million others dead." That's what I thought of when I saw how all these headlines framed this around Elon Musk. Tell us, why did you put his name in sarcastic parentheses?
Sigal Samuel: For a few reasons. Basically, this isn't really about Elon Musk, but people, for better or worse, tend to pay attention and read whatever is going on when it includes Elon Musk because he's just such a lightning rod. Really, this is not about him. This is the AI leaders, and that's why they're the main feature of the headline because the point is that pioneers, recognized very important researchers in the field of AI, are now warning and urging all labs to press pause on developing really powerful AI. The point is that these are people who really know what they're talking about. Elon Musk, maybe not so much.
Brian Lehrer: Like I said in the intro, I thought I would read some excerpts from this open letter to the listeners. The lines I chose will take about two minutes to read. I think they're representative, but could you help set this up for our listeners by telling us to the extent that you know who this Future of Life Institute is that actually released this open letter as well as the scope of the 1,100 signatories?
Sigal Samuel: Sure. The Future of Life Institute is a nonprofit. It works to reduce catastrophic, potentially existential risks, so like major risks facing humanity, whether that's nuclear weapons or anything like that, but I want to emphasize it's not just them. They paired with folks from the Center for Humane Technology like Tristan Harris and Aza Raskin who listeners might remember from the Netflix show, The Social Dilemma, that came out about social media.
Broadly, the people signing this are people in tech and people in AI, whether they're veteran pioneers in the AI field who've been working on this for decades or whether they're just software engineers, coders, people who know the field, work in the field and are concerned.
Brian Lehrer: We hear Elon Musk, Steve Wozniak from Apple, and Andrew Yang, but it's also all those other people. All right. Here go the excerpts. I clocked this at about two minutes. We'll see how close I came. Here we go. It says, "Advanced AI could represent a profound change in the history of life on earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening."
It says, "We must ask ourselves, should we let machines flood our informational channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us?" We'll forgive them for using obsolete as a verb. "Outsmart obsolete and replace us. Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders." I'm surprised Elon Musk signed onto that,
but good.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. Therefore, we call on all AI labs to immediately pause for at least six months the training of AI system is more powerful than GPT-4," and we'll explain what GPT-4 means. "This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, government should step in and institute a moratorium."
It continues. "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. AI research and development should be refocused on making today's powerful state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal." Remember that word, loyal. We're going to come back to that.
"In parallel," it says, "AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should, at a minimum, include new and capable regulatory authorities dedicated to AI by ability for AI-caused harm, well-resourced institutions for coping with the dramatic economic and political disruptions, especially to democracy that AI will cause." It concludes, "Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer," summer means pause in this respect, "not rush unprepared into a fall."
All right. It took me three minutes. Sue me. Excerpts from the open letter calling for at least a six-month pause in developing major artificial intelligence projects there. Our guest, if you joined in the middle, is Sigal Samuel from Vox who covers consciousness and artificial intelligence. Listeners, what do you think? First of all, any signatories to the letter happen to be listening? That certainly could be the case. Call in and make your case. 212-433-WNYC or anyone else who works with AI in any capacity, 212-433-9692. Tell us a story from your field. React to the letter or anyone else with a thought or a question. 212-433-WNYC, 433-9692.
Sigal, let me get your take on some of that language. The six-month pause would be on training systems that are more powerful than GPT-4. What's the line they're drawing there? What's GPT-4 and what does it represent in the letter?
Sigal Samuel: GPT-4 represents the state of the art that we have right now. It's a large language model released by OpenAI, which is basically a system that, given a sequence of words, predicts the most plausible next word that would follow. As I'm sure a lot of listeners have been finding out by playing around with ChatGPT, this can do some surprisingly impressive things, whether it's writing you a song or summarizing text, and much more. GPT-4 is currently the most advanced thing we know of in the field. This is an argument saying let's pause there for now.
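To make the "most plausible next word" idea concrete, here is a minimal toy sketch in Python. This is not OpenAI's model or API; the tiny probability table and the words in it are invented purely to illustrate how a next-word predictor generates text one word at a time.

```python
# Toy illustration of next-word prediction; not a real language model.
# A real LLM like GPT-4 learns probabilities like these from vast amounts of text;
# this hand-written table exists purely to show the idea.

next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def predict_next(context):
    """Return the most plausible next word given the last two words of context."""
    candidates = next_word_probs.get(tuple(context[-2:]), {})
    return max(candidates, key=candidates.get) if candidates else None

prompt = ["the", "cat"]
while (word := predict_next(prompt)) is not None:
    prompt.append(word)

print(" ".join(prompt))  # -> "the cat sat on the mat", generated one word at a time
```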
Brian Lehrer: One chilling line to me was when the letter said future AI research should be refocused on making new systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. As I indicated when I was reading the letter, it was that last word loyal that stopped me in my tracks for a minute. Loyalty is a human trait, not one we think about computers being able to have or not have. What do you think they mean by loyal in that context?
Sigal Samuel: I'm glad you're bringing this up. Personally, I would not have used the word loyal there, but if it stopped you in your tracks, maybe that's a useful thing. Loyal, as you said, implies a human quality, and these are not human, sentient beings, they're machinery. I think what they're getting at there, they're probably alluding to something called the alignment problem, which is basically that even if you design an AI system that has no ill intentions toward you, per se, it doesn't want to destroy you, per se, it can still mess things up for you pretty badly if it ends up acting in a way that is not what the developer had in mind.
I'll give an example. It sounds far-fetched, but this is, in a nutshell, the alignment problem. Let's say you develop a super-capable AI system. You program it to solve some impossibly difficult problem, like calculating the number of atoms in the universe. Okay, sounds fine, but it might realize that it can do a better job of that if it gains access to all the computing power on earth, so it releases a weapon of mass destruction to wipe us all out, let's say a perfectly engineered virus or something, and now it's free to use all the computing power.
In this scenario, we would get exactly what we had asked for, which is the number of atoms in the universe very rigorously, beautifully calculated, but that's obviously not what we wanted, for us to all be wiped out in the process. That's a far-fetched example of the alignment problem, but experts have already seen and documented many smaller-scale examples of AI systems that end up doing something other than what their designer wants. For example, they might get a high score in a video game, but not by playing fairly or learning game skills, just by hacking the scoring system.
I think that's what they're getting at when they say loyalty, although that's not the word I would have used because it implies some sort of human quality.
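Here is a minimal toy sketch of the kind of specification gaming Samuel describes with the video-game example. The "game," its scoring bug, and the point values are all invented for illustration: an optimizer maximizes the score we wrote down rather than the skillful play we actually wanted.

```python
# Toy illustration of specification gaming (the alignment problem in miniature).
# The "game," its scoring bug, and the point values are invented for illustration.

def score(actions):
    """The reward we wrote down: points for clearing levels --
    but a leftover bug also awards points for re-triggering a bonus glitch."""
    points = 0
    for action in actions:
        if action == "clear_level":
            points += 10
        elif action == "bonus_glitch":  # unintended exploit in the scorer
            points += 50
    return points

intended_play = ["clear_level"] * 5  # what the designer had in mind
gamed_play = ["bonus_glitch"] * 5    # what a pure score-maximizer discovers

print(score(intended_play))  # 50: earned by actually playing the game
print(score(gamed_play))     # 250: earned by hacking the scoring system
# The optimizer did exactly what we asked (maximize score),
# not what we meant (learn to play well) -- that gap is the misalignment.
```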
Brian Lehrer: Well, even your description on Vox of what you reported on gave me those kinds of chills, that you write primarily about the future of consciousness, tracking artificial intelligence development. Consciousness is something I think of as a human trait too. What do you mean by consciousness in the context of your job covering AI?
Sigal Samuel: I'm very interested in consciousness, future consciousness, whether in the context of AI or neuroscience or animal cognition because humans aren't the only beings that are conscious. I think right now, we should be very clear that these AI systems are not conscious, they're not sentient, that that's not the world we're in, but I'm interested in the hypothetical, theoretical discussion that people have around whether one could ever design a system based in silicon or whatever that eventually develops something like consciousness. That's not where we are now, but I'm interested in that discussion.
Brian Lehrer: All right. You are not going to be surprised, Sigal, that all our lines are full.
Sigal Samuel: [chuckles] Great.
Brian Lehrer: Let's hear some people's experiences or what they're thinking about whether there should be a pause on major artificial intelligence development as these 1,100 signatories to an open letter are calling for. Jeanette in Manhattan, you're on WNYC. Thank you for calling in, Jeanette. Hi there.
Jeanette: Hi. How do you get the whole world to pause and control the development of AI programs which might harm us all? Why don't we continue to develop AI to stop an AI system in the future that might be catastrophic?
Brian Lehrer: Sigal, what do you think the signatories to the letter might say to that question?
Sigal Samuel: I think it's a curious impulse that some of us have to suppose that the solution to a potential AI-caused problem is more AI, like smarter AI. Jeanette, you're definitely not alone in thinking about this. In fact, OpenAI's stated intention, on the record, is to develop more powerful AI that they hope will do our homework for us in terms of the AI alignment homework, which is to say that they hope that their AIs will eventually help make the AIs safer and more aligned with our human goals. I don't see why we should be confident that that would be the case. I think it makes much more sense to think about how we humans can make decisions that will safeguard human safety.
Brian Lehrer: Although they almost call for what Jeanette is saying. In that same line that I was focusing on before from the letter, the letter says AI research and development should be refocused on making today's powerful state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. It almost seems like they are saying, "Let's continue to develop AI in a way that it develops in our interest."
Sigal Samuel: Absolutely. I definitely think we should make it more interpretable, more transparent, those are all great goals. It's just that that's not necessarily work for the AI to do on itself. I think that's work that we as humans need to insist we really want that and need that to be done.
Brian Lehrer: Another reason that I think some people have reacted cynically to this letter is not just the highlighting and the press coverage of Elon Musk and Andrew Yang's involvement, but also something you write about. That these are the people who got us into this mess and they're asking the people in the industry they created to stop work voluntarily. Is there any reason to think the companies and people that will profit the most will voluntarily stop their race to build the next big AI thing?
Sigal Samuel: I think there's a very, very understandable impulse to eye-roll here, when you see that some of the signatories include some of the exact people who have been pushing out these AI models that the letter's warning about and have been catalyzing this AI race. I think it's very fair to eye-roll at that, but my instinct is that this is too important to stop at the eye-rolling. I would rather we have our moment of eye-rolling and then say, "Okay, but you know what? Anyone who wants to be in this big tent that is about pressuring the industry to stop this mess from spiraling out of control, great."
It's important enough that I think we can all band together and say, "How do we reshape incentives in the industry so that the incentives are pushing more toward safety, interpretability, transparency, accountability, and not just pushing towards profit and racing?"
Brian Lehrer: Alan in Westport is calling with an example that's in a field that we probably wouldn't think of first or even early with respect to AI or chatbots, but Alan went somewhere last night he wants to report on apparently. Alan, you're on WNYC. Thank you for calling in.
Alan: Yes, Brian, good to be with you and your guest. The timing is propitious. I spent two hours last night on a seminar run by the non-adversarial divorce counsel here in Connecticut. During the presentation, we saw robots who spoke to us, we learned how clients seeking a divorce can contact us on our iPhones or computers to schedule appointments, get our intake forms, schedule meetings, and basically run a divorce mediation practice through technology. I left feeling profoundly dispirited. As for me personally, 65 years old, I've done over 1,000 divorces and probably 350 divorce mediations. Coming out of COVID, I don't want less human contact. I don't want more technology in my professional or personal life. I want less of it. If this is the direction that my professional world is moving in, I'm glad I'm as old as I am and not a young person just entering it now.
Brian Lehrer: That's interesting. Alan, you said you've done over 1,000 divorce mediations. When you do divorce mediation-
Alan: 1,000 divorces and about 350 mediations, Brian.
Brian Lehrer: The mediations are with couples who may or may not wind up getting divorced.
Alan: They always end up getting divorced. It's the rare one that doesn't. By the time folks walk through my door, one or both parties are seeking a divorce, so my job is to help them get to the other side of this, needless to say, profoundly significant life transition.
Brian Lehrer: Thank you very much. Any reaction to that story, just like less human contact and more machine contact in a field that's all about human relations?
Sigal Samuel: Yes, I think that's a very natural and relatable impulse. I think for me, actually what the bit of the open letter that your comment reminds me of is the bit that's just encouraging us to ask questions of ourselves. Like should we let machines flood our ecosystems in this way? Should we develop these sorts of AI systems that will potentially replace us in some of our jobs or some of the contexts that you might work in, for example? I think what that's getting at is in the tech world, there is very much this sort of myth of technological inevitability.
There's this myth that is propagated that once we start to develop these technologies, they just simply must-- it's like an inexorable march of progress, and it simply must proceed whether we want it or not. I think that's very fundamentally bizarre because that's a myth. We're humans. We can decide what machines we want to have created or not. It's very striking to me that companies like OpenAI declare, as part of their mission statement, that their goal is to benefit all of humanity. People like you are pointing out, "Hey, does this benefit me?"
Even maybe more to the point, the ultimate goal of companies like OpenAI is to build artificial general intelligence, potentially surpassing humans in a wide range of domains. Do we want that? Does all of humanity want AGI? Have people like you been asked for their input? It's notable to me that folks claim to be developing these technologies to benefit all of us, but it doesn't necessarily seem like folks like you have really been consulted on what would make your life richer.
Brian Lehrer: Yes, but you touch on another reaction that I was getting from people when I was talking to them about this off the air yesterday, which is that it is really inevitable that any technology that can be developed will be developed if someone finds it in their interest. I wonder if there is any example of resisting that temptation. The letter says society has hit pause on other technologies with potentially catastrophic effects on society. Do you know what they are referring to or are there any examples you could think of?
Sigal Samuel: Absolutely. Any listeners interested in this who tend to be inclined to think that technological inevitability is true, I would humbly encourage you to read my piece on Vox called "The case for slowing down AI," which really takes apart some of these myths. I list a bunch of examples where, in fact, we see that technological inevitability is a myth. There are plenty of technologies that we've stopped or put very significant restrictions on, and that includes technologies that could be very economically valuable. Big ones that come to mind are especially in the field of genetic modification.
There was the Asilomar Conference in the '70s, where early recombinant DNA researchers famously got together and organized a moratorium, like these AI folks want to organize a moratorium now. Then they developed ongoing research guidelines about what kind of experiments are going to be prohibited, what is okay to do, but if you're looking for examples of tech that we've decided to not really deploy, think about human cloning, think about genetic manipulation of humans.
These are potentially economically valuable technologies, but with the exception of one rogue Chinese scientist who did that but then was immediately imprisoned, we haven't pursued these technologies because we recognize that there are very real risks. Humanity certainly has the ability to press pause on or to forbid certain kinds of work.
Brian Lehrer: With Sigal Samuel who covers this sort of thing for Vox. Arturo in Nyack, you're on WNYC. Hi, Arturo.
Arturo: I think some of this is exaggerated because I gave myself a challenge to try to defeat ChatGPT. By defeat, I mean get it to make a wrong statement. On the first try, I did, and I did it by asking it to explain why something cannot happen. If you want that short statement, I can give it to you, but it's a scientific issue. It did come up with a wrong statement, so I strongly suspect that current AI cannot make inferences.
Brian Lehrer: Interesting. Arturo, thank you very much. Calling from Colts Neck, "I don't know, we might have to use artificial intelligence programs to screen phone calls from now on to get the towns right." Actually, I think that was a computer error in our system, funny enough. What about that? If a guy can decide to try to trick ChatGPT into giving an inaccurate answer on something and he was able to do it, what does that imply about whether we're exaggerating the power of these things?
Sigal Samuel: Yes. I think that it definitely is the case that sometimes capabilities are exaggerated. People might have the misconception that AI always tells you some objective truth. That's definitely not the case. It's definitely not the case with large language models like ChatGPT or GPT-4. The makers of these systems will tell you that clearly; you don't even need to try to get the AI to give you a wrong answer. It will often just give you a wrong, completely made-up answer. For example, if you ask ChatGPT to, I don't know, let's say, write you a paragraph summarizing the scientific literature on X topic, it will write you a lovely paragraph and it will cite some academic papers.
Then if you go and actually try to track down those academic papers and look at them, you might find that they're entirely fictitious. The papers don't actually exist in the real world. ChatGPT has just made them up because it's picking plausible words, words that statistically, probabilistically could follow next in the sentence, given the words that have come before.
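Here is a minimal toy sketch of why fluent output is not the same as true output. The plausibility scores and the "citations" below are made up; none refer to real papers. The point is that the generator only asks which continuation is statistically plausible, not whether it refers to anything real.

```python
import random

# Toy illustration: a text generator scores continuations by plausibility only.
# The scores and candidate "citations" below are invented; none are real papers.
continuations = {
    '"Smith et al., 2019, Journal of Applied Examples"': 0.5,
    '"Doe & Roe, 2021, Review of Illustrative Studies"': 0.3,
    "I'm not aware of any papers on that topic.": 0.2,
}

def sample(dist):
    """Pick one continuation, weighted by how statistically plausible it looks."""
    options, weights = zip(*dist.items())
    return random.choices(options, weights=weights, k=1)[0]

print(sample(continuations))
# Most of the time this prints a confident-looking, entirely fictitious citation,
# because "looks like a citation" scores as more plausible here than admitting
# uncertainty -- nothing in the process checks what is actually true.
```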
Brian Lehrer: Right. As another article on Vox said back last fall, AI text generators can now write essays as well as your typical undergraduate making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. What? Do you know about that one? I think people have been talking about the college essay thing for a long time already now but how about AI-generated artwork winning state fairs? Have you heard of that?
Sigal Samuel: Yes. AI is definitely shaking up the art world and different people feel differently about that. You will find some artists who are happy to consider AI a collaborator and use AI in their process of creating art. You'll definitely find a lot of artists who are really pissed about these models like Stable Diffusion or DALL-E 2, models being trained on artwork that they've made that is digitally available online. Now the model is able to do something where if you say, "Hey, make me a painting of a girl in the style of so-and-so contemporary artist," it might do a really wonderful job, but that's because the AI has been trained on the art made by this actual human artist without that human artist's consent or permission, and without that artist being paid for their work.
Brian Lehrer: The open letter, as I read earlier, calls for well-resourced institutions for coping with the dramatic economic and political disruptions, especially to democracy, that AI will cause. Do you know what they have in mind with respect to disruptions to democracy?
Sigal Samuel: Well, my guess would be that they have in mind, basically, the concern that this new AI era is going to repeat many of the mistakes of the social media era but on steroids. We've all seen how, you know, social media fragmented our political discourse. We're all probably familiar with talk of creating political echo chambers, what that has done to polarize society, in some cases arguably pushed societies toward more authoritarian governments, and generally led, arguably, to a breakdown in democracy in the sense that people can't even agree now on who won certain elections, right? There's such political polarization and a breakdown in the national conversation.
That was just with the very primitive systems we had, let's say a decade ago, in terms of Facebook and other social media platforms. Now we're starting to quickly integrate large language models, things like GPT, into our systems, and this is happening; this integration is happening, for example, with Google products. The concern is, what will this do to our democracy now that we have these much more powerful systems? What I really worry about is, you might have heard, actually, people like Elon Musk complaining that the current systems like GPT are too "woke," and they want to have AIs that will not have certain filters on them.
The concern I would have there is, if we end up with a scenario where many, many different groups are each building, developing, and tweaking their own AI models, you basically get these echo chambers, but now turbocharged by these bespoke models. Will that increase the worry about echo chambers and polarization by orders of magnitude? That's a real concern I have.
Brian Lehrer: One more call, MC in Red Bank, you're on WNYC. Hi MC, are you really in Red Bank?
MC: I really am in Red Bank.
Brian Lehrer: Okay, that's progress.
MC: Yes, well, thanks for this conversation, and an amazing guest who's clearly an expert on the topic. Just holistically, zooming out from large language models to the threat of AI overall. The spoofing and deepfakes threat is not only clearly evident in the enterprise space, but it's slowly leaking into the private sector, and it can become a nation-state threat for us. It's really important that data is able to have a layer of transparency and verification as we move forward through AI.
Brian Lehrer: How can we best do that? Do you want to keep going on that, MC?
MC: Sure. Well, I'm just being clearly transparent here, I'm the founder of Forcefield. We have patent pending technology to verify streaming media. To give you an idea of what DocuSign does for documents, the same thing is going to be needed for data, streaming video, and photos as AI becomes smarter, and as [inaudible 00:34:13] become more believable, much like the verification system that Twitter had that has since fallen. We will need checks and balances for regulation and compliance, and I think that's the only way through this.
Brian Lehrer: Thank you very much for your call, I appreciate it. To finish up, Sigal, you wrote that you are for the six-month pause but what should be done with the time is less clear. You also note that when the letter calls for new governance structures to regulate AI, the Congress lacks expertise in artificial intelligence. I'll add that we saw how cringeworthy some of the questioning was at last week's hearing just about TikTok. Do you have any thoughts on what a constructive government role could be?
Sigal Samuel: Yes, I'm for a six-month moratorium. Do I think that's enough? No. We desperately need more regulation in this space. We desperately need laws to hold powerful people in the tech world and Silicon Valley accountable because right now, it's a wild west, and there's very little accountability. The problem is, as you were mentioning with lawmakers, this is very advanced technology. Lawmakers genuinely want to know more about this but are behind in how fast-- We're all behind. It's so hard to keep up even if you're an expert. The research is moving so fast.
We can use that six months in part to just have increased education, literacy, both among lawmakers and among the public, about what is happening with these models, the state-of-the-art models, and really think about robust policy, regulations, laws that we can put in place to protect society, not just from what the AI models could do, particularly if we let them get even more powerful, with GPT-5 on the way, but really protect us from tech groups that are going to just basically increase power in the hands of the powerful at the expense of humanity, even as they claim to be benefiting all of us.
If we don't institute very careful regulations and insist on more transparency, interpretability, and accountability, we really are on track to end up with a world where we're really just concentrating power and riches in the hands of people who are already the powerful and the rich.
Brian Lehrer: Sigal Samuel, senior reporter for Vox's Future Perfect, and co-host of their Future Perfect podcast. Thank you so much for sharing some of your wisdom with us, we really appreciate it.
Sigal Samuel: Thank you so much, it's a pleasure.
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.