The U.N. Talks Artificial Intelligence
[music]
Brian Lehrer: Brian Lehrer on WNYC. Now we'll turn to a thing that happened at the United Nations General Assembly that you might not have seen or heard about. So much of the coverage had to do with either climate or the Middle East, but last Thursday, the UN announced it's establishing a global forum for discussions about governing artificial intelligence, similar to what exists for nuclear technology and climate change.
We're going to hear, actually, from one of the members of the UN Secretary General's Advisory Body on AI, Vilas Dhar, President of the Patrick J. McGovern Foundation. A few weeks ago, as some of you heard on the show, we had the perspective of what you might call an artificial intelligence doomsayer, who argued that the development of superintelligent technology will ultimately lead to the extinction of humans.
Our guest today is more optimistic. He's trying to navigate a path toward minimizing the harms, but also maximizing the opportunities that AI might present. His organization is dedicated to "advancing AI and data solutions to create a thriving, equitable and sustainable future for all."
He joins us now to talk about the news from the UN and why he thinks they have the opportunity to, again, quoting him, "create a new narrative of AI, one that serves public purpose rather than amplifying unjust profit or panic." He wrote that in a Time Magazine article on the topic. Vilas, thank you very much for joining us for this. Thank you for giving us some time. Welcome to WNYC.
Vilas Dhar: Brian, thanks so much for the warm welcome. I'm delighted to be with you.
Brian Lehrer: Starting with that quote from your piece in Time Magazine that you want to "create a new narrative of AI, one that serves public purpose rather than amplifying unjust profit or panic," where are the kernels of truth in that? Or where do you start to do that?
Vilas Dhar: Look, Brian, I've spent the last 25 years of my life building in and around AI, and I've just seen the public conversation swing between two extremes. On one side, almost unfounded hype, the idea that we promise miracles with little accountability and little conversation about how it might help everyday people. On the other, a sense almost of horror: what will happen when we are subjugated to our own creation?
It's really a pendulum of extremes that doesn't provide us any kind of path forward. I propose there's a third path for us, one that's hopeful, one that's less about the technology itself and more about building a confidence that we can, if we are properly equipped, make some good decisions about how these technologies might change our lives.
Brian Lehrer: Let me play a clip of that AI doomsayer as I described him, who was on the show recently. His name is Nate Soares. He's president of the Machine Intelligence Research Institute and has a book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. Here's about 45 seconds of what he said about the possibility of a good future with super-intelligent artificial intelligence.
Nate Soares: One of the issues with making a superintelligence, making AIs that are smarter than any human, is that humans being left around to be happy and healthy and free, this is a narrow target. When humans are running the planet, for all that we mess it up in various ways, a lot of humans are trying to make humans be happy, healthy, and free.
With AI, we don't know how to make them act in ways we like. We don't know how to point them in the good direction. If we can't point them in the good direction, they're likely to go in some other direction. It's not that there's exactly one good direction, but a good future is a narrow target.
Brian Lehrer: Vilas, do you want to respond to the central argument there, which I guess is humans have a hard enough time running the world in everybody's interest, but at least we can talk to each other about it and make course corrections. We can't do that with super-intelligent AI, or at least potentially we won't be able to do that. If it goes off the rails, then we're all doomed.
Vilas Dhar: Look, I've spent a fair bit of time talking to, learning from, and sometimes debating with the so-called AI doomers. I think it's important that we acknowledge that this is a real and potential threat that we should spend some time understanding and thinking about as a society. Because I think if you step away from the science fiction of what superintelligence might be and the speculation around it, this conversation actually taps into a much deeper and more important emotional frame.
When I talk to everyday people across this country, we're already feeling a sense of lack of control over technology. If you look at the last 25 years and the decisions that have been made about the internet or about social media, I'm hard-pressed to find a single person I've talked to who says, "I had a voice in what those tools became."
You take that really core human emotional response, you wrap it in a little bit of science fiction to talk about something that we can't really understand or contemplate, a super intelligence, and we get to where the AI doomers often start. The idea that we'll be subjugated, we won't be able to interact with or engage with these tools, that they will somehow control us. Now, let's accept that premise as a possibility, but let's also parallel it with something more fundamental.
We're making choices today about how these tools will be governed. If we make the wrong choices, I could absolutely see a world where these tools get out of control, where they take on some of these high threats and these scary possibilities. Let's not stop at, I believe you said something along the lines of, "if we build it, we die." What if we also paralleled that with "if we govern it, we thrive"?
We see all kinds of examples today where AI that's properly used, that's built with community inputs, that's often taken out of the control of a small set of tech companies and built inside of communities, massively improves people's lives. From AI tools that are transforming the delivery of frontline health care to tools forecasting wildfires and making sure that people have the information they need to make a good decision about how to protect their houses and when to flee the area.
Let's not take the entire conversation about what AI could be for humanity and limit it to a sense of panic. Instead, let's ask a different question. If we could bring back agency and control, if we could build governance structures that let us support a positive vision, what would that enable us to do? How would we make sure we have some guardrails in place?
Brian Lehrer: Is that what this new UN initiative that you're working on with the Secretary General is intended to do? Figure out the guardrails and whatever tools would promote the maximum benefit from AI?
Vilas Dhar: That's exactly right. Look, I will tell you, I'm as much of a skeptic of large institutions as probably anybody listening, but the Secretary General of the UN did issue a pretty clarion call a few years ago. He said, "Look, we need to understand AI not just as a technology; we need to understand what risks it presents for humanity.
We need to understand what opportunities it might create, and maybe most importantly, we need to understand what institutions we have to create to make sure that we drive towards that better version of the future." That three-part mandate is what guided the work of the high-level advisory body and resulted in the two institutions that you mentioned earlier.
One, a scientific panel that says, for the first time, let's get a really credible set of experts together who aren't merely employees of large tech companies to inform all of humanity about where AI is today and, maybe just as critically, where it might be tomorrow and in the days ahead. The second part of it was to say we have so many people who have, if you'll excuse me, overnight become AI experts.
Where are those folks who have spent years and decades really thinking deeply about what it might mean for our political systems, our economic systems, how we conceptualize human rights? Let's build a global dialogue that lets those groups and entities come together and have a conversation about how we govern responsibly into this future.
Brian Lehrer: Who has a question, who has a story? Who has maybe a suggestion for what kinds of governance guardrails and opportunity generators the UN should get involved with here on a global basis with respect to the future of artificial intelligence and humanity? 212-433-WNYC. 212-433-9692, call or text. Can you get specific? I don't know if you've started to draft a document at all for this, or maybe you have at your foundation.
What's an example of a guardrail that protects people from one of the dangers that I think you just cited, which is that AI gets used so heavily for profit, because it can generate so much profit for the right kinds of companies, that it comes at the expense of human well-being? What kind of guardrails could prevent that? Then I'll ask you about what kinds of opportunities the UN could put in place structurally to advance what AI can do for us.
Vilas Dhar: Absolutely. One of the things is the UN sometimes feels very far away from everyday people. Let's talk about what's happening right here in New York. Our foundation works very closely with governments, including the city government and the state, but also academic centers and nonprofits, to ask questions like how do we actually build a set of practices around responsible AI design and deployment that require community participation and engagement?
How do we ensure that principles of fairness and transparency are built into products and that we have the consent of people before they become part of an ecosystem where these AI tools are ever-present and everywhere? Just Friday, I participated in the launch of the New NY AI Exchange, a congregation of academics, of policymakers, of technical experts, and companies coming together to figure out locally what we should do to make sure that AI is used in ways that support and promote human dignity and civil rights.
At a macro level, on the technological side, there are guardrails like ensuring there are red lines in place. Those red lines might be things like ensuring that AI systems never have autonomous control of lethal weapons, or that they're not allowed to make decisions about human welfare without human contact and human engagement, with human control and decision making as a core part of the solution.
Now, these are rules that sit outside of what a private tech company might innovate and what a technology might make possible. These require us to innovate our systems of governance, the rules, procedures, and practices by which technology will be deployed in the world. At the UN level, we need some international compacts about this as well. AI is not the kind of tool that you can regulate in a piecemeal way by geography. Data flows easily across borders, and these algorithms have impacts at a global, regional, and local level. We need innovation in governance at exactly those same levels.
Brian Lehrer: Let's take a phone call. David on Shelter Island, who says he's a bioethics professor. David, you're on WNYC. Hello.
David: Hello, Brian. I'm so glad you're covering this topic, but I'm not thrilled with the way you're covering it. It's not your fault. It's the fault of our vocabulary. AI is not one thing any more than cancer is one thing. If we don't start differentiating in each conversation between decision support, aiding humans, and decision substitution, replacing human decision makers, and then further bifurcating that according to whether the data the AI is drawing upon is a static, identifiable, reviewable data set or generative, meaning the AI algorithm is pulling knowledge out of the Internet and it's happening so fast we can't monitor it, then we're not having the conversation we need to be having.
Brian Lehrer: Vilas, do you want to engage? Interesting call.
Vilas Dhar: Of course, David, I think that's such a good point. On one side, we've seen a lot of responses to AI that engage with a taxonomy of risks, that look at all of the different ways that this broad set of tools, and you're absolutely right, we lump it all together and call it AI, might affect so many different parts of the human experience. If you look at global regulation, for example, the EU has decided to regulate AI as a taxonomy of risks.
I think your question goes to something more fundamental. If we don't really understand how these tools and technologies are being built, what kind of data they're based on, and at the same time understand how they might be deployed, we're essentially just creating automatons without an understanding of what their second and third-order effects on society will be.
How do we address this? The first is to begin with ethics, norms, and values, and define a set of principles by which we are comfortable deploying these AI tools. You hit one of the topics right on the head: this idea of, will this tool be used to support decision making, or will it take our decision-making authority away from us? If I ask you that question, I think for most of us there's a pretty easy, common-sense, ethics and intuition-grounded answer.
I don't really want an AI system to be making decisions about my life without a human in the loop. In the loop can mean a lot of different things, but let's, for purposes of this conversation, say in control of the final outcome. Now, that's an easy answer from a "what do I like and what do I not like" standpoint. The translation of that into how we actually build these tools requires both cooperation and regulation of the technology creators.
It requires us to have a common set of principles by which institutions gauge these kinds of impacts. We need to build social resilience so that when we as everyday citizens see these tools being used in ways that feel orthogonal to how we think they should be, we have a mechanism to go out and dispute them, to engage with our governments and our policymakers, but also to act as consumers and, maybe most importantly, to act as moral activists to call for a realignment of what technology promises with what our society aspires to.
Brian Lehrer: David, I want to give you, the caller, a chance to come back on that. I also want to invite you, because you gave our listeners a lot to follow in your four different lanes of dealing with AI in your initial statement, to repeat at least two of them, because I think people can get it, and it's worth it. First, react to anything that our guest just said, and then go back to that differentiation that you were making between generative AI and something less than that.
David: Sure. Look, I can't argue with anything that your guest said except the enforceability. We have all of these AI systems in each of their four manifestations being implemented, while the discussion of regulation is barely getting off the ground. By definition, there is no preclearance of any of these tools.
They aren't even labeled as to what they are, so that the citizenry can understand which ones need closer scrutiny and which ones are already being supervised by some identifiable human who can be held morally and legally responsible. What we need is something equivalent to Underwriters Laboratories for fire safety of electrical devices, something that identifies which tools require a lesser or higher level of scrutiny.
Of course, in my four-part analysis, the box that is the most troublesome to humans is a decision substitution tool, where we want the computer to make decisions because it can make lots of decisions really fast. But if the AI algorithm is making those decisions using a generative data set, meaning it's pulling information into its body of knowledge unsupervised, it's literally impossible for humans to monitor that. If you don't know what information the algorithm is basing its decisions upon, it's impossible in real time to evaluate the legitimacy of the conclusions it can spit out at lightning speed.
Brian Lehrer: In generative AI, AI is making its own algorithms. Is that a simple way to put it?
David: It's reinventing itself nanosecond by nanosecond by pulling more information in, and it could be pulling that information in from everywhere. There was just recently a settlement of a lawsuit where one of the major AI companies was found to have been hijacking people's expert commentary and feeding it into the algorithm as a source of knowledge.
That's an interesting intellectual property question, but the more interesting question is whether the "expert knowledge" was actually accurate or was itself being manipulated or itself being generated by another AI algorithm.
Brian Lehrer: David, thank you very much for your thoughtful call. Maybe as a follow-up to that call, Vilas, as you represent this advisory body for the UN Secretary General on creating guidelines for protecting people from the destructive elements of AI and maximizing the potential of it, a listener writes, "How would these AI guidelines be enforced?
Private companies will do what they can to make as much money as possible. They're constantly breaking and bending the rules until they get caught. At that point, the penalties are small enough that they"-- well, and then it kind of trails off, but the basic question, you hear it. How would these AI guidelines that you develop at the UN be enforced? Because private companies will do what they can to make as much money as possible.
Vilas Dhar: A couple of things to say here. One, I want to acknowledge that caller. That was an exceptional set of commentary, and it reminds me of the work we did on the advisory body talking to hundreds or even thousands of experts around the world. It was great. I want to respond to this question of how we break the hold of tech companies over what our AI future looks like. There are two things that seem very apparent.
The first is, if we're okay with a world where all of the capacity to create these tools is so tightly held by a small set of firms, the rest of us really have no chance. The first thing we need to do is invest in building a massive AI capacity that sits across citizenry and civil society. It's something that we do at the foundation: as one of the world's largest funders of AI for public purpose, we're training new talent and making sure that they feel they have an opportunity to do something other than go to a private tech company in order to make a livable wage.
We also need to invest in building data capacity and compute that's actually owned by governments rather than by technology companies, and that can become an infrastructure for innovation in countries across the planet. We see examples of where this works really well. For example, a community that's facing rising sea levels because of climate change needs AI and predictive models to understand where to invest in supporting existing mangrove forests or building new dock walls.
There's no product that exists to serve that function that's coming out of a private sector tech company. It needs to be built as a public good, which means it needs financing, which philanthropy is currently providing. My hope is that governments will provide it at scale. We have a new initiative called Current AI that will aggregate, hopefully, about $2.5 billion in capital to fund public sector capacity around AI.
The second is we need to come back to this question of what regulation really is. It's so easy for us to perceive regulation, particularly here in the United States, as a way to limit tech companies. That might be a useful part of what regulation should be, but there's a much broader swath of what a democratic government should be doing in this space.
Things like, instead of simply limiting the worst possible harms, what does it look like to lay out a positive framework for what public purpose AI could be? That means investments in foundational research, developing application layers that are owned by the public, the use of public revenue to invest in the capacity to design these solutions, and values-based frameworks for how any AI solution should be applied when it intersects with the lives of American citizens.
Fundamental questions of limiting surveillance, ensuring consent, and making sure that people have agency to opt out. Now, the caller said very clearly, it's true that regulation moves much slower than technological innovation, but I come back to where we started our call, Brian. To me, this very much feels like a choice.
I think if we were able to say, with the same amount of emotional resonance we bring to conversations about superintelligence, that we need to respond to the harms that could be created tomorrow by today's existing AI systems applied without government regulation, and if we were able to say we need a new regulatory environment that's as positive as it is controlling of tech sector activity, one that's invested in a positive future, we could get somewhere useful.
Brian Lehrer: That's where you're going to try to get to, I guess, on the Secretary General's Advisory Body. I just want to throw in one other thing in our last 30 seconds. A number of listeners are bringing it up, and we've done separate segments on this as its own topic, but a listener texts, and this is just one of several like this, "Can you ask your guest about the huge amount of energy required to run and create AI? It doesn't seem very compatible with trying to tackle climate change." How will you tackle that on the Secretary General's Advisory Body? 30 seconds, and then we're out of time.
Vilas Dhar: Absolutely. Here's the thing that I can tell you today, the best uses of AI that we see out in communities are actually some of the lowest power implementations. Where is most of the power and water that's supporting these massive data centers and institutions going?
They're going to the large tech companies that are trying to build even more powerful, future-looking models. If we took some of that time and attention and used the AI we've already built to solve real-world problems for real-world people, we'd find that the environmental impact of that work is nominal compared to the kind of benefits it could create.
Brian Lehrer: Vilas Dhar, president of the Patrick J. McGovern Foundation and now a member of the UN Secretary General's advisory body on AI. Thank you so much for joining us and discussing these issues.
Vilas Dhar: Thank you, Brian.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
