Sam Altman’s Trust Issues at OpenAI
David Remnick: [00:00:00] Just a few months ago, Andrew Ross Sorkin, the financial journalist, was on this program and he quoted a figure that was really remarkable: virtually all of the recent economic growth in the United States, Sorkin told me, is investment in artificial intelligence. A lot of people are concerned that a huge bubble around AI is about to pop and take the economy with it.
And a few people continue to feel that AI is just overhyped. But I don't think there's really much doubt at this point that in our lifetimes, at least, AI is going to bring changes as significant as the Industrial Revolution 200 years ago. At the center of this world-changing technology is a man named Sam Altman, the CEO of OpenAI.
It was OpenAI that really brought artificial intelligence into most of our lives with ChatGPT, which exploded into our consciousness in [00:01:00] 2022. But the chatbots are just the tip of the iceberg. OpenAI is slated to go public this year, and it recently fundraised more money than any company ever. Ronan Farrow and Andrew Marantz have spoken with over a hundred people closely connected to Sam Altman, and with Altman himself many times. They began by looking, in particular, at the week when Altman was very suddenly fired from OpenAI and days later reinstated as CEO. That whole episode has been mired in secrecy and confusion. Ronan and Andrew see the firing, the blip, as they call it, as a key to understanding Altman and the problems with his leadership.
Their extraordinary investigation in the New Yorker is called "Sam Altman May Control the Future. Can He Be Trusted?"
Now, Andrew, Ronan, you compare Sam Altman to Robert Oppenheimer, who of course was [00:02:00] pivotal in developing the A-bomb. Oppenheimer not only developed a technology, but in a sense he defined an age in American life, the atomic age. But there's of course something extremely ominous about that comparison too. So let's begin this way.
Andrew, who is Sam Altman and why would you compare him to Robert Oppenheimer?
Andrew Marantz: Well, for one thing, we compare him to Oppenheimer because he compares himself to Oppenheimer. I mean, constantly. Throughout the rhetoric, before OpenAI existed, for why it needed to exist, there's this constant thread of analogies to the Manhattan Project.
So when he emails Elon Musk out of the blue in May of 2015, he says, hi Elon, this is Sam. He says, I think we need a Manhattan Project for AI.
David Remnick: Mm-hmm.
Andrew Marantz: And, and it does have this dual-edged nature to it, which is both we're gonna be the good guys and defeat the bad guys. Right. So
David Remnick: We're the Americans, and we're gonna defeat the Nazis.
Andrew Marantz: Yeah. But, so in this case, instead [00:03:00] of the Nazis, it's either China, in a national security context, or Google, in a competitive, corporate context.
David Remnick: But there's also the ominous part. Yeah. Ronan, it's the notion that the atomic bomb defined an age, the atomic age. It still looms over our politics and global security. What is the potential ominous aspect of AI? We, we hear about it as something that could be fantastic for the development of drugs, for all kinds of things, but it could also wipe out God knows how many jobs. But it goes darker than that.
Ronan Farrow: I will say, at the outset of this reporting, I was not myself convinced of the, you know, much-vaunted transformative impact of this technology. I, I really emerged from this more convinced. There's the scenarios that I think you are alluding to, right? The atom-bomb-esque ones. The idea that this really could lead to a kind of Terminator, Skynet scenario where a rogue artificial intelligence falls out of alignment with what we want [00:04:00] and takes control of nukes.
Um, but you don't have to buy into those extreme scenarios to also see the immediate impact: that the entire US economy is now propped up by a few companies that are all in on AI, with OpenAI at the center of it; that most credible projections very immediately foresee millions of jobs being exposed to disruption by this; that it's already, uh, taking form as a significant change agent in the way weapons are used on battlefields, that they can now just fire without, uh, human operators. That is something in the very immediate future. Um, it's taking form in bioweapon development. Uh, these are all things that are, are happening or on the verge of happening, not far-off fantasies.
David Remnick: Now, just to be clear, Andrew, what most of us have in mind in terms of AI is chatbots, which run on large language models.
Andrew Marantz: Mm-hmm.
David Remnick: How is that different from the aspiration of an AGI, artificial general intelligence?
Andrew Marantz: Mm-hmm.
David Remnick: And [00:05:00] that's something that doesn't exist yet.
Andrew Marantz: ChatGPT is a useful tool that can help you write emails. And AGI is supposed to be able to do any cognitive task that any person can do, arguably better. So it's general because it can not only write emails or play chess, but it can also discover new drugs or solve new medical problems or break new ground in physics. So we're not quite there yet.
David Remnick: What's interesting, Andrew, is that you quickly describe Sam Altman as somebody who really doesn't have that much computer science knowledge. Mm-hmm. That when it comes to the actual science, the actual technology, Sam Altman is no great genius.
So what does he bring to the table that makes him such a transformative figure?
Andrew Marantz: Yeah. In a sense, you know, what every entrepreneur does is, you know, bring together technical talent and investors and merge them into a solvent company. Um, so in that sense, it's not that remarkable. [00:06:00] I think what makes him unusual is he's not bringing this kind of bull-in-a-china-shop thing. You know, he's not Elon Musk or Jeff Bezos, just saying, I'm gonna bulldoze my way to power. He brings this kind of subtle sales pitch around 2014, 2015, 2016, where the public is feeling kind of burned out on the bluster of tech tycoons. You know, everyone's suddenly mad at social media, uh, Republicans for stifling their speech and Democrats for giving us Trump and whatever. And he comes in, and instead of saying, who needs Luddite regulators, he says, actually, please regulate me, because the thing I'm building is so scary that if you don't regulate me, everyone you love will die.
And it turns out to be a counterintuitively very good sales pitch. And even more importantly, it's a good pitch for recruiting.
David Remnick: Because, so, it's a pitch. He had been at, so yeah, something called Y Combinator. Mm-hmm. Which was a, a, a kind of factory for technology and technology investments, a development place, if you will. An incubator, yeah. And then he begins, right, he begins something called [00:07:00] OpenAI, which is what, Ronan?
Ronan Farrow: Well it started out as a nonprofit. And this is really at the heart of the story. Um, this is a distinction that Elon Musk is now suing Sam Altman and OpenAI over.
David Remnick: Well, who was involved in OpenAI in the beginning?
Ronan Farrow: So, at the very beginning, Sam Altman, who had a lot of irons in the fire, right? He was investing in a lot of different cutting-edge areas. Uh, and AI became a fascination. He saw the industry moving towards that, and an interesting thing happened where he had been very optimistic about technology in general, but he, I, I think, saw an inflection point, according to people we talked to who were in conversations with him at the time and knew his thinking, where a lot of the leaders in the industry were becoming more apocalyptic about it. And one of those people was Elon Musk.
Andrew Marantz: And in fact, the very people, the scientific researchers, who were most capable of building advanced AI were some of the most freaked out about it. So in order to pitch to them and recruit them, he had to come to them, according to a lot of people, and say, I'm as scared as you are.
David Remnick: And then we learned very [00:08:00] quickly, very, very quickly that a lot of people around Sam Altman discovered that, well, they just don't believe him. And they rebel against him. And they think of him as a liar. Why is that, Ronan?
Ronan Farrow: These two things are deeply interrelated. Uh, yes, Sam Altman could have presented this with a for-profit motive more openly, and perhaps earlier, his critics say. But part of the power of his pitch was that he went to Elon Musk and he said, I hear you. This could destroy humanity. We need to put something together and we need it to be safety first. And it can't just be about racing to get the technology first. Uh, this has to be a scenario where we're willing to slow down on development to keep it safe. It was all rooted in a fear-driven argument.
Andrew Marantz: And that was why Google was the bad guy, 'cause Google is for-profit. They're the mega-corporation, so we're gonna be
Ronan Farrow: The good guys. Yeah. And that was, as Andrew alluded to, a really powerful recruiting tool, because part of OpenAI's strength was not only [00:09:00] did Sam Altman, through these quite extraordinary powers of persuasion, present this sort of fear-driven rationale for why he needed to get the money for this. Uh, he also was able to go to the brightest minds in the field and say, this is a nonprofit. We may not be able to pay as much as Google, but we can give you something else, which is, we're the good guys.
David Remnick: So what happened a few years ago?
Andrew Marantz: The blip.
David Remnick: Yeah. What was the blip?
Andrew Marantz: So one of the top people who they recruited, who was offered $6 million a year at Google and turned it down in order to go work for the good guys, was this guy Ilya Sutskever, who was on the board around 2020. And in 2023, he started to get the feeling, as, as we quote him in the piece saying, I don't think Sam is the guy who should have his finger on the button, to return to the atom bomb analogy. And so he starts to rally the board against Sam. Now, this has been a lingering question for years in Silicon Valley: what did Ilya see?
What is in his secret memos that he compiled? Is there a smoking [00:10:00] gun? Is there some one thing that explains it all? And what we found, and the reason that you graciously gave us 16,000 words to explain it, is: there is not one smoking gun. There is this slow accumulation of detailed patterns of behavior that add up, in aggregate, to what people like Sutskever felt was someone who can't be entrusted with this world-altering technology.
David Remnick: For example?
Ronan Farrow: There are certainly specific episodes over the history of OpenAI that have not yet been extensively reported. Um, you know, we have a detailed account of a moment where, critics within the company allege, and people threatened to quit over this, uh, a plot was entertained at the highest levels of the company to sell the next generation of this technology, AGI, when it comes, to the highest international bidder, to pit China and Russia and the United States against each other. And, uh, that really did prompt some of the safety-minded people in the company to say, you know, as one tells us, this is [00:11:00] insane.
David Remnick: What would've been the repercussions?
Ronan Farrow: I mean, potentially a situation where, right, Vladimir Putin has disproportionate control of the most dangerous technology in the world. Now, we should say this is something that ultimately did not come to fruition. And OpenAI says, oh, it was just one of many ideas that was considered for a time. But there were people that
David Remnick: That seems a little easy, that, well, it was just an idea, that we were considering selling this to Vladimir Putin.
Ronan Farrow: Well, there you go. There you go. And, and.
David Remnick: It suggests a kind of amoralism, no?
Andrew Marantz: Well, and also, one of their defenses was, we weren't talking about selling it, we were talking about giving it to them. Like, much better. So, so these are all hypotheticals, and they are kind of crazy-sounding hypotheticals.
Ronan Farrow: What emerges is a newly deep picture of Sam Altman, um, through inside communications that haven't been out there before. What was behind his firing, why those criticisms linger, um, and exactly how substantiated they are. There are many, many instances. The, the famous memos that have never been made public, and got him fired within this [00:12:00] company, are now in there.
And we really go through the evidence in a very forensic way. You know, this is not a simple piece. I don't think people are gonna read this and, uh, have everyone reacting in one way. I think there are people who will look at this and they will come to the conclusion that, uh, maybe the sort of profit-minded people, who think, uh, safety as the main priority is more of a thing of the past, are correct. There are people who will look at this and say, this is really scary and dangerous.
Andrew Marantz: The phrase that the board used at the time was "not consistently candid."
David Remnick: Liar, I believe, is the word used, is it not?
Ronan Farrow: By many, many people. And even things like pathological liar, uh, repeated across many sources. We talked to more than a hundred people that are close to Altman in one way or another.
David Remnick: And you talked to Altman six times. So tell me what that experience was like. You, you went to see him more
Ronan Farrow: Than a dozen times, in the final fact check, yes.
David Remnick: No, no, no, I gather. What's he like? Because I, I interviewed him here at the New Yorker Radio Hour, and I have to say it was like interviewing [00:13:00] a cloud. He answered in a very mild way. Mm-hmm. And in a very skilled way too, I would say. Mm-hmm. But I couldn't get my arms around his answers sometimes. We talked about the job-loss issue, for example, and here's a bit of that interview from 2023.
Sam Altman: I don't think that most people won't work. I think, for a bunch of reasons, that would be unfulfilling to a lot of people. Some people won't work, for sure. I think there are people in the world who don't wanna work and get fulfillment in other ways, and that shouldn't be stigmatized either. Um, but I think many people, let's say, want to create, want to do something that makes them feel useful, wanna somehow, like, contribute back to society.
David Remnick: Well, let's, well, let's slow down for a second. What does this imply, in the much broader sense, about what change is coming down the road? In, in concrete terms.
Sam Altman: I think it means that we all are going to have much more powerful tools [00:14:00] that significantly increase what a person is capable of doing, but also raise the bar on what a person needs to do to be sort of a productive member of society and contribute. Because these tools will do, eventually,
David Remnick: You know, I'd say that I left that interview not all that happy with myself. I didn't feel like I got any great purchase on what Sam Altman was about. But you obviously did, with much greater time and much greater access, Ronan.
Ronan Farrow: People find it hard to wrap their arms around exactly what's going on with his motives. He is able to, and inclined to, tell different groups of people possibly conflicting things that make them all feel that they have the same concerns he has. That is an extraordinarily useful skill for a businessperson. But over time, what one person after another, I would say a majority of the sources we talked [00:15:00] to, found is that they just couldn't rely on any baseline of truth. That to an extraordinary extent, even on small things that are completely unnecessary to have any deception about. You know, we talk about an instance where he's in an office with colleagues, apparently claiming that he was, like, a champion competitive ping-pong player. And Altman says that this was probably a joke, but people thought it was serious enough that they were struck by the fact that he was then one of the worst ping-pong players in the office.
Mm-hmm. This is to give you an example of how banal it is sometimes. And then, as Andrew pointed out, people alleged that he has concealed some of his financial interests, that he, uh, deceived some, uh, board members and executives about the safety testing requirements of some product. So it also then filtered into serious stuff.
And I will say, I've never been on a story before where something is this peculiar, in that it's not a bright-line smoking gun. Mm-hmm. But, but it is so prevalent that it is almost, I, I'm barely being hyperbolic here, all anyone can talk [00:16:00] about after walking out of rooms with him. That is an extraordinary thing and very difficult to grapple with in a piece like this. We try to do it with great subtlety.
David Remnick: How is that different from Elon Musk?
Andrew Marantz: Well, so this is part of the, the, the presentation, the temperamental presentation, that Altman brings. It's not this kind of brash swaggering.
David Remnick: Not at all.
Andrew Marantz: Not at all. And so I think that's part of the pattern. If you hear from Elon Musk, I'm not boosting my own Twitter account.
And then you find out that he was, you're like, okay.
David Remnick: Yeah.
Andrew Marantz: Not particularly shocking.
David Remnick: Kind of Trumpian.
Andrew Marantz: Yes. The Altman thing is very low-key. It's very thoughtful. Part of his pitch is presenting himself as conscientious: I hear you. One person, we don't quote this in the piece, but one person once called him the Michael Jordan of listening.
David Remnick: Right. So he looks you in the eye.
Ronan Farrow: Yeah, exactly. And then reflects back
Andrew Marantz: what you want to hear.
Ronan Farrow: And, and I think this is an important sort of summary. After spending a year immersed in this, we represent in this piece a [00:17:00] range of perspectives. Uh, there are the, uh, defenders of Altman, of course, and we spoke to many of them, and they're represented in this piece. And amongst the critics, there's a spectrum. There's the people who are, you know, die-hard safetyists, and they say, look, this technology may kill us all, and he is Machiavellian and sociopathic. There's all these extreme terms that get used by some critics, um, and that therefore this is truly dangerous. There are people in between who say, uh, this is just dysfunctional management. That even if you're purely profit-motivated, uh, an executive of a company this important, uh, can't be making conflicting representations all the time. I, I tend to feel inclined, through my own dealings with him in this story, towards the analysis of Sam Altman that holds, you know, it's not that Machiavellian.
I think that he is someone who, he, he actually grapples with this in a new and more sincere way in this piece. He talks about having had some problems with [00:18:00] this. Um, he doesn't just pretend that this doesn't exist around him. Um, he talks about changing over time. Mm-hmm. He talks about the deep roots of feeling like a people pleaser, which, you know, I understand. Mm-hmm. Um, but I, I think he is reckoning now anew with the costs of that when it's taken to an extreme.
David Remnick: I'm speaking with Ronan Farrow, along with Andrew Marantz. They've co-written a long, deep investigation of Sam Altman and the rise of OpenAI. That's all in the New Yorker this week. We'll continue in just a moment. This is the New Yorker Radio Hour.
This is the New Yorker Radio Hour. I'm David Remnick.
Anchor: Today, US government [00:19:00] agencies are starting to enforce a ban that President Trump imposed Friday, barring the federal government from using AI tools made by Anthropic. The Silicon Valley company didn't want its
David Remnick: In February, a feud erupted between one of the leading AI companies, Anthropic, and the US government. In short, Anthropic was providing artificial intelligence capability to the Pentagon, but Anthropic wouldn't allow its Claude system to launch autonomous weapons or to be used in mass surveillance. In response, the Secretary of Defense, Pete Hegseth, and the Pentagon called Anthropic a national security risk. Anthropic turned around and sued. And into the breach stepped Sam Altman, the CEO of OpenAI, and he swiftly made a deal with the Pentagon and replaced Anthropic. This is the same CEO who said three years ago to Congress that he feared what could happen if AI was deployed incorrectly.
Sam Altman: I think if this [00:20:00] technology goes wrong, it can go quite wrong, uh, and we want to be vocal about that.
We want to work with the government. To prevent that from happening, but we, we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.
David Remnick: How did Sam Altman, who's now 40 years old, change his mind after all this time on such a fundamental issue? I put that question to Andrew Marantz and Ronan Farrow, who have jointly published a deep investigation of Sam Altman and his history at OpenAI.
We'll continue our conversation now. Andrew, what did Sam Altman say to his own employees about this military contract versus what he said to the government?
Andrew Marantz: Um, Altman said publicly, we stand with Anthropic. We think these are the right, uh, bright lines for them to hold. Privately, he was negotiating with the Pentagon for the same contract. And this gets to the Shakespearean rivalry stuff as well.
This is often personal. This is [00:21:00] often petty. But because the stakes are presented as being so high, like literal existential, I mean, people talk about it as, who will win the AGI dictatorship? People talk about it as, you know, who will get the golden ring? Who will get the ring of Sauron? So I think they just think, if you're Sam Altman or Dario Amodei or, or Elon Musk, I think in their minds it's like, anything is worth doing to win that competition, because, no holds barred, it is totally existential.
David Remnick: What are the financial stakes for Sam Altman?
Ronan Farrow: Uh, immense. Uh, one of the things that we talk about is, Sam had both positive and negative arguments he used to buoy this company.
Uh, we've talked about how he marshaled people's fear. Uh, he also really rallied people around the optimistic projections of what this technology is gonna be. And there are blog posts from him in recent years where he talks [00:22:00] about, you know, we're right on the cusp of, maybe even have cleared, the event horizon, is one term he uses. Um,
David Remnick: and God knows what that
Ronan Farrow: means. Well, a trajectory that will bring us very imminently to not only artificial general intelligence but a further development beyond, artificial superintelligence. And in turn, he itemizes, for instance, you know, curing cancer, uh, traveling to other planets, uh, essentially
Andrew Marantz: capturing the light cone of all economic value.
David Remnick: Okay, I read that. Time out.
Andrew Marantz: We put it in quotes.
David Remnick: What the hell did that mean?
Andrew Marantz: It's a sci-fi-ish thing that basically means capturing all the economic value in the solar system. So it involves space colonization, usually.
David Remnick: So, so we're talking, you know, comfortable living after that.
Andrew Marantz: Superabundance. We'll all be chilling. Uh, so some of it is the, you know, the Keynesian thing of, like, in the future we'll all have lives of leisure and we won't have to work. But you can see how this stuff
David Remnick: too late for me.
Ronan Farrow: And, and he [00:23:00] does, for instance, he believes, or at least seems to believe, and he advocates for, you know, universal basic income, totally, as a part of this future. But, but when we, when we asked, for instance, yeah, um, you know, what do you think about all of these economic projections that hold that so many jobs are going to be disrupted by this, the reflection and the grappling with it was not, in my view, terribly deep. Um, you know, he does believe, or seem to believe as he says them, all of his optimistic projections. Um, but, but then
Andrew Marantz: He also says it's a bubble at the same time. And so it's like, how do you,
Ronan Farrow: He basically, on the joblessness, uh, and the risk of the bubble, you know, he says, uh, well, actually it's just gonna make everything better. Uh, everyone will have access to ChatGPT, that's gonna allow people who are unemployed to, I mean, I'm, I'm paraphrasing broadly, but essentially it's gonna allow for more startup creation, and that's gonna help everyone. And we'll have a big old foundation and we're gonna do some charitable activity, and that'll help
Andrew Marantz: Also. Also, you see, I mean, [00:24:00] Ronan, you brought up his blog posts. Just to talk about the shifting pitch over time. You know, people call him this great pitchman. The blog posts now are very bullish, optimistic. Mm-hmm. We're gonna cure cancer. You go back two or three years and the blog posts say, we need to solve alignment, or we will have a rogue AI that stamps out humanity. The
David Remnick: alignment problem.
Andrew Marantz: So the alignment problem is supposed to be, if we build a superintelligence that is not aligned with our interests, it might, so it, it doesn't have to come alive and become HAL 9000 or whatever to kill us. We quote a, a blog post in the piece of someone saying, all it has to do is, is, you know, be misaligned with our interests and accidentally kill us. And the person who wrote that blog post was Sam Altman.
David Remnick: One of the things that you're kind of getting at here mm-hmm. Is, is a, is a politics. Mm-hmm. A kind of
Andrew Marantz: diplomacy,
David Remnick: ethereal politics.
And, you know, some of his rhetoric is kind of utopian lefty, and yet he's made his accommodation with, and [00:25:00] friendship with, the Trumps. Tell me about that.
Andrew Marantz: Yeah. So for a long time he donated to Democrats, and he said Trump is, is this unacceptable threat to America.
Ronan Farrow: compared him to Hitler.
Andrew Marantz: Mm-hmm. Um, as did his vice president and several other people.
Yeah. And then, you know, in 2024, he starts to, you know, shift.
David Remnick: He dialed back the Hitler,
Andrew Marantz: dialed back the Hitler
David Remnick: Uhhuh.
Andrew Marantz: Um, we almost went with that as the headline of the piece. But, um, and then he, um, starts to say, you know, I think this country will be okay no matter what happens. And, and, and it seems very clear, actually, according to a bunch of, um, Biden administration national security officials who we spoke to,
David Remnick: he used to go to the Biden White House all the time,
Andrew Marantz: all the time, and encourage them to regulate more heavily,
David Remnick: right?
Andrew Marantz: And say, this executive order doesn't go far enough. We need to restrict and regulate this technology more.
Then Trump comes in. Literal day one, literal first day of the Trump administration, they announce massive new data-infrastructure projects, and then Trump [00:26:00] and his administration start blessing this acceleration. Off to the races.
David Remnick: And the rhetoric of the, of the Trump White House is, safety is a, is a, is a false concern. We heard that from, I'm thinking, J. D. Vance says this in the piece, uh-huh.
Andrew Marantz: And David Sacks, and, yeah.
Ronan Farrow: Safety has fallen out of favor in Silicon Valley and Washington to a great extent. And one of the things we document in this piece is that Sam Altman's various transformations and his conflicting stances at various times, um, also represent a, a wider sea change. Mm-hmm. The moment of the blip, when people in this industry were still
David Remnick: coup attempt,
Ronan Farrow: The coup attempt, the firing. When people in the industry were still uncertain about whether you should treat executives who shape this transformative technology as just other executives and hold them to those normal standards, yeah, or whether this requires people with an elevated level of integrity, because they hold our future in their hands. Mm-hmm. That was unsettled at the time, in a way that really led to these events. Yeah. Where you had a company that started as a nonprofit, was [00:27:00] still to some extent a nonprofit. Mm-hmm. Um, a bunch of people who joined, signing up for that mission, and they said, this guy is lying too much and he has to
David Remnick: Go. And by the way, and just for the record, now OpenAI is.
Ronan Farrow: Now it is a, a, it is a for-profit and no less
David Remnick: than the others.
Ronan Farrow: And it's, it's, they call themselves,
Andrew Marantz: Yeah, they call themselves a PBC, but, that's what we were told.
David Remnick: What's a PBC?
Ronan Farrow: A public benefit corporation. Right. But functionally it is a for-profit institution. And what was once the main event, the nonprofit, now has a 27, 26 percent, right? Yeah. It has a minority stake.
Andrew Marantz: Yeah.
Ronan Farrow: The, the inflection point of the firing, and the reason we look at it as an important one in this piece, is it's a moment where that argument, that we should have this elevated standard, the rubber met the road. Mm-hmm. It was tested. And what we see afterwards is him clawing his way back, in this way, um, partly on the strength of going to a bunch of investors, as we document, we get inside of those rooms and we see how those conversations went, and saying, like, hey, these people just fired me for this, uh, [00:28:00] you know, vaporous thing.
Um, now, he was able to do that in part because, I think, that old board that fired him, uh, did not acquit themselves strategically in many ways, shall we say. Uh, there was not a lot of transparency. But he was able, uh, in making the argument for himself, also, I think, to assert and get into the bloodstream a broader argument: that that kind of safety-mindedness no longer has a place in a race to achieve AGI with massive economic stakes. And today you see that reality, that there is a very fair argument that these firms are, on safety, engaged in something of a race to the bottom. Mm-hmm.
David Remnick: Elon Musk isn't the only person that's suing OpenAI. How many court cases are there currently against the company that are related to suicides and murders allegedly prompted by ChatGPT?
Andrew Marantz: Yeah. Um, many ongoing suits for various different things. So some of this has to do with liability. Um, you don't have to believe [00:29:00] in the sci-fi existential harms to see the very real harms that are already occurring. Um, psychosis that is furthered by addiction to these.
David Remnick: Okay. How would, how would that happen?
Andrew Marantz: Um, well, this is not just like a Google search where you, you know, this is an ongoing kind of relationship that people have with a chatbot, and the chatbots can be sycophantic.
Ronan Farrow: ChatGPT is now able to convince human observers that it is human more often than humans are able to.
So that,
Andrew Marantz: so we're in,
David Remnick: Please be more specific before I throw myself out the window.
Ronan Farrow: It is, it is. This is, this is research published last year: that when you look at this test of, can a chatbot deceive a, a human observer into thinking that the chatbot is a human, uh, human beings pass that threshold less often than ChatGPT.
Andrew Marantz: And people are doing these quizzes all the time of, you know, can you tell what's AI writing and what's Cormac McCarthy? And, you know, people are no better than 50 percent and stuff. So [00:30:00] we don't,
David Remnick: That bodes well for our profession.
Andrew Marantz: Yeah. Well, so this is why, I was gonna say, we word people don't like thinking about this, but it turns out that it kind of is the case that if you take all the words in the universe and crunch them onto a chip, it can kind of create a golem of new words that it can kind of infinitely spit out.
David Remnick: And we should point out,
Andrew Marantz: Yeah.
David Remnick: that Condé Nast, mm-hmm, the company that owns the New Yorker and Vogue and Wired and all the rest, has a deal, like many, many other publishing companies, with OpenAI and, you know, other such companies. Mm-hmm. For, for that very purpose.
Andrew Marantz: Yeah. And like all
David Remnick: these, they're limited. And, you know, I think the fear, for a lot of publishers, mm-hmm, is that they're gonna take it anyway and we'll get nothing for it.
Andrew Marantz: Mm-hmm. Like a lot of these dual-use technologies, you know, even the most dire critics of this stuff can't deny that it's useful and fun and engaging. And, you know, if it weren't [00:31:00] so useful, it wouldn't pose such an economic threat. It wouldn't,
Ronan Farrow: And, worth pointing out, useful in a very sincere and deep way. I mean, when you look at the medical applications, mm-hmm, um, lives are being saved.
David Remnick: For diagnostics.
Ronan Farrow: For diagnostics and research. It is a game changer for things like, you know, severe-weather warnings, um, which may sound banal, but that is truly a lifesaver. Yeah. There are all kinds of applications where this is the real deal already.
David Remnick: I'm speaking with Ronan Farrow and Andrew Marantz, who have just published a long and very thorough account in the New Yorker of Sam Altman's tenure as CEO of OpenAI. Our conversation continues in a moment. This is the New Yorker Radio Hour. Stick around. [00:32:00]
This is the New Yorker Radio Hour. I'm David Remnick, and I've been speaking today with two of our writers, Ronan Farrow and Andrew Marantz. They spent over a year reporting on Sam Altman, the CEO of OpenAI, for a piece that's just been published in the New Yorker this week. They wanted to understand Altman and his vision of the future, but they also wanted to understand where he comes from, including parts of his life that Altman himself seemed wary of analyzing.
You wanted to get a sense of who this guy was. I think he told you that he was the victim of a really serious homophobic attack when he was a teenager, although he was reluctant to go into it in much detail. Um, how does he think, Ronan, that that moment shaped him, if at all? And how do you see it?
Ronan Farrow: If I'm being honest, through this reporting, I do, I feel oddly, um, somewhat connected to [00:33:00] Sam.
Um, he may, you know, have a different view of the rapport. I do think I understand him on a certain level. Um, you know, I had these moments with him where I would ask, like, what is your personal human experience of so many people around you saying these things about your honesty and integrity? 'Cause for me it would be, um, such a terribly devastating thing to hear.
Um, and I will say, on this point, he is somewhat resistant to self-reflection, is my impression, as he is on a range of these questions, when we talked about his roots and some of these perhaps formative factors: his sexual identity, his Jewishness, um, you know, his socioeconomic background. Um,
David Remnick: which was prosperous.
Ronan Farrow: Prosperous, right. A kind of fancy suburban family. His mother, yeah, upper middle class, a dermatologist, very well connected. His father is a kind of housing activist. Um, he often, and particularly on the sexuality question, [00:34:00] you know, he'll even cop to the lack of reflection. He's in the piece saying, well, yes, you know, I got beat up this time. I don't wanna talk about it, because I think people will see it as me playing for sympathy and being manipulative. So he has internalized some of these critiques. Um, and also, sincerely, in that moment and later, when we were fact-checking this piece, he just really emphasized, I wanna be dismissive of that.
Uh, you know, this did not define me. And when I would ask him some of these questions, you know, what is your experience of this? Have you grappled with this in therapy, for instance? Do you do therapy? You know, he recites a lot of the kind of conventional West Coast wisdom about, like, breath work, and said he has tried, and liked, therapy. Yeah, vaguely. But it was very clear that, amidst this stratospheric rise and a lot of these difficulties, and what would be, for most of us, I think, painful criticism, there has not been a [00:35:00] process of really looking inward, of reckoning. That is my take, having, you know, asked him these questions. He sort of is breezily dismissive. And I think that runs through a lot of the professional situations that we narrate: that there are other people who really have, um, a relationship that matters to them internally with the truth of a situation. And I think he is uniquely suited in some ways to this job, because he really can effortlessly shift between one version of reality and another as he is marshaling people to his cause.
David Remnick: You spent a lot of time in the last year
Ronan Farrow: mm-hmm.
David Remnick: looking into some very lurid allegations about Sam Altman's personal life, sexual life, um, what he may or may not have done. What did you conclude? What can you say about that?
Ronan Farrow: So, so this is one area where it's important to note: we didn't set out looking for that. Part of what makes this circumstance extraordinary is the prevalence of this allegation that Sam Altman lies all the time. [00:36:00] Another thing that makes it extraordinary is that, in addition to those, in my view, more substantive critiques that are so widespread, there are incredibly widespread claims about his personal life that don't stand up to as much scrutiny, again, in my view, having looked at this for months and months. Uh, we capture the way in which, actually, ironically, the existence of these falsified or thin claims, trumped up by rivals, kind of obfuscates the real criticism. Um, the atmosphere of conflict in this field is, we quote one executive saying, Shakespearean. The most dangerous, uh, worse,
David Remnick: worse than the rise of the internet, um, and other businesses, the railroads, or whatever it
Ronan Farrow: might be. You could, uh, historians could debate, but certainly we talked to many people in this field who say absolutely yes.
And, you know, one of the things we encountered, and I'm not talking about like a little bit here and there from a rival, right? I mean, I got [00:37:00] the better part of a dozen incoming calls from government officials, from people at investment firms, from rivals. You talk to anyone in this industry, and they will cite, in many cases as common knowledge, claims that, you know, Sam pursues minors. Um, that's a very persistent one.
David Remnick: And let's quickly stipulate that there's no,
Ronan Farrow: And we, we found no evidence of this. Sam and I had direct conversations about it, and while, obviously, you know, people have been telling us to take things Sam Altman says with a grain of salt, I did feel there was a degree of sincerity in some of those conversations. In addition to our on-the-record conversations, we had, you know, frank personal conversations, where I think I got a picture of his relationship with these allegations. And we put what the facts we uncover can sustain in the piece, which is: we found absolutely nothing. This appears to be untrue.
David Remnick: By,
Ronan Farrow: And it's pushed by, by his opponents. I mean, we have dossiers.
David Remnick: Can you say who?
Ronan Farrow: From Elon Musk. Mm-hmm. Intermediaries, in some cases paid by Elon Musk. And,
David Remnick: And if Elon Musk were sitting here, he'd say what?
Ronan Farrow: Well, uh, we certainly [00:38:00] reached out to him for an interview about it, and he declined. He was busy. But we did fact-check with other intermediaries of his, and he has responses to some of the things that we say on this matter. Uh, it is incontrovertible that Altman's rivals are pushing this, and hard.
David Remnick: Microsoft has been a huge funder of OpenAI, mm-hmm, with a lot of exclusive access to their products. Mm-hmm. And just recently it was reported that Microsoft is considering whether to sue OpenAI and Amazon over a deal that seems to go around Microsoft. Explain what this is all about, 'cause it seems like a mess to me.
Ronan Farrow: Uh, well, it's one of several examples where business partners or business rivals accuse Altman and OpenAI of making conflicting announcements about deals that they say are incompatible.
Andrew Marantz: Mm-hmm.
Ronan Farrow: Now, I will not bore you with the technical particulars of this, but essentially Microsoft is the exclusive provider of a certain kind of [00:39:00] foundational model, and OpenAI announced a deal with Amazon on top of that, actually on the same day that they reaffirmed that Microsoft exclusivity. And Microsoft says this new deal, which is to do with enterprise products, uh, that allow businesses to build agents, mm-hmm, they say that that depends on the very thing that Microsoft is supposed to control exclusively. Um, and, you know, look, they've since then released sort of mutual statements, Microsoft saying, like, we gather that OpenAI understands their legal obligations here, right?
So it's tense. But you talk to Microsoft executives behind the scenes, and they say, first of all, this was absolutely in conflict, that he announced two conflicting things, that there is no way for him to achieve the thing that is promised in the Amazon announcement, which is essentially, we're gonna build a new solution that de-conflicts these things.
Andrew Marantz: Well, and this also gets to the circularity of a lot of these deals. Like, a lot of times, a meme that you'll see when describing one of these: OpenAI has a deal with Nvidia, and Nvidia has a deal with Amazon, and [00:40:00] Amazon has a deal with OpenAI. People will just put a picture of an extension cord plugged into itself. Like, there are a lot of these deals that are like, I buy your stuff, you buy my stuff.
David Remnick: Andrew, how big is OpenAI? Does it make a profit? They certainly had lots and lots and lots of investment.
Andrew Marantz: Well, how big and do they make a profit are different
David Remnick: questions. Exactly.
Andrew Marantz: They are burning through cash at an enormous rate. I mean, while this piece was in production, we kept having to change "they've just sustained the biggest fundraising round in history" to a higher number, 'cause they kept closing more fundraising rounds. The latest, I think, was 122 billion. And at the same time, they're still, as far as we can tell, losing money.
David Remnick: Where does the money go?
Andrew Marantz: So a lot of it is going into data centers.
David Remnick: Data centers, right?
Andrew Marantz: So they're building one in the UAE that is gonna be seven times as big as Central Park and use about as much electricity as Miami. And so
David Remnick: this is great for the environment.
Andrew Marantz: Yeah, perfect for the environment. But, but don't worry, because a GI is gonna fix the environment.
David Remnick: Um, I feel better already.
Ronan Farrow: And, [00:41:00] and geopolitically, it's worth noting, a big thread in this piece is, in addition to talking about some of those early ideas that were raised about, you know, pitting powers against each other, there is still a present-day reality of this computational power becoming, as Sam has said, you know, the new currency of the world, and the factor that may shape the balance of power between nations.
And so there are people within this industry and within the national-security establishment who feel that this unrestrained, and in Altman's case often very nakedly transactional, approach to getting as much money as possible, which inevitably in the current reality means doubling down on Middle Eastern money, right? There are those critics who say that is concentrating a new kind of power that is incredibly geopolitically sensitive under autocracies, um, and that may eventually be beyond our control. Um, that you could wind up with a situation where a dictator has a [00:42:00] disproportionate ownership of the most powerful technology on Earth.
Mm-hmm. And that that could pose a, a real national security threat.
David Remnick: But that's where the money is. Mm-hmm.
Ronan Farrow: And that is where the money is. And we should say it's become standard in Silicon Valley to fundraise from the Middle East,
David Remnick: Hollywood, everywhere.
Ronan Farrow: Everywhere. Yeah. But, uh, I will say, even against that backdrop, there was a lot more grumbling about Altman's sweeping vision of really getting such a massive amount of money and in exchange promising such a massive amount of infrastructure. Also, the UAE particularly.
Andrew Marantz: Also, where you get your funding for a movie is different than where you put, as some people call it, a country of geniuses in a data center. In other words, if you really think that you're growing a new form of superintelligence.
David Remnick: You wanna keep it in the family.
Andrew Marantz: Exactly.
David Remnick: I'm gonna ask you both. First, Ronan: I get the sense that we all know at this point what Elon Musk means, and what Elon Musk wants. Sam Altman is more mysterious, it seems to me, Ronan. Um, what does [00:43:00] Sam Altman want? What's the grand ambition here? To be the richest man in the world? At some point he says that he was more interested in power than he is in money. What's your sense of that?
Ronan Farrow: I don't think it's just a caricature or hyperbole to say Sam Altman very often wants what you want in this moment, and, you know, that is the,
David Remnick: What does that mean?
Ronan Farrow: That is the crux of the matter. Sam Altman, more often than not, it seems, across all of this documentation and sourcing, reads what the person on the other side of the table needs and wants, and he tells them
those are the things he wants.
David Remnick: Gimme an example.
Ronan Farrow: Well, the entire founding story of the company: at a time when safety fears had run wild and were kind of the animating engine of discourse, he was the safety guy, and he told everyone that was his deepest belief. [00:44:00] Um, now he says, and not totally unfairly, you know, evolution over time is a real thing, and I've had to adapt to changing circumstances. But even allowing for that, in both that big picture, where he's gone from being the safety guy to being the, he would dispute this, but I would say it's fairly accurate to say, no-regulation guy, um, and also in the micro picture, in all of these small interactions.
Uh, you know, we tell the story of an interaction where he summons Dario Amodei, this now competitor from Anthropic, when, prior to the founding of Anthropic, Amodei was a senior person at OpenAI, into a room along with Amodei's sister, who was also at the company, uh, and accused them both of being overly political and working against him. And then they called in another executive, uh, who Altman had said was the source of this rumor. And that executive said, I never said that. And Altman said, well, I never said that either. According to an account from someone there, and we talked to all the people involved. You can see the exact reporting.
But [00:45:00] the gist is, you know, he will often in the same moment reflect different views and desires. He is, by his own telling, a profoundly conflict-averse person. And I think the piece holds a lot of sympathy for the emotional reality of that, um, and the understandability of that, the relatability of that. Um, but it also doesn't explain it away. This is a trait that is present in him to a truly extraordinary extent, and that has ramifications for his businesses and for the world.
David Remnick: Andrew?
Andrew Marantz: Yeah, I think it's important to grasp how much of the initial pitch for this was based on a view of the world that has completely flipped over time. Because we think of it through the lens of, sure, businesses put out nice press releases, and they say, we're not gonna be evil, and then they redefine what they think is evil over time. This is not that. This is, Altman went to great lengths, wrote blog posts, took [00:46:00] people out to dinner, to convince people, to say, I am the person you can trust to usher this technology into existence without literally killing everyone on Earth.
That was literally the pitch. So then for him now to say, we don't need to worry about all that doomer stuff, like, it's kind of one of those, either you were lying then or you're lying now. And I personally,
David Remnick: or he might argue that something happened in the research on AI that made him less alarmist.
Andrew Marantz: He might, but he didn't.
What he says is, um, he redefines the problem.
David Remnick: Yeah.
Andrew Marantz: He redefines the alignment problem away from this civilizational, existential thing. Now he defines the alignment problem as something that's annoying, you know, Instagram algorithms that tempt you to waste your time. So he just kind of shifts the pitch.
Ronan Farrow: Uh, I think it's important to say, sometimes people see, you know, heavy-duty investigative reporting from the New Yorker, maybe particularly my work, and they think, like, [00:47:00] this is a hit piece about a villain. Mm-hmm. Um, I would of course dispute that characterization. We look at these complex problems fairly. In this case in particular, though, this is something different than cases where we're looking at a single clear-cut criminal allegation.
Mm-hmm. There are people in Silicon Valley who think of Sam Altman as a villain. For what it's worth, I emerged from this reporting not thinking of Sam Altman as a villain. Mm-hmm. I think he is a complicated character. I think he often believes what he is saying in the moment. I think what he says about this being rooted in conflict aversion is very likely real.
Mm-hmm. And, as one person close to him told us in the piece, he really seems to lack any self-doubt. So that is a superpower. He believes it, I think, when he says it.
Andrew Marantz: Mm-hmm.
Ronan Farrow: And I think he's grappling with the consequences. The industry needs to grapple with the consequences too. That, I think, is the main case I'm making.
Andrew Marantz: I really agree. And I think, uh, one way of putting this, David, is those memos that were the initial thing that got him fired: if [00:48:00] those memos had contained a simple enough smoking gun, we would've known about it long before this.
David Remnick: Yeah,
Andrew Marantz: they don't contain a single smoking gun. What they contain is a pattern of behavior that you need a 16,000-word New Yorker piece to elucidate, genuinely.
David Remnick: Ronan Farrow, Andrew Marantz, thanks so much.
Ronan Farrow: Thanks David.
David Remnick: You can read that piece at newyorker.com. It's called Sam Altman May Control the Future. Can He Be Trusted? And you can subscribe to the New Yorker there as well. Newyorker.com.




