Regulating Big Tech and AI

[music]
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning, everyone. Two days before Thanksgiving at this contentious time in New York City and the world. Later in the show, we'll open the phones on the question: What are you anticipating at your Thanksgiving table regarding the Israel-Hamas war, and how do you plan to handle it? Do you plan to bring up the subject, do you hope to avoid the subject, or how do you hope to handle the subject if others bring it up?
You don't need me to tell you how intense people's feelings are about this conflict and the reactions to it in this country. When your calls come in next hour, we will give you a smattering of what's in the press now, anticipating Thanksgiving, or just about communities fraying generally over this. That's coming up. Also, it's Tuesday, so we'll have our climate story of the week, which we've been doing on the show every Tuesday all this year.
Today, America's first carbon capture plants are now open, prompting the question, can the technology capture, move, and bury enough carbon dioxide to be worth the risks? What would it mean to replace a lot of oil and gas pipelines with carbon dioxide pipelines, for example? We will examine risks and benefits today. We will start here. There are several headlines from the world of Big Tech right now, some of which are just battle of the tech bros corporate intrigue, some of which have implications for all of us, some of which might be a little of each.
Maybe you heard about the latest Elon Musk controversy, he promoted a tweet widely considered anti-Jewish, then followed up with his own tweet that was condemned by many as both anti-Jewish and racist. By the end of last week, major companies, including Apple and Disney and IBM, suspended their advertising on the platform now known as X.
Maybe we shouldn't be surprised that Musk also announced that the company, which he bought for $44 billion, remember that price tag, when it was called Twitter, is now worth about $19 billion. That's by Musk's own admission. Whatever it is he's doing in the name of his ideology or free speech is costing him big time. Then there's the big Google antitrust case. Sara Morrison, senior reporter for Vox, who covers data privacy, antitrust, and Big Tech's power over us all, and who'll join us in a minute, wrote an article called The Secrets Google Spilled in Court: What we learned and didn't learn from the big Google antitrust trial.
There's Europe trying to take the lead on regulating artificial intelligence, but different countries staking out different positions in the last few days. For the moment, that seems to be going south. Then maybe the story you've been most likely to see today, but also most likely to be confused by, the melodrama at the powerful artificial intelligence company, OpenAI, creator of ChatGPT, where Sam Altman, their wunderkind CEO, was fired going into the weekend, and nobody quite seems to know why.
By Monday, yesterday, Microsoft had signed him on to a big job, while hundreds of Altman's former colleagues at OpenAI were demanding his reinstatement there. There's a Hollywood movie in that, but not just a movie. Remember, Altman is the guy who testified in Congress a few months ago that his company's creation does have the power to put human beings out of work, but he also put this happy spin on it.
Sam Altman: There will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. I'm very optimistic about how great the jobs of the future will be.
Brian Lehrer: Sam Altman, very optimistic about how great the jobs of the future will be, before a Senate committee in May. Sara Morrison's article out today is called Why OpenAI Blew Up and Why It Matters. We'll start there with Sara Morrison, senior reporter for Vox, who covers data privacy, antitrust, and Big Tech's power over us all. Sara, busy times for our technology overlords. Welcome back to WNYC.
Sara Morrison: Yes. So much for my Thanksgiving break. It's been a lot.
Brian Lehrer: Right, and so much for Sam Altman. Let's start there. The lead of your article says, "So, OpenAI had a weird weekend. The hottest company in tech is imploding after the shocking removal of its superstar CEO Sam Altman under still mysterious circumstances." Can you do a little 101 for listeners who don't follow this stuff? What is OpenAI, and why is it the hottest company in tech?
Sara Morrison: OpenAI is obviously an AI developer, and I guess research company. If you know anything about AI, or even if you don't, you probably know them best because they're the makers of ChatGPT, which is the chatbot that you can ask to write anything in any style, and it seems to be able to do it. That was released to the public around this time last year. It was a huge deal because people were like, "This is amazing. I didn't know we had the capability to do this."
They were really the first company, one of the first companies to just release this and show this off, and it seemed to be one of the most advanced. They're a big deal. They have I think about $13 billion in investments from Microsoft, at this point. Their valuation is something like $80, $86 billion, or it was. I don't know today. They're basically the biggest, the hottest tech company, and a technology that could change the world. Not a small thing.
Brian Lehrer: Right. We've been talking about how it can write people's college application essays without the high school seniors doing it themselves. It can put actors and writers out of work in Hollywood, all that stuff. Who is or was its superstar CEO Sam Altman? What has his role been in the development or selling of the ChatGPT artificial intelligence tool?
Sara Morrison: Selling is a great word for it. He's the CEO. He became the CEO in 2019, the company was founded in 2015, but he's always been a part of it. His thing over the last several months has been going around the world, going to various governments and world leaders and saying, "Here's this thing. Don't be afraid of it too much. It's amazing." I've heard he's very charming, very able to win people over.
He testified in front of Congress, as you showed, and they were kind of falling all over themselves to praise him and the technology, both parties. He is the face of this company, and therefore, the face of AI right now. He's also a very big deal, which is why him no longer being the CEO of that company is huge news and really shocking.
Brian Lehrer: We'll get explicitly to the firing in a minute. Yes, we played that clip of Altman testifying to a Senate committee that AI will destroy jobs, but also create new jobs. Your article refers to him even saying that his company's products could contribute to the end of humanity itself. Has he gotten specific about that apocalyptic vision?
Sara Morrison: I can't say for sure. I know there's this general sentiment amongst some people in this world that if we don't have these safe AI systems, and we don't develop and control this technology in the right way, we'll create super-intelligent artificial intelligence that is smarter and more capable than we are. I think there's worries that it'll create diseases, set off nuclear missiles, whatever. The things you see in sci-fi movies.
It's a long-term concern over artificial intelligence that you've heard from him and various others. It's like, why is the CEO of the company that makes it saying these kinds of things? I think there's a couple reasons for that.
Brian Lehrer: I actually want to replay the clip, and draw listeners' attention to something in the middle of it. It's just like a 15-second clip, and it starts with jobs, and it ends with jobs, but listen to what's in the middle, folks.
Sam Altman: There will be an impact on jobs. We try to be very clear about that. I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. I'm very optimistic about how great the jobs of the future will be.
Brian Lehrer: It will require action by government to mitigate that. Here is this CEO of this threatening, in addition to whatever else it is, new technology, on what you call an AI world tour, telling governments and others how best to regulate this transformative technology as you write. Actually, inviting regulation, which is not something that we traditionally hear corporate CEOs with a lot of money on the line doing, but I think you see it as a sophisticated game, right?
Sara Morrison: Yes. If you're the one whose company has gotten out ahead of this, and you present yourself as this ambassador of AI and you say, "I know a lot about this, so I'm in a great position to help y'all out to make sure that we're putting out a safe product and not destroying the world," but you're going to do this in a way that benefits your company. If you are the one who makes the rules, you're also the one who makes the rules that are the most advantageous to you.
What we've seen over the last couple of years is there's been more pressure on Big Tech companies in general, more of an awareness that self-regulation doesn't quite work. We've seen them say, "Just give us the rules, and we'll follow them, and they should probably look something like this." He's unusual in that he's starting from there, and that's something I think other CEOs in tech have had to get to because they've seen the need to do so. If you start from there and say, "Yes, I love regulation, I love rules. It's really important that we have them," you look a lot better, and people will probably be happier to work with you.
Brian Lehrer: You might wind up with fewer rules if they think you're acting in good faith like that. We've buried the lead so far, [chuckles] in pursuit of all this explaining, which you and Vox generally are very good at, and which we try to do a lot of here. Why was Sam Altman fired on Friday?
Sara Morrison: With all the build up of how good we are at explaining, I don't know.
[laughter]
Brian Lehrer: That's honest.
Sara Morrison: It's still a mystery. We have a board that said, "He hasn't been, I guess, completely candid about some of the things he's done, we don't have confidence in him to lead us, and goodbye." That's about all we've gotten from the board, which made this decision. It's been a couple of days now, and I haven't gotten anything else. Everybody seems to be mystified as to what that means, including, apparently, Sam Altman himself and also Microsoft, their big investor.
Brian Lehrer: From what I've read in your article and elsewhere, apparently at least it's not financial fraud. It's not Me Too. He didn't tweet hateful things like Elon Musk. He didn't lose the company money. He just maybe had a personality clash with a board. Where's the Hollywood blockbuster in that?
Sara Morrison: I think it comes down to OpenAI began as this non-profit research thing, and over time it became-- There was a for-profit, or a capped-profit, company housed within that, which put out things like ChatGPT and made a lot of money, but the board has a non-profit mission and vision. They still run that, which is unusual for a company like this. The board is the one trying--
They're making these decisions supposedly under-- We have to follow this mission of creating safe, transparent systems and research, and I guess they felt like he wasn't carrying that out effectively, or he wouldn't do that. I don't know if that's what the reasoning was, but that's an interesting wrinkle in all of this that could be a big contributor to why it's all happened. I think this is maybe a fundamental difference at this point in the vision of the board and the non-profit and the CEO, or the former CEO.
Brian Lehrer: Can you explain that non-profit vision and non-profit function of the board further? I had no idea about that personally, and I'm sure most listeners don't. This is, what did you say, an $80 billion company creating this big commercial product of artificial intelligence, but what's the non-profit part of that?
Sara Morrison: That's how it was originally founded. It is still a non-profit, just with this for-profit thing within it. The goal was, there were some very big people in tech, Elon Musk was part of the founding of this company, who were concerned over how AI was being developed and the dangers that that could create, so they wanted to make this-- Like I said, a research and development company or group that would hopefully develop or contribute to the development of safe systems in a transparent way.
That's why they're called OpenAI. Then I think over the next couple of years they realized they needed more money to be able to do that. It takes a tremendous amount of money to develop these models, billions of dollars, obviously. Sam Altman comes on as CEO around 2019, and they introduced this capped-profit thing under it, and that's where you get releasing things like ChatGPT and DALL·E, their image-generating model, and getting investments from places like Microsoft, and partnerships with Microsoft so that Microsoft can put out products based on this. You can see, when a company suddenly becomes worth, within a matter of months, I think, $86 billion, you're going to have some friction.
Brian Lehrer: Your article on this is called Why OpenAI Blew Up and Why It Matters. Maybe it's speculation at this point, but does any of this matter to the public good regarding AI versus humanity, or is it just the battle of the tech bros, Altman being fired by OpenAI and being hired by Microsoft?
Sara Morrison: Right now, it's the latter, but depending on what results from this, what comes out of this. If you have, again, the face of the biggest company and the biggest technology, potentially one of the biggest companies out there at a certain point, depending on where AI goes and what it does, him being removed, and then right now hired by Microsoft, and developing his own thing within that, you have a different company with different goals now potentially in control of whatever Altman and apparently a lot of people from OpenAI create.
What's Microsoft's vision for this? It's a completely for-profit company. Does that change the direction of how this gets developed, or where it goes, or how "safe" it is, how responsibly it's developed? Are they going to rush to put products out as soon as possible? What impact do those have? What are the consequences of that? Again, we don't know that this is several years down the line, but it could be a big deal, have a huge impact on all of us depending on what this stuff can do and how it changes the world.
Brian Lehrer: Listeners, your questions, comments, or stories from your tech company on these developments on Sara Morrison's beat as senior reporter for Vox who covers data privacy, antitrust, and Big Tech's power over us all. 212-433-WNYC, 212-433-9692. In the old days, beats had names like City Hall reporter or tech reporter. Now they have names like senior reporter who covers data privacy, antitrust, and Big Tech's power over us all.
I think that's a good thing. 212-433-9692 on Sam Altman and OpenAI, ChatGPT, on Elon Musk offending or hate-speeching Twitter into oblivion apparently, or toward it, IBM suspending advertising after its ad appeared next to some Nazi content, they said, and last week's decision by Apple and Disney and others to not be associated for now as well, because of content widely deemed antisemitic posted by Musk himself, or on the Google antitrust case, which we'll get to, or on how to regulate artificial intelligence or anything related.
212-433-WNYC. Call or text with your questions, comments, or stories from your tech company, maybe that's trying to compete with Google as a search engine. Who knows? 212-433-9692. Let's take a call right now who actually wants to refer-- Caller wants to refer to the clip of Sam Altman. Ephraim in Brooklyn. Ephraim, did I just hear you sneeze? Thank you for joining us, and God bless you if that's appropriate.
Ephraim: Hi, Brian. I was surprised by your description that Altman invited, as you said, government regulations. He did not invite. He said that government-- at least what I understood from his speech is that government has to take care of this unemployment, which would be created by [unintelligible 00:19:21] but that is not regulation. There are different means for government to-- He said mitigate. Mitigate is not regulate. What do you mean?
Brian Lehrer: Fair enough of you to point that out, Ephraim. Let's play the clip again. Let's listen to this very carefully, and I will say someone else tweeted the exact same comment. Let's listen to the clip one more time and make sure we get it right. Here we go.
Sam Altman: There will be an impact on jobs. We try to be very clear about that. I think it will require partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that. I'm very optimistic about how great the jobs of the future will be.
Brian Lehrer: Sara Morrison, I guess we could hear mitigate, either as just provide safety net services and new job trainings and things like that to people who lose their jobs because of artificial intelligence, or we can hear mitigate as try to prevent it to some degree by some kind of regulation. How do you hear it, since this is your beat?
Sara Morrison: That's one clip. It was a long hearing. I know because I listened to it. It was several hours, I think. I believe he has called for the government to step in and lay down here are the guidelines and regulations. Obviously, also, he's been very big on voluntary commitments, which is something the US government has asked for, and I believe OpenAI signed on to voluntarily.
Brian Lehrer: Meaning voluntary as opposed to mandatory?
Sara Morrison: Right. There's no law that requires these things. Voluntary commitments is almost all we have right now, which is great for him and all the other companies. I believe he signed, I know a lot of other ones did, a moratorium on these models until we can make sure that the things we develop are safe before they're released to the public. He's definitely said or paid lip service to the idea of there being guardrails or guidelines in the development and release of this technology.
Brian Lehrer: That gets us to one of the other stories in the news today, that meanwhile, in Europe, this hasn't broken out very much here, but it's a story in the last day about the future of humanity and artificial intelligence. As Politico writes this one up, "Europe's three largest economies have turned against regulation of the most powerful types of artificial intelligence, putting the fate of the bloc's pioneering artificial intelligence act on the line."
It says, "France, Germany, and Italy are stonewalling negotiations over a controversial section of the EU's draft AI legislation so it doesn't hamper Europe's own development of what they call foundation models, AI infrastructure that underpins large language models, like OpenAI's ChatGPT and Google's Bard." It says, "Government officials argue that slapping tough restrictions on these newfangled models would harm the EU's own champions in the race to harness AI technology," from Politico.
Sara, it sounds like there's tension over there, like over here, between regulators wanting to keep AI safe while also wanting their economies to profit from it as much as possible at the same time. Is that how you hear that?
Sara Morrison: Yes. That's been a big part of the issue here. Every time we get the idea that bills or possible laws are going to come out that put more regulations or guardrails on Big Tech companies in general, the business interests of these innovative, amazing, world-leading companies always seem to win out, and Europe has actually been generally better at this. They do have laws, the Digital Markets Act and the Digital Services Act, that are out there. There's two different ones.
It looked like they were going to lead on AI as well. It's been in the works for a couple of years. I think it even predates the release of things like ChatGPT, and they had to go back and figure out where generative AI, this kind of stuff, would fit in. It got pretty far in the process. It looked like, a couple of weeks ago, this was going to happen.
Then, all of a sudden, apparently we've got those three countries that are saying, like you said, "We want self-regulation instead. We don't want to put anything out that hampers the development of stuff, innovative products from us," which again is the thing that tech companies always say, "If you give us too many rules, we'll leave, or we can't do this, and then what happens?" It's interesting that that's now happened, and apparently in Europe as well. Then I guess we'll see how that plays out because, like you said, this has happened over the last couple of days only.
Brian Lehrer: Is that potentially a recipe for disaster? Maybe this is worst-case Chicken Little thinking, but a recipe for disaster, as in we didn't want to kill the golden goose, so we let it grow so powerful it killed us. There I've mixed my goose and chicken metaphors, but you know what I mean.
Sara Morrison: We're seeing how that plays out with companies like Google and Amazon and Meta and antitrust lawsuits and attempts to pass bills that have happened over the last couple of years, where we let these companies do what they wanted, figure out rules for themselves, and then, oh, no, there have been implications for society. They're now the biggest companies in the world, and we didn't really supervise or regulate them as well as we could have, or as well as we do other industries, if you want to say other industries get more of that.
I think with AI, it was like, here's a chance to maybe get ahead of this and do something beforehand. It doesn't seem like that's happening yet. I'm not saying it won't, but the fact that the EU couldn't even, so far, come to an agreement on this, something they've been working on for a long time, is not a great sign.
Brian Lehrer: Here are some text messages coming in on the clip. A lot of people want to interpret that 15-second clip. One person writes, "Maybe I'm naive, but isn't it possible that Sam Altman is sincere in his warnings to government? He does seem sincerely fearful about the possible downsides, especially as far as security, the possible HAL effect," referring to the computer in 2001: A Space Odyssey, "and the dangers to the future of humanity." That's one listener who is open to that interpretation of Sam Altman.
Another listener texts, "Bold of Sam Altman to be optimistic about the jobs of the future without actually speculating what those jobs would be." Another listener, trying to figure out if Sam Altman is really gone from OpenAI, writes, "Any insight into OpenAI board member Ilya Sutskever's change of heart, initially supporting Altman's ouster and subsequently signing the petition to quit if he's not restored?" Is he maybe going to be restored?
Sara Morrison: Right now, look, obviously, there's a big push to do that. The people who work there, almost all of them, I think, at this point, have signed something saying, if he's not restored, we're going to leave. He wants to go back, and Microsoft, who offered him a job that apparently he accepted, also wants him to go back. That's a lot of pressure. At this point, there were four members of the board; now, there are three who are still, as far as I know, holding firm on this.
It's three people against everybody including, again, the company that invested a lot of money into OpenAI. It's very much an open question of if he's the CEO of OpenAI tomorrow or not. I don't know the answer to it. Like I said, certainly possible. Absolutely.
Brian Lehrer: Callers are all over the place on this. We'll take another one or two right now. Andy in Westchester is saying, "There are 195 governments in the world, and many are headed by people who don't want regulation, don't understand this new reality, or are poor or developing. How can we trust these heads of state to regulate anything?" I guess he's implying that even if the United States does, that doesn't mean it's going to keep the worst possible effects of AI bottled up. Maria in Sunset Park, Brooklyn wants to add something important too. Maria, you're on WNYC. Hello.
Maria: Hi, good morning, everyone. Thank you for taking my call. The first thing that popped into my head as I turned on the radio and the subject started to be dealt with was, where are we in preparing for this? Particularly neighborhoods like Sunset Park, which are historically working-class neighborhoods. Our education has left our children behind for decades; it has improved here, but we're still way behind. Where are the education people? Are they even thinking about this, even interested in this?
How do we make sure the group of people who sit in those rooms, who are preparing the algorithms, who are directing traffic, make this technology, which is something earth-shaking, and any other technology, be for the benefit of all of us? I don't see it. I hope other people out there know more about this, and they see that diversity out there. My fear, and the fear of a lot of people in the neighborhood with whom I talk and organize, is that it is more of the same, that it continues to be that class domination where some people can succeed, some people can lead, some people will have their hands on what other people do not.
Jobs, prosperity, stability are central in our thinking, and we battle with that every single day. Think of the money that we have all spent, here in the United States or elsewhere, to mitigate the consequences when the people sitting in the room are not diverse, don't come to the table with different ideas and different perspectives. Where is this in the conversation? Is there anyone thinking about this? Are the people who think of how our children should be educated and prepared for the future even thinking about this? Or are they still with their heads in the sand?
Brian Lehrer: Maria, thank you. Thank you for that context and that question. Sara, is there any answer to that question that's knowable at this point?
Sara Morrison: I think if you're asking the people who are developing it, are they thinking about it, or do they care that much about it? I don't know. I think they say they do. I do believe that in Biden's AI executive order, there were some things that mention this or consider implications for labor and education and things like that. Again, I don't know how those play out, but it is something that some people who are responsible for some things are thinking about. It seems everything is in the beginning stages. Again, I don't know how it plays out, but she's not the only person who's thinking about that or worried about that.
Brian Lehrer: Maria, thank you. Keep calling us. All right. We're late for a break. We're going to take the break. This is obviously such a rich topic. We could talk about it all day. I see one, possibly two, more calls I really want to take bringing up different points. One says she has a cautionary tale regarding AI in her own business. I do also want to get at least to the Google antitrust trial, in which the government rested its case this week.
As we continue with Sara Morrison from Vox and your calls and texts, stay with us. Brian Lehrer on WNYC, as we talk about Sam Altman's firing as CEO of the artificial intelligence company, OpenAI, and other news from the tech world of the last few days with Sara Morrison, who covers that kind of thing for Vox, again, her beat is data privacy, antitrust, and Big Tech's power over us all. Let me get at least one more call in on this before we go to the Google antitrust case. Tayana, calling from Rikers Island. Tayana, you're on WNYC. Thank you so much for calling in.
Tayana: Hi. Thank you so much for taking my call. I don't have much time left. Can I first start by just saying the name of my business?
Brian Lehrer: Sure.
Tayana: Omega The Warrior LLC, O-M-E-G-A T-H-E W-A-R-R-I-O-R. I just want to piggyback on what the last person said. Are the people thinking about this when they do it? Absolutely. Because I was one of the people that they thought about, and they just used me as a stepping stone in order to keep going. I just want people to be aware of the dangers.
Brian Lehrer: Can you say at all what happened with AI in your business?
Tayana: Yes. You have to be careful because everything is connected. AI is so brand new that you can literally grab anything and [unintelligible 00:34:00]
Automated Voice System: Thank you for using Securus. Goodbye.
Brian Lehrer: Oh, I'm so sorry that happened. When you call from Rikers Island as an incarcerated person, you have a certain amount of time on the phone, and then you get cut off. I apologize to Tayana because it took us just a little too long to get to her call. I don't know if she's allowed to call back. If she does, we'll put her right on. Sara, I don't know if there's anything you can say from the little that we heard, but I know she was telling our screener that her business, which she just shouted out the name of there, was somehow a victim of artificial intelligence, and she says she lost out as a result. Obviously, we don't know any details, but could you extrapolate from that, that there are many stories that could start out that way already?
Sara Morrison: Yes, of course. Not to be super negative, because I think I have been. As Altman said, there will be new jobs that are created as well. I don't know if there'll be as many as this technology potentially takes away. I don't know if this person's job will be the same, but this is also "progress". We don't have people who are making horses and buggies anymore, really. They make cars. It's not a super sympathetic outlook. I don't want to put forward the idea that everything just goes away, none of us have jobs anymore, and it's all controlled by computers. I don't think that that's the truth yet.
Brian Lehrer: In fact, here's a contrary point of view to a lot of what we've been hearing from Antonio in Oakland Gardens, Queens. Antonio, you're on WNYC. Hello.
Antonio: Good morning, Sara. I was telling the screener, essentially, that I'm a web developer, and web developers, like all programmers, want to solve problems, and they want to do it in a good way. This alludes to my question. Why does it always seem to be that AI is put in this cartoon-like Terminator framing, like, yes, they're going to just eliminate us? Right to 1,000, we go to 1,000, right to that point, because as a programmer, I like to solve problems, and with AI, as they would evolve, they would be like, "Wow, there's so many industries that these hairless apes have." I'm just trying to be funny.
Brian Lehrer: We are the hairless apes, yes.
Antonio: Things that are very interesting problems to solve in the health industry, transportation, food distribution. Their brains would go haywire. That leads to my second observation, which is, if they became so sentient, which everyone is afraid of, when they become like us, well, we've been scouring the universe for another similar sentient being. It would seem illogical. Why would you eliminate another form of sentience just because some cartoon-- That's it, pretty much. I think those two, that's what I wanted to get across. Thank you. This is a really interesting topic.
Brian Lehrer: Thank you, Antonio. Yes, that would be illogical. Thank you, Mr. Spock. Any response you have to that, Sara, is welcome, but also, couldn't AI, because humans do put it out in the world in the first place, be programmed to, in some way, do good for humanity?
Sara Morrison: Yes, I think when you see people like Sam Altman saying we want it to be safe and responsible, that's what they're getting at. I'm not so much focusing on these existential risks of ending humanity, these things getting so powerful down the line that they do that. I think there are more immediate ones that we're already seeing now, where, like the people who called in said, jobs are going to be eliminated.
There are issues with these things training on a tremendous amount of data. Are there biased things, discriminatory things being put out there? Those things have come up, too. Again, these things are coming up now, and I don't know that we're ready to deal with them, or are effectively dealing with them, let alone whatever comes down the line. On the other hand, if you've tried this technology, it's pretty cool. It's really impressive. Good and bad.
Brian Lehrer: Yes, and you make a point about humans programming AI, even if they have good intentions in the way they unleash it on the world, who are the people in the tech world who have this power? They're so disproportionately white and male and fairly well off that they might have a lot of blind spots about what the impact of the technologies would be on people who are not those things, even if they're not intending to try to hurt them. All right.
Sara Morrison: Yes, exactly.
Brian Lehrer: Next topic, you've been following the antitrust trial against Google, which the government just concluded its case on. Remind us of the premise.
Sara Morrison: When you open up your search engine or your browser, your phone, whatever, almost always, unless you're using Microsoft, it's going to be a Google search engine that you use on Safari, Firefox, Chrome, obviously, which is owned by Google. The government is maintaining that a big reason why, and why Google has something like 90% of the search engine market, is because Google pays all of these companies to make their search engine the default, the thing you open. You can choose different ones, but Google is what you open with, and that's the one people are just going to stick with.
The government's case is that this is making it almost impossible for anyone else to compete. Google has a search engine monopoly, but they're illegally monopolizing or creating just too high barriers for anyone else to compete to preserve that. That's their case. Obviously, Google is saying, "No, we don't."
Brian Lehrer: Does a case like this matter to the public good, as in regular people who just want a good and safe search engine to use, or does it only matter to other tech entrepreneurs who want to make money by competing more easily with Google?
Sara Morrison: Well, we can see obviously why the potential competition, or actual competition, if there's any, is upset. The impact on people, I think, is a little harder to see, because a lot of this is all the things that we don't get, because companies are just like, "We're not even going to bother, because we can never get a foothold in this industry. We don't have $26 billion to pay off these companies to get our thing in there." Potentially, there's a bunch of stuff we don't have.
With Google not having real competition, the case is they're not really innovating as much as they could. Again, there's a lot that we're not getting. There's also some stuff in there about how a lot of websites, just about everything, construct or make their products and their websites to be picked up by Google, because that's the one you have to do it for, and if you're placing ads in search engines, you're going to do it for Google.
There's a lot of things that happen because of that. The internet looks the way it does because all these websites are trying to get picked up by Google. A lot of stuff that we don't realize is influenced heavily by Google is because they have 90% of this market. There are impacts on people, but it's a lot harder to see and to find those than it is for the companies who were like, "We can't make a search engine, and we want to."
Brian Lehrer: Your article is called The Secrets Google Spilled in Court: What we learned and didn't learn from the big Google antitrust trial. We only have time to pick one secret that you learned about in court out of the six in your article. You can pick the one you think is most important. I'm leaning toward your secret number two, Google's secretive deal with Apple gets a little less secret, but you pick.
Sara Morrison: The headline says spilled. That one was spilled because apparently a Google witness wasn't meant to reveal it. Google asked the judge to redact or seal a ton of information, so we didn't get as much as we could have, but this was something that, I guess, a witness said: Apple gets 36% of the revenue from search ads that people see, I believe, through its browser. Apple is a huge part of this because of Safari, and the iPhone is like 50% of mobile.
We still don't know how much Google is paying Apple, but it's probably a big chunk of this. Now we just know a little bit more about how this deal is structured. Yes, it'd be 36%, but I think if you're the general public, maybe that $26.3 billion that they pay everybody combined is a number you can better wrap your head around in terms of what it means.
Brian Lehrer: Two giants like that making that kind of a deal makes it even harder for new competition to spring up. Sara Morrison-- Go ahead.
Sara Morrison: Including the competition that is Apple, because Apple is not going to develop a search engine either if they're getting that much money from Google to use theirs.
Brian Lehrer: Sara Morrison, senior reporter for Vox, who covers data privacy, antitrust, and Big Tech's power over us all. Thank you so much for this.
Sara Morrison: Thank you, and happy Thanksgiving.
Brian Lehrer: And to you. Brian Lehrer on WNYC. Much more to come.