A 'Big Tech' Child Safety Hearing in Congress

[music]
Brigid Bergin: This is The Brian Lehrer Show on WNYC. I'm Brigid Bergin, senior reporter in the WNYC and Gothamist newsroom, filling in for Brian today. Happy Groundhog Day to all who celebrate. Maybe you already heard, but our local rodent, Staten Island Chuck, did not see his shadow. As the legend goes, we'll be having an early spring. Hooray.
On today's show, Manhattan Borough President Mark Levine joins us. He's got a proposal to remove old sidewalk sheds that seem to linger outside some buildings for years. We'll talk to him about that and a few other city issues. Plus later in the show, Craig Newmark, he's the Craig of Craigslist, will be here along with Graciela Mochkofsky, who is the dean of the journalism graduate school at CUNY. They'll tell us about a big donation from Craig's foundation that will allow the school to soon go completely tuition-free. Amazing.
We'll wrap up today's show with a call-in for anyone who works a four-day workweek. Whether you're cramming five days of work into four or working reduced hours, we want to hear how the four-day workweek is going for you. First, senators called on the CEOs of five major tech platforms to testify before them on the issue of child safety online this week. Featuring testimony from the bosses of Meta (the parent company of Facebook and Instagram), TikTok, Snapchat, Discord, and X, formerly known as Twitter, the scene was what some of us have come to expect from these high-profile hearings.
Senators grilled these tech giants on the lack of transparency when it comes to social media and the harm it could potentially cause kids, but this time they focused pointedly on child sexual exploitation. To add to the highly charged atmosphere, survivors of exploitation and family members of victims were in the audience. Here is Senator Dick Durbin with his opening remarks, referencing something called CSAM, which stands for Child Sexual Abuse Material.
Senator Dick Durbin: Discord has been used to groom, abduct, and abuse children. Meta's Instagram helped connect and promote a network of pedophiles. Snapchat's disappearing messages have been co-opted by criminals who financially extort young victims. TikTok has become a "platform of choice" for predators to access, engage, and groom children for abuse. The prevalence of CSAM on X has grown as the company has gutted its trust and safety workforce.
Brigid Bergin: While some lines of questioning from senators might have looked like grandstanding, the political reality is sobering. Congress has tried and failed for years to overturn a 1996 law, which gives online service providers broad immunity from lawsuits over their users' posts. Joining us now to explain that law and offer analysis of Wednesday's hearing is Will Oremus, technology reporter at The Washington Post. Will, welcome back to WNYC.
Will Oremus: Thanks for having me.
Brigid Bergin: We're going to get into the specifics, but first, I want to reflect a bit on who was sitting before the senators on the dais and how you'd characterize the hearing overall.
Will Oremus: This was the latest in a string of hearings where tech executives have been called to testify by Congress. As you mentioned, it gives lawmakers a chance to grandstand. They can also ask questions and get answers to things that might be hard to get answers to in other contexts. This hearing was unique in that there were families of victims and there were survivors of sexual exploitation in the audience. In fact, many of them were arranged right behind the CEOs. There was an emotionally charged atmosphere that made it hard for anyone to be dismissive of the existence of the problem, regardless of how they felt about various efforts to solve it.
Brigid Bergin: Sure. You write about how, from the start, senators of both parties focused their criticism on the law that Congress passed in 1996, a law that paved the way for social media as we know it. Can you tell us more about this statute, the so-called Section 230 of the Communications Decency Act?
Will Oremus: Yes. Back in 1996, this is the era of AOL and Motorola flip phones. Social media as we know it didn't exist. There were starting to be online forums on internet service providers like CompuServe and Prodigy, where people could go and have chats on various topics or read news updates or that kind of thing. At the time, Congress was passing a big telecom bill, and as part of that telecom bill, there was rising concern among lawmakers of both parties, maybe especially on the right, about the prevalence of pornography online.
There was concern that kids would be able to get access to that pornography. Again, to go back to that time, this was the time when, in a convenience store, there would be Playboy magazines on the top shelf with a wrapper over them.
Brigid Bergin: Sure.
Will Oremus: All of a sudden, there's this new technology where anyone can log on, and there's pornography, there's all kinds of stuff on there. They were concerned about that, particularly about kids getting access to it. As part of that Telecom Act, they came up with something called the Communications Decency Act. A lot of this act has been mostly forgotten, but it was actually aimed at protecting kids, at least nominally.
What it did was it criminalized the knowing transmission to children of pornographic or other lewd material on the internet. Now, most of that law was struck down the next year by the Supreme Court as violating the First Amendment. They found that there was no way to enforce it without also chilling all kinds of legitimate speech. One little obscure section of it survived, and that was called Section 230.
This was a section that tried to address a problem that was arising: when someone would post something online that was libelous or that violated the law, something that ruined someone's reputation unfairly, there would be a lawsuit. The person suing would sue not only the person who posted it, but also the platform that hosted it. They would sue Prodigy, or AOL, or CompuServe and say, "Hey, look, you published this libelous material," just the same way you might sue a book publisher if they published a book that included libel about you.
Courts were conflicted over whether the online service providers should be treated as the publisher of that material or not. Should they be held accountable for the libel as well? The lawmakers came up with this section that said that the online service provider will not be treated as the publisher or speaker of material that their users post in most cases. That's true whether they moderate the material or whether they choose not to moderate. Either way, they're not going to be held liable.
Now, that has been the lasting legacy of the 1996 Communications Decency Act because that part didn't get struck down by the courts. The result has been that whenever you try to sue an online platform for something their user posts, they're going to say, "Look, Section 230 protects us. You can't sue us for this. You can't even take us to court over it." That has enabled the rise of social media. It has enabled companies like Facebook and Snap and Discord, the companies at this hearing, to exist and to get huge because generally speaking, with some exceptions, they're not really responsible for hosting all kinds of harmful content.
Brigid Bergin: Will, you made it clear that this statute is the legacy of that act, but that's not to say it hasn't been challenged before. [clears throat] Excuse me. Not to mention some legislative pushes, even an executive order by President Donald Trump. Can you tell us briefly what kinds of challenges it has survived and whether that's unusual in any way?
Will Oremus: Yes. It survived challenges from the very start. There's a great book on this by Jeff Kosseff called The Twenty-Six Words That Created the Internet, referring to the key 26 words in Section 230. Section 230 has lots of fans, people who believe that without it, social media wouldn't have been able to exist. These fledgling online forums would've been snuffed out because they would've had to be defending themselves all the time against all kinds of lawsuits. Can you imagine if Facebook could be held liable in court for everything its users post? It would just be in court all the time. Right?
Brigid Bergin: Absolutely.
Will Oremus: Section 230 has a lot of backers, particularly in the tech industry, in the social media industry. Over the years, it has also acquired a lot of critics. It has survived challenges in court. In 2018, there was actually a law created that carved out a certain type of material from Section 230 protection, and that was called SESTA/FOSTA. This law said that if it's material related to facilitating sex trafficking in particular, then those Section 230 protections in many cases won't apply.
You won't get to say as a platform, "Oh, well, you can't take me to court for hosting sex trafficking." You can still say that for other stuff, but not sex trafficking. Then in 2020, President Trump started attacking Section 230. He was upset about it for a different reason. He was upset at the power that social media companies have acquired to moderate content. In particular, he was upset that Twitter applied a fact-checking label to one of his tweets.
He felt that these social media companies have too much power to decide what people can and can't post online because, again, Section 230 protects their ability to moderate content as well. He issued an executive order. He threatened to hold up a defense bill unless Section 230 was weakened. Now, Biden undid that executive order, but even President Biden has questioned whether Section 230 has gone too far. Has it been too widely applied to say that tech platforms are basically not responsible for anything, and should that be revisited?
Brigid Bergin: Will, you also write about how much of the hearing was focused on social media as a dangerous gateway to child sexual exploitation. I should note that other senators mentioned bullying, the promotion of self-harm, and eating disorders, as well as a means for kids to buy drugs online. The big problem senators were trying to address is that these companies are not forced to disclose the data on potential harm. Did the hearing get any answers out of these tech CEOs about what data they're willing to provide?
Will Oremus: Not exactly. [chuckles] I should note that, as anyone who has watched one of these hearings knows, there are a lot of questions and fewer real answers, both because the tech CEOs have an incentive to evade answers, and also because sometimes the lawmakers will get very worked up and ask a question and not even give the CEO time to answer. What they really want to do is be seen asking the question; they don't actually care what the response is.
I wouldn't say we got a ton of enlightening answers, but what we did get, even before the hearing started, are what Senator Dick Durbin wryly referred to as deathbed conversions, where companies suddenly introduce new policies aimed at protecting children. Snap, for instance, decided to endorse one of the major bills around kids' safety online, the Kids Online Safety Act, in the week before the hearing.
They want to have something to talk about, something to point to when their CEO goes up there to say, "Look, we are doing something. We do care. We're working on it. You don't have to do something drastic like repealing Section 230. We're going to partner with you on this."
Brigid Bergin: Listeners, I know we have a lot of parents who are concerned about their children's safety online who are listening now, whether we're talking about sexual exploitation or more insidious issues that social media might cause. Senators cited the promotion of self-harm, eating disorders, and access to drugs in this hearing. What are you as a parent navigating for your kids online? Do you want to share, or is there something that you think might help? The number is 212-433-WNYC. That's 212-433-9692. You can call or text at that number.
Will, I want to get into what the senators were specifically calling for in a bit, but as someone who has been covering these types of hearings, you talked about how there was a time when members of Congress would routinely preface their critiques of tech CEOs by thanking them for their innovative products and the jobs their companies have created. It sounds like the hearing this week was different.
Will Oremus: Yes. When they first started calling tech CEOs to testify, that was a normal thing, especially Republicans, but members of both parties would say, "Mr. Zuckerberg, thank you so much for-- you've created this great American company, you're part of this dynamic American tech sector that ensures we're world leaders in technology, but we do have concerns about this or that." Now those niceties are largely gone. For lawmakers these days, I think the incentive is to portray yourself as being tough on big tech.
There's been a real turn in public opinion over the years. I wrote years ago that the rise of the term "big tech" itself was a bad omen for the tech industry. We don't start calling an industry big until we're also considering it to be potentially bad. Think of how Teddy Roosevelt railed against big business.
Brigid Bergin: Sure.
Will Oremus: That was maybe the first one. There's Big Tobacco, Big Pharma. That term came along maybe in the mid-2010s, when we started calling these companies-- We used to call them startups, by the way. Google was called a startup up until maybe about 2010. All of a sudden, they're big tech. Now, as a lawmaker, you want to be seen standing up for the little guy, particularly in this case, when the little guy here is kids.
This is a real problem, by the way. There's an element of moral panic to this, but there's also a very real problem of kids being sexually exploited online. It is not just an American problem either. This is truly a global problem. If you look at the 80-some million reports of online child sexual abuse material made to the federal clearinghouse just last year, 90% of them actually originated outside the United States, with the most coming from Asia.
It's everything from graphic videos of sexual assault of kids circulating online, to teens who are extorted by sexual predators, who maybe pose as-- if it's a teen boy, maybe they pose as a teen girl of the same age, ask him to send a naked photo, and then once they've got that, they use that to blackmail him to send more stuff or to get his friends to send stuff or blackmail him for money. This ruins people's lives. It is a very real problem. It is understandable that there's anger and outrage, but getting to the solution is the really hard part.
Brigid Bergin: It is such tough stuff. Will, there was a line of questioning from Democratic Senator Amy Klobuchar of Minnesota about how tough it is for parents to monitor their children's social media use. In response, Mark Zuckerberg talks about how the onus shouldn't be on the social media companies, but on the tech companies that control the app stores in the first place. Let's listen to about a minute of their exchange.
Mark Zuckerberg: I don't think that parents should have to upload an ID or proof that they're the parent of a child in every single app that their children use. I think the right place to do this, and a place where it'd be actually very easy for it to work, is within the app stores themselves. My understanding is Apple and Google already, or at least Apple, already requires parental consent when a child does a payment with an app.
It should be pretty trivial to pass a law that requires them to make it so that parents have control anytime a child downloads an app, and offers consent to that. The research that we've done shows that the vast majority of parents want that. I think that that's the type of legislation, in addition to some of the other ideas that you all have, that would make this a lot easier for parents.
Senator Amy Klobuchar: Just to be clear, I remember one mom telling me, with all these things she could maybe do that she can't figure out. It's like a faucet overflowing in a sink, and she's out there with a mop while her kids are getting addicted to more and more different apps and being exposed to material. We've got to make this simpler for parents, so they can protect their kids, and I just don't think this is going to be the way to do it.
Brigid Bergin: Will, what point do you think Zuckerberg was making in this case? Is he just basically kicking the can down the road or shifting blame to someone else, or is there some validity there to what he's suggesting?
Will Oremus: It's both. There are a number of different bills on the table to address this. We're at the point where pretty much everybody agrees that social media can be harmful to kids, and that maybe something needs to be done to better protect kids online. The question is, what is it that we're going to do, and how are we going to make sure that it actually protects kids, first of all, and that, second of all, it doesn't end up doing more harm than good in the long run.
One of the proposals that's on the table is something called age verification, where you require in some way that apps make sure that someone is 18 or over if they're going to target them with ads or targeted recommendations. Maybe you make sure that they're between 13 and 17 if they want to use the app at all, and then there'll be certain restrictions if a kid is a teenager of that age, and then you want to make absolutely sure that nobody under 13 is using the app.
Then the question is, how do you verify their age? One of the amazing things about the internet, from the start, has been the possibility of anonymity. There's that classic New Yorker cartoon, which is, on the internet, nobody knows you're a dog. On the internet, nobody knows if you're 12. A lot of lawmakers would like to see that changed. Well, okay, so how do we do that? One of the ideas is to require the apps themselves, so like Facebook is an app, Snapchat is an app, to make sure that all of their users are 13 and up in some form, maybe require their parents to upload some kind of ID. It starts to get invasive pretty quickly when you think about how do you ensure beyond just asking them to check a box that says I'm over 13.
Brigid Bergin: Sure.
Will Oremus: How do you make sure they're telling the truth? What Mark Zuckerberg is saying here is that instead of the apps, his apps, Facebook and Instagram and WhatsApp, having this responsibility, how about we make the mobile platforms responsible? Apple's iOS already knows a lot about its users, so why don't we make Apple be the one who has to ensure that a user is 13 or up to use certain apps, or that they're 18 or over to use other apps?
Why don't we make Google-- they own the Android platform. Wouldn't that be a lot simpler? That's his suggestion. There is some validity to it. In some ways, it might be simpler. It also is absolutely shifting the blame.
Brigid Bergin: Sure. I want to bring some of our listeners into the conversation. Let's go to Raz in Queens. Raz, you're on WNYC.
Raz: Hey, good afternoon. Great discussion. I'm a millennial, and also I work at a big tech company, and I also have kids as well. I think, on balance, what I've realized is it's really an asymmetric bet letting your kids use social media. I see very minimal upside and mostly downside to allowing young people on these platforms.
In addition, I think the risk that AI presents now with people being able to use your images and videos for all sorts of nefarious purposes, that has created an additional risk for allowing kids' content and images online. For our family, we made the decision, it makes no sense.
I do think, to the point about the apps creating friction, I think it's good to create friction to let kids create social media accounts. You don't want it to be a frictionless experience because the risk that's posed on the other side is so great. It shouldn't be easy to just with one click create a profile. Just wanted to provide that insight. Thank you.
Brigid Bergin: Raz, thanks so much for that call. Will, any reaction? Raz is calling from the place of both a parent and someone who works in this industry. Do we still have Will?
Will Oremus: Sorry. I was muted there. I didn't want to interrupt your caller. A lot of parents these days are asking that question. I'm a parent of a kid. I'm certainly wary of having any images of my kid online these days, let alone using social media. Thankfully they're not of that age yet, but the question is, first, should kids be on social media at all? Second, if the answer is no, how do you stop them? That's easier said than done.
One of the things is that if you ask teens, a lot of them will say, "We do want to be able to use social media, but we are concerned about these algorithms that keep us addicted, that keep us scrolling. We're concerned about people exploiting our images. We're concerned about how these apps work. We do worry we spend too much time on them." A lot of teens, I think, would rather have help getting control of their online lives as opposed to just being barred altogether from the social media world.
One of the things that comes up in these debates is that, particularly for some of the most vulnerable teens, kids who are LGBTQ, kids who are struggling with gender identity, kids who are struggling with loneliness or depression, social media can make your mental health worse, but it can also be a hub for connection. It can be a way to find your people. It can be a way to find information that you can't find elsewhere.
Think also about kids in other countries where maybe there aren't as many free speech protections and there isn't a way to get information about reproductive health, about what they're going through, or kids in states where there are laws about reproductive health that prevent them from accessing information. Social media can have a lot of good sides, and it also has a lot of bad sides. It is a tricky question: how much should kids be on social media, and is there a way to put more friction, as the caller said, into their experience? Is there a way to make it a little safer than that of adults?
Brigid Bergin: If you're just joining us, I'm Brigid Bergin in for Brian Lehrer today. We're going to take a quick break. More with my guest, Will Oremus, from The Washington Post, on this recent hearing with tech CEOs and really unpacking some of the limitations to our current regulations around social media. Plus, of course, your calls coming up. Stay with us.
[music]
Brigid Bergin: It's The Brian Lehrer Show on WNYC. If you're just joining us, I'm Brigid Bergin in for Brian Lehrer today, and my guest is Washington Post technology reporter, Will Oremus. We're talking about this week's Senate hearing with big tech CEOs on child safety online. Will, in one of the most extraordinary moments, Meta CEO Mark Zuckerberg, of course, as we've mentioned, that's the company that owns Instagram and Facebook, apologized to victims and families present after a lot of pressure from Republican Senator Josh Hawley of Missouri. I want to take a listen to a clip that's a bit hard to hear at some points, but definitely worth listening to. Senator Hawley speaks first.
Senator Josh Hawley: Would you like to apologize for what you've done to these good people?
[applause]
Mark Zuckerberg: I'm sorry for everything you've all been through. No one should have to go through the things that your families have suffered. This is why we invested so much and are going to continue doing industry-leading efforts to make sure that no one has to go through the types of things that your families have had to suffer.
Brigid Bergin: If that was a little hard for anyone to catch, Zuckerberg begins with, "I'm sorry for everything you've all been through. No one should have to go through things that your families have suffered, and this is why we've invested so much." Will, do you want to just reflect on that moment a little bit and what it made you think of?
Will Oremus: For one thing, Mark Zuckerberg has come a long way from the CEO who was often described as robotic in his younger days. Under intense questioning from Senator Hawley, and it was a heated exchange at times, he seems to have made the call, I don't know if it was premeditated, to get up, turn around, face those parents, and make that acknowledgment of what they've gone through.
I don't know if I would quite call it a real apology because he didn't accept responsibility. In the next beat after that clip, he says, "And that's why we at Meta have all these industry-leading efforts to protect kids online." Part of the context for this was a legal complaint in Massachusetts, unredacted in November, that alleged that some of Meta's other top executives, including Adam Mosseri, who runs Instagram, and Nick Clegg, their president of global affairs, had urged Zuckerberg to take more action to get ahead of this issue and devote more staff and resources to addressing bullying, harassment, and suicide prevention, and that Zuckerberg had pushed them off and basically failed to prioritize it. That's part of why he was getting grilled at this hearing over these issues.
Brigid Bergin: It seems like the media coverage and the reaction, at least online, has been that there was some admission of guilt on the part of Zuckerberg. Do you think there is potential for any lasting implications?
Will Oremus: Well, again, I think where the rubber meets the road here is in the actual legislation. Is any of it going to pass? If it does, what will the practical effects be? One of the key laws that's under discussion is called the Kids Online Safety Act. This would impose on online platforms something called a duty of care. Basically, it would say that they have to take reasonable measures to protect kids from stuff like content that encourages eating disorders, content that's trying to sell them drugs or tobacco, sexual exploitation.
This has gained bipartisan support, and it seems to be marching along, and yet there are critics who worry that it is too vague. There are critics who say that this could be used to chill all kinds of legitimate expression. In particular, some people on the left are very concerned that one of the bill's co-sponsors, Senator Marsha Blackburn, Republican of Tennessee, has said that part of her goal with this bill is to protect kids from being exposed to material about gender identity.
There are some on the right who equate discussions of gender identity with sexual grooming and sexual exploitation. Is that going to be a way for that agenda to get forced into law, and not only that, but to dictate how social media companies behave and the type of material they filter? It's controversial. It also enjoys a lot of support. There are other provisions in the law, like requiring them to do certain types of filtering for kids between 13 and 17. That's one of the big ones. There are a few others on the table, including ones that tackle parts of Section 230, that statute we talked about earlier that shields tech companies from all sorts of liability.
Brigid Bergin: Will, I want to make sure that you know and our listeners know that I see we have a lot of callers who want to weigh in on this topic, some very concerned parents who want to share their own strategies and who have some questions, I think, for you and for our broader community. While we are on this topic of the legislation, I just want to stick with this for a minute. The bill that you just mentioned, I think you've also written about how there's a package of five bills in total.
Another piece of legislation that would address Section 230, as I understand it, is the EARN IT Act, which could specifically roll back the Section 230 protections. Can you explain how that would work and what it would apply to, and then also the status of some of these bills?
Will Oremus: Yes. The Kids Online Safety Act is one. There's then a package of five bills that actually have been advanced by the committee that held this hearing. That's the Senate Judiciary Committee. These five bills were advanced by the committee in May, but they haven't come up for a floor vote. The Senate's leadership hasn't decided that they're a top priority or doesn't think that they have the votes to pass it or doesn't want it to pass, one of those three.
There was some frustration from lawmakers on this committee that their bills haven't moved any further. There also aren't yet companion bills in the House, so they would face another tough road there on the way to passage. One of those, as you mentioned, is the EARN IT Act. This would be a little bit analogous to that SESTA/FOSTA bill I mentioned earlier around sex trafficking. It would roll back the Section 230 protections when the content in question is child exploitation content.
You can still get Section 230 protections if the content is libel or that sort of thing, but you don't get the protections anymore if the content has to do with sexual abuse of children. There's also the STOP CSAM Act. That's another one of that package of five bills. That would actually create a cause of action, making it illegal in some ways to host this type of content, so that not only would the platforms not have those 230 protections, but there would be an actual statute that victims and their families could refer to when they file suit against these companies. None of these bills yet have come up for a floor vote. Even though there is bipartisan support, which is unusual these days, the strength of that bipartisan support and its ability to survive final votes in the House and Senate is not yet clear.
Brigid Bergin: Desiree in Park Slope, thanks for holding. Welcome to WNYC.
Desiree: Good morning. I wanted to start by saying that I am a child of farm life, and I'm also a child of New York City. I have lived in each place for pivotal moments of my life. I'm a librarian. I work with young people all the time. I was an academic librarian previously. Now, I'm a broadcast media librarian, so I understand how broadcast media uses social media specifically to influence young people.
What I want to say is this. My grandmother used to say that you can't buy good home training. I'm going to paraphrase it and say that you can't legislate good home training because the person who calls to talk about how his family dealt with social media and his children, that is literally the only way that you are going to solve this issue. You cannot legislate what children do behind their parents' backs. You can't filter it. People have been trying to filter since the web was born because I've used technology since I was 12 years old.
I feel like the whole conversation about what's happening in the Senate, which is a performance, not an actual hearing, is a deflection. It's a deflection from the fact that children are asked to do dangerous things literally every day. They ride the subway by themselves at 12, 13 years old. Children on farms, I drove a tractor when I was 11. Those are things that you teach children to make them safe because you don't have the option to keep them away from everything that's dangerous.
If a parent doesn't understand all the apps and technology their child is using, then that means it's time for that parent to get themselves to the public library so they can take a free class and learn. Because you're raising children in an age of technology. You need to understand how the technology works. It's not enough to try to legislate away bad people. Bad people exist. Children have been abducted before there was a World Wide Web.
Brigid Bergin: Desiree, thank you for your call. It's great to know that there is an option available at public libraries to help people who want to get educated about how to use some of these social media apps. Although, Will, as we know, not every parent is equally able or engaged, and those kids still deserve the same protections. I want to go to Abby in Flatbush, Brooklyn. Abby, welcome to WNYC.
Abby: Hi. This is, I guess, building on what Desiree just talked about, but first of all, Brigid, you always do such an excellent job when filling in for Brian. Thank you for doing that.
Brigid Bergin: Thanks, Abby.
Abby: I just wanted to ask about education and whether that is at all a component of what anybody is talking about. I have two kids, a first-grader and a fifth-grader. During the pandemic, my fifth-grader especially, who was in first grade at the time, was handed this device, and there was no training or any kind of education around the Internet or how to responsibly navigate it.
Obviously, age-appropriate education, and whether that is being incorporated into the thinking and not just blocking kids from using it, but doing what Desiree was talking about, but on a larger scale and using the education system to say, "This is something that is now part of society, and children need to learn to responsibly use it or to be more savvy about it, and how do we make them part of the solution."
Brigid Bergin: Abby, thanks for that question. Will, I'd love to get your reaction to both Desiree and Abby. Just this idea that there is some responsibility for parents to be more tech literate, but also that there needs to be a certain component to help make sure that children who are just handed new tools understand how to use them appropriately.
Will Oremus: Yes, I think, absolutely, digital literacy has to be part of the solution and good parenting. If you're a good parent, you absolutely should be engaged in what your kid is doing online. That's critical. I think the question that lawmakers are asking that is valid is, should all of the responsibility be on the parents and the kids, or should some of the responsibility also be on the tech platforms? I think Desiree did make a really good point, which is that you can try all you want to filter the Internet that kids see. History teaches us they're going to find a way around that.
To the point that you can't do anything about it from a regulatory perspective, there are some valid responses to that. Georgia Senator Jon Ossoff, a Democrat, made this philosophical argument in the hearing, where he said, "Look, we know that social media is a dangerous place for teens. We also know that social media companies, by their business models, are incentivized to attract more teens and to keep them engaged and to keep them spending more time." You might also add to that that one of the ways that social media companies have found to keep kids engaged is to show them content that upsets them or outrages them or disturbs them or plays on their anxieties.
In that sense, you could see this as a broken system where the companies, in order to make their bottom lines go up, almost have to keep showing kids content that is potentially harmful to them and keep them on their phones more than the kids want to be. There's this idea that they exploit our brains to make us keep scrolling. Is there anything that public policy can do about that? I think is a legitimate question.
Brigid Bergin: Will, I want to read a text from one listener before we wrap up here. This listener writes, "Parent of a 13-year-old here. Apple provides very little help to parents in controlling their kid's use. My daughter doesn't have any social media apps, but loves YouTube and would watch videos 24 hours a day if she could. Apple lets parents disable apps during certain hours, but won't let you shut down Safari," which is Apple's web browser, which is ridiculous. More concerns from parents on the challenge of trying to limit kids' access to these tools.
Just in our last minute, Will, we've mentioned a bunch of pieces of legislation that could have an impact on these social media companies. Anything in particular we should be watching going forward?
Will Oremus: Yes. I think one of the big debates this will come down to is the debate between safety and privacy. A lot of the measures that would potentially make kids safer online would also require a lot more invasive monitoring by tech companies. They weren't there at this hearing, but the privacy advocates are going to come out of the woodwork. One of the questions is should you be able to have encrypted messaging platforms such that nobody, including law enforcement, can tell what's happening on there, and Apple's iMessage is one of these, or should that not be allowed because that creates a haven for these types of horrible sexual exploitation? I think that's going to be one of the key real debates that this comes down to.
Brigid Bergin: We're going to leave it there for now. My guest has been Washington Post technology reporter, Will Oremus, and we've been talking about this week's Senate hearing with big tech CEOs. Will, thank you so much.
Will Oremus: Thanks for having me.
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.