Protecting Kids' Online Safety
[music]
Tiffany Hansen: It's The Brian Lehrer Show on WNYC. Good morning again, everybody. I'm Tiffany Hansen in for Brian today. We're going to turn now to a new push out of Albany aimed at protecting kids on the Internet. Earlier this week, Governor Hochul said she would back legislation that would make it harder for strangers to interact with minors online. The proposal would also tighten rules around kids' interaction with AI chatbots following several tragic cases in which young people formed intense emotional attachments to those bots and ended up taking their own lives.
As part of our ongoing effort here on The Brian Lehrer Show to help make sense of the increasingly complicated and often unsettling digital landscape, we're going to unpack some of this legislation and others like it and talk folks through what the science says about how social media and AI affect young people. With us to help go through all of this is Kris Perry. She's the executive director of Children and Screens, a nonprofit focused on child-centered tech policy and design. Welcome to WNYC, Kris.
Kris Perry: Thank you.
Tiffany Hansen: Let's just start with some of these proposals from Governor Hochul. There is one that would block strangers from interacting with minors online. I don't know how that is going to work. Maybe you can explain that for us because I feel like it's the wild west out there. How do you even make something like that work?
Kris Perry: Well, there are a number of pieces of policy that would help make that possible. They start primarily with understanding the child's age, but also with making it more difficult for the platforms to collect data on the child, thereby making children more anonymous, more difficult to track, and more difficult to market to or personalize feeds for. The policies that New York is putting in place essentially address each of these features that platforms use to target children.
Tiffany Hansen: I'm going to ask a basic question here. When kids are online and they're asked, let's say, to put their age in, that's how we're really determining who that user is and what age they are, right? It's this kind of self-reporting [crosstalk]
Kris Perry: Correct. That's one way. Then the platforms have also become very sophisticated at identifying, or let's just say predicting, the child's age through behaviors that the child may engage in online, the tone of their voice, the look of their face; these are all ways in which they're able to guess at the child's age. There are both proactive steps the child can take by entering their age, but there are also steps that the platforms are taking to guess at the child's age.
Tiffany Hansen: Yes, because I can sure see a kid saying, "Yes, I was born in 1968," a kid who might be more focused on trying to get around all of this. It does sound like there are different layers. I'm wondering if lawmakers are focusing in the right places. Are these layers we're talking about the right areas where lawmakers should be putting their effort?
Kris Perry: They are. In fact, they're surprisingly consistent with what the research has said is most problematic about the design of some of the platforms that children like the most. There are design feature changes, or you might say they're pushing back on the platforms to force them to collect less data, so that they stop personalizing the feeds, pushing notifications, and keeping children up all night with the ways in which they're designed.
Also, they're thinking more broadly about how to limit children's use of the products themselves. That can range from changing the way schools allow the use of screens during the school day to limiting products or putting warning labels on them, so that the child is reminded over the course of the day that what they're using might be harmful. There are a number of ways in which it seems policymakers have seen what the research says and are developing policy solutions that will hopefully, collectively, protect children more than they are protected right now.
Tiffany Hansen: Listeners, we would love for you to join this conversation with Kris Perry, the executive director of Children and Screens, the nonprofit focused on child-centered tech policy and design, as we talk about teens and their online behaviors, and how legislators are attempting to regulate some of that. Are you a parent, guardian, or caregiver of teenage kids? What are your biggest concerns? How do you manage it? What are your challenges and fears as you're working with your kids to navigate this landscape?
212-433-9692. You can call us, you can text us at that number. It's 212-433-9692. I'm assuming, Kris, that one of the things that you're really talking about when you're talking about ways that these platforms keep kids online is through algorithms. That is definitely one thing that we've seen legislation really geared toward, which are the way the social media algorithms specifically shape kids' feeds. Tell us about what laws are already in place around these algorithms.
Kris Perry: Yes. New York passed something called the Child Data Protection Act, which really does get at the root of this issue, which is how data collection allows the platforms and companies to target their products at children through advertising or a mix of engagement-maximizing features. By reducing the collection of that data, it limits the sophistication of the algorithm and its ability to keep the child engaged. It's great that the Child Data Protection Act was passed.
The algorithms themselves, of course, are used not only in tailoring feeds through, say, social media apps, but increasingly, there are just standalone AI products like ChatGPT or Gemini that can do all kinds of other things that the child asks it to do, such as answer questions about friendships or advise the child on how to interact with their parents. There are really interesting and in some ways problematic and worrisome ways in which AI is being deployed through companions and characters that the child might engage with, apart from how they're being targeted for marketing.
Tiffany Hansen: I'm wondering if we can just circle back. You were talking a little bit about the research on why spending so much time online can be harmful for kids. What does the research tell us about that? I think we all sort of intrinsically know, but what does the data tell us?
Kris Perry: Well, you bring up a good point. I mean, as adults, we know how it's impacting us, and it's not that different from how it's impacting children. Research does show that the internet and social media use that young people engage in is affecting both their physical and mental health, in both positive and negative ways. Some of the physical health risks are sleep disruption, sedentary behavior, which leads to obesity, and, not surprisingly, a significant rise in myopia, which is nearsightedness.
Some of the mental health risks are anxiety, depression, cyberbullying, and digital stress. As I said earlier, there are some potential benefits, and kids report feeling that there is a greater opportunity for connection and social support, that they're able to find information that's useful to them. It's a time where they're exploring their identity, and they can do that pretty effectively online. It really does help them with friendships and staying connected.
Tiffany Hansen: Kris, we invited listeners to the conversation. We have Fala in New Rochelle. Hi, welcome.
Fala: Hi, thanks so much for having me. Quick question. You mentioned that the policy is helping to or hopefully helping to protect children online. I'm wondering, even in the case of like Roblox, a game that many children play, in order to access the chat, children are asked to send in a photo of themselves. You mentioned something like this previously. How is that information going to be protected if a child sends in a picture of themselves so that they can then access something? We also know that sometimes children's ages are gauged differently depending on their race. How do the companies decide who's 8 versus who's 13 or 16?
Tiffany Hansen: Yes, good questions. All right, Kris.
Kris Perry: Good question. The way in which the new social media warning label law will be implemented is still to be determined. The Mental Health Commissioner of New York and the Attorney General of New York have many, many months ahead to work on how to implement that new law. When you proactively provide a photograph to any company, you are, in a sense, overriding any data collection laws that are going into place.
As a parent, you have to be very vigilant about your child's providing data proactively. The laws that are going into place are really meant to prevent the company from essentially pulling data from the child's accounts. If you're giving it proactively, I think you're essentially overriding any of the protections that are currently being discussed.
Tiffany Hansen: I see. The company says, "Hey, well, they gave it to us, so why can't we use it?"
Kris Perry: Correct.
Tiffany Hansen: All right, Fala, thank you so much for that call. I have no idea what Roblox is, but I did get very creeped out by the fact that kids are putting in their photos, or being asked to put in their photos. Is that fairly common, Kris?
Kris Perry: No, that is not common. I think what this brings up is a really interesting point about devices that have front-facing cameras. That was a major innovation when the smartphone created the front-facing camera, which led to the prevalence of selfies. What we're seeing that's quite worrisome is most schools are issuing tablets and Chromebooks that have cameras. Those cameras could potentially be on at all times. If you're bringing your Chromebook home and it's in your bedroom and you have it on, the camera may be on, and you don't even know it.
I want to remind your listeners that it's one thing to take a photo and send it to a company to get on their platform, but even more subtly, there are cameras on your phones and computers that could potentially be on at all times. It's wise to remember to cover that or keep it closed while it's in your bedroom or anywhere, frankly, in your home, because some of these apps can actually activate the camera without you knowing it.
Tiffany Hansen: Oh, dear. Okay, Kris, we have a text here that kind of gets at what you're talking about. "Can the guest talk about the many issues with surveillance, privacy, government censorship, and online age verification? Much of the proposed legislation to 'protect children' has a very chilling implication along these lines. Based on what happened in the UK and Australia, it seems that kids just end up circumventing these measures in a way that actually endangers them more. Plus, vulnerable minors, especially teenage girls and LGBTQ people, may be prevented from accessing information about sexual health in their communities." Where's the line drawn here, Kris? That, I think, might be what this texter is getting at. How do you hold both of those things at the same time?
Kris Perry: It brings up a really important point. We are talking about policy solutions to a very complicated and rapidly changing industry that has developed products primarily for adults, but that children are using in great numbers. It raises a number of ethical and privacy issues that are so complicated, it's actually hard to generalize what the answer to that is. What we see in New York and in other states like Minnesota, California, and Florida is a suite of policy solutions aimed at creating a safer online environment for children. They're imperfect. They aren't going to solve for some of these complicated identity differences or age differences among children.
Because we don't have that much cooperation from industry to improve their programs and their platforms proactively, we're at, you might say, a difficult moment in society, where we're doing our very best to solve for these problems with policies, but we aren't where we need to be. We are not at a place where there's federal protection for children online that also takes into account some of their personal privacy needs. Hopefully, if we were having this conversation a year or a few years from now, we'd be able to talk about more wide-ranging federal protections that address some of these very specific needs children have online.
Tiffany Hansen: Kris, the texter mentioned Australia, so I think we should just ask you about it. Australia recently passed a first-of-its-kind law banning social media accounts entirely for kids under 16. What do you make of that approach?
Kris Perry: Well, it applies to children under 16, which makes it essentially a very large-scale policy experiment. It continues to address this ongoing problem we have, where kids are spending as much as 8 hours per day online. They are experiencing the health and mental health impacts that I described a moment ago. There has been very little pushback in Australia against this ban, which was initially very surprising. In the research world, we were expecting a more, you might say, dramatic response to taking away something that many children have a problematic relationship with.
Tiffany Hansen: Sorry, Kris, I just want to interrupt. Pushback from kids, pushback from adults?
Kris Perry: From the kids.
Tiffany Hansen: Okay, got it.
Kris Perry: Because they are so attached to these platforms.
Tiffany Hansen: Let's bring another caller into our conversation here, Nadia in Essex County, New Jersey. Hi, Nadia.
Nadia: Hi. I wanted to know what legislation is being done at the government level to deal with tweens and teens interacting with AI chatbots. One program that I'm specifically worried about is called Polybuzz AI. I feel a lot of teens are using it so much that it's affecting their mental health. They're not able to socialize in a proper manner with actual people.
It's affecting their concept of, let's say, for a boy, what type of interaction they should have with the opposite gender. I think it needs to be brought up. It's being promoted very heavily to teens and tweens on YouTube and on YouTube Kids. I don't see any legislation or anyone discussing that. I just feel it's really destroying the respect and empathy and exploration that tweens and teens need to have with the real world versus something that's fictitious.
Tiffany Hansen: Nadia, just so I'm clear, you said Polybuzz? It sounds like [crosstalk].
Nadia: Poly, P-O-L-Y, Buzz. Polybuzz AI.
Tiffany Hansen: All right, great. Nadia, thank you so much. Kris, I'm assuming you might know what Polybuzz is. But I think, broadly, that app or program aside, it gets at this idea of, "Hey, kids, come talk to an AI chatbot about how to live your life." I guess Nadia's point is, what's happening around the legislation of those kinds of things?
Kris Perry: Well, we saw essentially full-scale deployment of AI and AI character-driven products in 2025, and policymakers are just coming back to the office here; it's only the first week of January. Across the country, they have expressed great concern about the human-like characteristics of some of these AI products. There are both federal and state-level hearings and policy discussions about how to protect children from these human-like AI characters, including even AI toys. We don't have a great answer to this question yet. In fact, we're actively debating this at the policy level.
I think one of the positive threads here is that there is universal bipartisan concern about the role that AI is already playing in children's lives. Because there was good momentum around protecting children's data and their privacy up to this point, there's real potential for a strong response to the way AI products are being deployed. For your listeners, it's really important to remember every one of these products, whether it's a social media platform or an AI character or a smart speaker in your home, these are all designed for adults. However, children find them and use them and are at risk while they're using them.
Tiffany Hansen: Yes, of course. Kris, let's talk with Karen in Bergen County, New Jersey. Hi, Karen.
Karen: Hi, good morning. I have a 17-year-old and a 15-year-old, and I do have limitations on cell phone usage. I was just wondering, since they're teenagers, they don't listen to me no matter what I say. Is there anything out there, perhaps a documentary or some advice, that would help them understand what the harmful effects are? Something other than me that I could show them to get this point across, because they're just scrolling. I do have limits. I do. I have conversations. What can I do to show them, besides me, if you have any advice on that?
Tiffany Hansen: Karen, thanks so much. Kris, advice.
Kris Perry: Well, there's an award-winning show out right now called Adolescence. It's a difficult show to watch, but it may be helpful. There's a documentary called The Social Network that's not new, but it does get at the way in which these products are designed. There are a number of youth-led movements and organizations that are helping youth understand for themselves what the impacts are. You can find some of those on our website, childrenandscreens.org. They will help them speak to their peers about the impact the products are having on them and what they might want to do to address that.
It's really interesting that youth are starting to say, "I don't want this experience in my life. I don't like being manipulated. I don't want my data taken from me." They're starting to push back in a way that may ultimately result in a social change movement of some kind, where they care about their own health and want that for themselves. As a parent, you're in a really difficult moment where the industry is ahead of policymakers, and their products are in the hands of your children all day long. You have a 17-year-old; they're practically an adult, so you do have to keep this conversation going about the impacts and how concerned you are, and keep asking them how you can support them.
Tiffany Hansen: Parents can rely a little bit on what lawmakers are doing, right, Kris? For example, New York requiring warning labels. Parents, caregivers, and guardians like Karen have to rely on those types of initiatives. Are they actually working?
Kris Perry: We don't know yet. They're just starting to be implemented. You bring up an excellent point. The warning labels are a very familiar public health tool. We've seen them used on alcohol, cigarettes, and medications. What I'm understanding about social media warning labels is that approximately every half hour, your child would get a warning that "You've been on for half an hour. This is not a safe product for you. You should consider turning it off."
It will support the parent in reminding the child about this and will hopefully be effective in communicating the risks to the child directly. Over time, there will be enforcement and penalties levied against the platforms themselves if they're not adding those warning labels to their products.
Tiffany Hansen: Kris, we've had, what, a half hour's worth of conversation here, and I am incredibly creeped out and worried. I can't imagine what parents, caregivers, and guardians of teens feel like trying to navigate all of this. You navigate it on a daily basis with the work you do. My question here in this last minute is, are you hopeful about the way things are going, or are you as freaked out as I am?
Kris Perry: I think about the harms and the way in which many of the companies have, in a sense, taken advantage of children, who are so vulnerable at this age. Their brains are still developing, along with their social lives and their physical health; they're actively growing into adults. I am very concerned about how prevalent the use of social media, Roblox, and a number of other products is, and now the advent of AI products. At the same time, I'm hopeful because in New York in particular, there have been so many interesting and important policies signed into law and currently being implemented that should do a lot to support parents as they try to navigate this complicated digital world with their children.
There are also real efforts being made at the federal level, so not only children in New York but children across the country should be seeing some improvements around the protection of their privacy and data. We are at, you might say, a watershed moment, where we've had more than 10 years of this wide-ranging experiment conducted by the companies themselves on all of us. We're all reaching a point of, you might say, oversaturation, and the desire to take back our privacy and our identities so that we can return to a more balanced real life and digital life.
Tiffany Hansen: Well, Kris, this has been a great conversation, I have to say. Kris Perry is the executive director of Children and Screens, a nonprofit focused on child-centered tech policy and design. Kris, we appreciate all of your insight and time today.
Kris Perry: Thank you.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
