Do You Need an AI 'Friend'?
( Ryan Kailath / Gothamist )
[music]
Brian Lehrer: Brian Lehrer on WNYC. Have you seen this Friend ad in the subways? Apparently, there's more than 10,000 of these grayscale posters plastered throughout the subway system, showcasing a new wearable AI product that aims to be your next companion.
The ads, if you haven't seen them, feature promises like 'I'll binge the entire series with you' and 'I'll never bail on our dinner plans,' along with a simple definition, 'someone who listens, responds, and supports you,' but some New Yorkers had their own messages to share on these ads. Maybe you've seen this, too, from the people who pulled out their Sharpies to share their messages.
One defaced poster reads 'Stop profiting off loneliness,' while another warns, 'AI wouldn't care if you lived or died.' The latter references a few notable cases of suicides prompted by chatbots that you may be familiar with, not associated with the product advertised, but in that realm.
Well, WNYC and Gothamist's arts and culture reporter Ryan Kailath took to the West Fourth Street subway station the other day to chat with Avi Schiffmann, the 22-year-old founder behind Friend, and with New Yorkers as they passed through a corridor full of these ads. He joins us now to share what he learned. Hey, Ryan.
Ryan Kailath: Hey, Brian.
Brian Lehrer: Listeners, we can take a few phone calls on this, too. First of all, has anyone tried this? Does anybody have this product and want to review it? Or have you seen the subway ads, and what did you think about the messages that they deliver, some of the ones that I just quoted, or whatever else they say, or the ways they've been defaced? 212-433-WNYC. 212-433-9692, call or text.
Ryan, did I give a fair description of it, or how would Avi Schiffmann describe his own product? How did he describe it to you?
Ryan Kailath: I think that was a fair description. He called it 'a new species,' which I had to get him to elaborate on. Basically, just an AI companion. He sees it more as a living journal. Rather than writing things down, it's a microphone that hangs around your neck, and you can just talk to it all day. He also pointed to an OpenAI study that made waves a few months ago, digging into how people use their models.
Schiffmann said the biggest category of usage is companionship and general life advice. That's the category he wants to own with this wearable. He says if you have a device that's always listening, then it has more data and more context about you and your conversations and your life, so it can give better advice and be a better companion to you.
Brian Lehrer: Did you get to talk to anybody who owns the product?
Ryan Kailath: No. He just launched in late July. He said he's been selling about 400 a week, so not nothing, but still early stages there. There was one New Yorker wearing a Friend in the subways that day, and it was Avi Schiffmann, the founder.
Brian Lehrer: Tell us more about Avi Schiffmann. All I mentioned in the intro is that he's just 22 years old.
Ryan Kailath: I grew up in Silicon Valley, and I will say I've met a million Avi Schiffmanns in my life. He's a very smart young man. He first gained notoriety, got written up in The New Yorker, five years ago, when he created one of the big early COVID tracking websites, if you remember, back when the federal government wasn't doing such a great job with the numbers and data. He dropped out of Harvard.
He created a website called Ukraine Take Shelter that matched refugees at the start of the war with host families. That apparently had some good impact. It also got a lot of criticism because there weren't safety checks against predators or people taking advantage of refugees. It seems like he's very much known for the 'move fast, break things' Silicon Valley way of doing it: launch big untested ideas into the public and see how they go.
Brian Lehrer: The graffiti on these ads, as I mentioned, has been pretty harsh. Messages like 'Stop profiting off of loneliness' and 'AI wouldn't care if you lived or died.' You raised these to Schiffmann, how did he respond?
Ryan Kailath: It was all on purpose, a successful marketing stunt, as you might imagine. To describe the billboards: the posters are big, blank white canvases with stark black text, with lines like the ones you described, 'A Friend is: I'll ride the subway with you.' Schiffmann told me he left all this negative space on purpose in order to provoke conversation and graffiti. A friend of his has a gallery online of all their favorite messages. As you can imagine, one of the most elegant and simple ones rhymes with 'Duck AI.' He was very much courting this.
The MTA has been replacing them as they get vandalized; the MTA doesn't love vandalism. Schiffmann said he didn't love that at first, because he wanted this gallery of user-generated content, if you will, but now he's decided it's a nice way of refreshing the canvas and allowing more people to participate.
Brian Lehrer: Setting aside the marketing spectacle or the spectacle of defacing this marketing spectacle, did talking to Schiffmann give you any sense of whether there's a real need that this AI Friend is filling?
Ryan Kailath: Like I said, only about 400 a week have been sold, for a couple of months now. He didn't have any specific examples of how people who aren't him, his friends, or his test users are using it exactly. He did tell me his hopes for it, which were interesting, and people had a lot of different reactions to them. Again, a living journal that comments on your life and gives you advice. He posited that if everybody had these, if hundreds of millions of people had an artificial intelligence life coach, companion, therapist, and advice-giver, it would, in his words, 'smooth out the variance in people.' He thought it would raise the emotional intelligence of humanity, and there'd be a lot less weirdness. He also acknowledged that being weird would then be a great way to stand out in this smoothed-out, low-variance AI future.
Brian Lehrer: Jane in Brooklyn is calling about something that her child's nanny said about this product. Jane, do I have that right? Hi there.
Jane: Hi. Yes. I am a nanny.
Brian Lehrer: Oh, you're a nanny?
Jane: Yes. The little boy that I babysit, he's on YouTube a lot. He was looking up friend.com because he was curious, because we're always riding the subway. He and I were both saying that this just feels like a clear data mining situation, where he's watching YouTube videos of people trying them out, kids trying them out, just saying things that have to do with products around them and then watching their ads instantly change.
Brian Lehrer: How old is the kid you're a nanny for?
Jane: He's eight. He's a smart cookie.
Brian Lehrer: Must be a smart cookie if he's being that media savvy and media skeptical. Ryan, interesting about Jane's call and the suspicion of that eight-year-old that this is a data mining scheme, and what they're really looking for is not just sales of the product, but to get people's information. Any indication that that's actually the case?
Ryan Kailath: Yes, Brian, that eight-year-old is going to be our boss one day, I think.
Brian Lehrer: Yes, right. Unless Avi Schiffmann is. Go ahead.
Ryan Kailath: [chuckles] Schiffmann claimed that there's end-to-end encryption here, so that he and his company, which has only three employees, including himself, don't have access to the data, and that it's encrypted on the device as well, so if you lose your Friend, nobody else is going to be able to get into it. Homework I should have done here is read carefully through the privacy policy, but I don't believe they're currently selling the information to advertisers. I'll have to check on that.
Brian Lehrer: David in Dallas, you're on WNYC. Hi, David.
David: Hey, Brian. Thank you. I was just going to mention that I currently use ChatGPT a lot, for leadership coaching in general. Not so much for companionship, like a friendship, but oftentimes I'm already using AI, or at least ChatGPT, when I'm coming home from work and managing a conflict on my team. It's a really cool device, I feel like, but we already have a lot of these tools. I have my AirPods on and I'm just talking with ChatGPT. I wanted to see what the difference really was. I hear a little bit about the encryption and things like that, but it's just something that I already do to this day, so I just thought it was really interesting.
Brian Lehrer: Interesting question, David. Ryan, do you have an answer to that? How is this actually different from what you can already do with ChatGPT or other AI?
Ryan Kailath: David, I think you nailed it. The only real distinction here is the marketing and the branding and the fact that it's a necklace that hangs around your neck; otherwise it's largely the same as you talking into your headphones or AirPods. There's a small distinction in that this is always listening rather than you opting in, and it's running on Gemini, which is Google's model rather than OpenAI's, but I think you've nailed it. What's really different here is the way he's presented it, not the product itself.
Brian Lehrer: We certainly could talk about the ad design, all that white space that almost invites people to deface these posters in the subway, but I really want to ask you about this text from a listener in our last minute. The listener writes, sarcastically or not, "How long till the 'Friend' fails to talk someone out of, or actively tells someone to, commit suicide?" A serious question: did you get into, with Avi Schiffmann, whether there are any guardrails programmed into this 'Friend' device? Because there have been cases like this with, I guess, virtual therapists and that kind of thing.
Ryan Kailath: We didn't get into it, but some early journalist user reports have suggested the guardrails are not so great. A WIRED reporter bought one of the things and tried it out for a while, and found it quickly turned sarcastic and maybe slightly combative. As we've learned with models like this, everything is downstream of the creator, the person who initially trained it. Take from that what you will. Certainly, a lot of the graffiti we've seen around the city makes the same point as our listener's text: 'AI wouldn't care if you lived or died.'
[music]
Brian Lehrer: Ryan Kailath, WNYC and Gothamist arts and culture reporter, always cruising the subways for a good arts and culture story. Thank you for this one.
Ryan Kailath: Thanks, Brian.
Brian Lehrer: That's The Brian Lehrer Show for today, produced by Mary Croak, Lisa Allison, Amina Srna, Carl Boisrond, and Esperanza Rosenbaum, with Juliana Fonda and Milton Ruiz at the audio controls. Stay tuned for Alison.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
