A.J. Jacobs Tries Life Without AI
( Jernej Furman / Wikimedia Commons )
Brian Lehrer: Brian Lehrer on WNYC. This next segment is a guaranteed AI-free zone, probably. When author and essayist A.J. Jacobs decided to live 48 hours without artificial intelligence, no ChatGPT, no algorithms, not even the facial recognition feature on his phone, he thought it would be simple. Then he realized he couldn't check the weather, ride the subway, even brush his teeth without bumping into some kind of machine learning.
Now, Jacobs, who some of you know from his past appearances on this show and other things, who's made a career out of turning himself into a human experiment, living biblically for a year, remember that book, reading the entire Encyclopedia Britannica, even following the Constitution's "original meaning," set out to see how much of his everyday life in this experiment was quietly run by AI. The answer? Just about all of it. He wrote about it in The New York Times under the headline, 48 Hours Without AI, a funny, revealing chronicle of how hard it is to escape the algorithmic web we all live in.
We'll talk about what he learned, what he wore, what he ate, and what two days without AI says about the future of being human. With us now is A.J. Jacobs, host of the Hello, Puzzlers podcast, essayist, author of The Year of Living Biblically, The Know-It-All, and It's All Relative, in which he proved that he and I are very, very distant cousins. His latest is The Year of Living Constitutionally: One Man's Humble Quest to Follow the Constitution's Original Meaning. Always great to have you, A.J. Hi, welcome back to WNYC.
A.J. Jacobs: Delighted to be here, Brian, and also very happy to hear it's AI-free. I was a little worried I might be talking to an AI-generated Brian Lehrer, but I'm now assured you're the real thing.
Brian Lehrer: Well, I'm not an AI-generated Brian Lehrer, but I said it's probably an AI-free segment because of what you experienced. You don't even know what AI is actually influencing in your daily interaction. When was the moment you realized that this would be harder than you expected?
A.J. Jacobs: Well, the origin story is that the article is a sequel to an article I wrote for The New York Times two years ago, where I tried to avoid interacting with plastic. No using plastic products or even touching plastic, which turned out to be incredibly hard because it's everywhere, in our clothes, our carpets, our food, and we need to address that. A couple of months ago, my editor at The Times, the great Jim Windolf, had the idea, "What about a sequel: instead of plastic, AI?" His theory, which turned out to be true, is that AI is, as you say, everywhere. It's hiding in plain sight, and it's affecting our everyday lives without us even knowing. It's not just what you read on your Facebook feed or what you watch on Netflix, but it affects what you eat, how you get around, and on and on, so that was the origin.
Brian Lehrer: Before you even started, you had to figure out what actually counts as AI. A lot of us will think, "Oh, ChatGPT or image generators," but you also included things like apps that predict traffic or filter spam. How did you decide what made the cut and what didn't?
A.J. Jacobs: Yes, most of the experts I talked to said AI is an umbrella term. It covers ChatGPT, which gets all the attention. The new kid on the block. It also covers something that's been around for a couple of decades called "machine learning." Machine learning is how Netflix recommends movies to you. It's how Facebook adjusts your feed. It's basically a program that evolves and learns.
I think of it as a recipe where, instead of the ingredients staying the same, the recipe senses, "Oh, people like sugar. Let's add some more sugar." That turns out to be everywhere. Con Edison uses machine learning to figure out how to allocate energy. The New York reservoir system uses machine learning to help predict where there's going to be demand. Now, they did want to make clear that engineers make the final decisions, because they didn't want people to freak out, but they do use machine learning to help, as do so many other systems in our lives.
Brian Lehrer: With that Con Ed example, did you have to live without electricity or cooking or things like that?
A.J. Jacobs: I did. I commit to the bit, as you know, Brian. I went full Amish, and I had a solar-powered generator, so I was able to use that for a while. I ended up writing the article by candlelight on a typewriter because, of course, my laptop is filled with AI and machine learning. The battery optimizer alone is machine learning. Yes, it was a time-travel experience.
Brian Lehrer: Listeners, we can take questions for A.J. Jacobs on trying to live for 48 hours without artificial intelligence, or you could scratch your heads and call in and say, "If you think you could do it, what part of your life depends most on AI, and would you even want to give it up?" 212-433-WNYC, 212-433-9692, call or text. We can talk a little bit about what you were wearing. Yours was an outfit from the 1970s inherited from your grandfather. Describe it and why that.
A.J. Jacobs: Yes, I did end up looking absurd and do some absurd things, which is part of the process. Yes, I looked like Austin Powers. I had on red and white checkered pants. I had on this flowered shirt. That was because modern clothes are touched by AI. They may not be totally generated by AI, but they are certainly marketed using AI. A lot of designers are using AI for inspiration. Anything that is shipped on the supply chain is really affected by AI because machine learning figures out what is the most efficient way to ship things. Really, I said, "I got to get away from modern clothes." I went deep in my closet, and this was what I found.
Brian Lehrer: What's the moral underpinning of this experiment? I know we're all concerned about AI taking jobs and AI sucking up so much electricity that it's pushing a lot of people's energy rates up, other things as well, but it also helps us in many ways. Some of these little ways that you're pointing out, from weather forecasts to whatever. What was the moral point you were trying to prove?
A.J. Jacobs: I love that question. As you say, I was not saying AI is all bad or all good. I think it's both. It's a tool. Like any tool, it can be used for good or bad, as you say. It's great that AI is figuring out new medicines. I have no problem with it making our water system more efficient, so it's cheaper, but I think that we need to realize AI is already here. That was one of the big points, to realize how omnipresent AI is, and that it's just going to get more intertwined with our lives.
We need to, A, be aware. B, I want more transparency. When I get an email, I want to know, "Was this written by ChatGPT?" I also want us to think more about regulations. California is considering a regulation where every AI-generated image has to be watermarked so you can see, "Oh, that's AI." I love that. I'm also concerned about privacy. Facial recognition, which uses AI, is so much more common than I thought. It was quite alarming. I ended up wearing these anti-facial recognition glasses.
Brian Lehrer: I forgot to mention that about your outfit.
A.J. Jacobs: Yes, it's quite a look.
Brian Lehrer: Anti-facial recognition sunglasses. There's such a thing. I didn't know.
A.J. Jacobs: Yes, there are privacy advocates out there who are very concerned and trying to fight back. Yes, people said I looked like Elton John. They were white and big, and they looked a little ridiculous. Yes, as you say, it's not good or bad, but let's be aware of it and be intentional in how we use it and regulate it.
Brian Lehrer: I think Bess in the Bronx has an AI pet peeve.
A.J. Jacobs: Oh, I love it.
Brian Lehrer: Bess, you're on WNYC with A.J. Jacobs. Hi, Bess.
Bess: How do you get away from having a robotic person make your appointments at the doctor's office? It drives me crazy. More and more of my doctors are doing this. The AI person. What's really funny about it is that they have background noise of other people talking. It makes people feel that they're in an office.
A.J. Jacobs: Oh, tricky.
Bess: That's very tricky.
Brian Lehrer: We know we're in a new age when people are nostalgic for call centers.
A.J. Jacobs: [laughs] Good line. Well, it is amazing. There are also reports of people who have to do job interviews with robots. First of all, I had to use a landline because that uses less AI than Verizon Wireless, but I also tried to speak to actual humans, and it was a challenge. When I tried to call eBay for customer service, they just flat-out refused to connect me to a human. I think, as you say, that is a big problem. It does make the companies' operations more efficient, but a lot of times we fall between the cracks. We have a problem that doesn't fit into one of their buckets, and with AI, we're not getting that addressed.
Brian Lehrer: A few more minutes with A.J. Jacobs on his 48 hours trying to live without artificial intelligence. If you have anything else to say or ask him about it, 212-433-WNYC, 212-433-9692. Listener texts, "Jacobs' experiment reminds me of Colin Beavan's living for a year in New York City with the smallest environmental impact. The documentary, No Impact Man, was made about him." He was on the show for that. I don't know if you're familiar with this. This goes back to around 2009. Actually, to support him, we were broadcasting at that time from the 24th floor of the Manhattan Municipal Building. Just to show a little solidarity, I met him in the lobby of the building and walked up with him the 24 flights to the studio-
A.J. Jacobs: Wow.
Brian Lehrer: -because he wouldn't take the elevator. It was very instructive to me. I thought, "Okay, I'm leaving about 45 minutes for this," and it took eight minutes. It taught me something about how much we depend on technology that may impact the environment when we don't have to, but what do you think about that analogy?
A.J. Jacobs: I think it's a great analogy. I know Colin. I actually blurbed his book, so I'm a fan. I think it's a great point because electricity is driven by AI, or at least AI helps run it, so I couldn't use it. Again, I was doing the same thing. Also, speaking of elevators, a lot of office elevators now are AI-driven in that they figure out what's the best floor to hang out on, because maybe it's the middle floor. Again, I have no problem with that. That might make my life a tiny bit better. It might save me three seconds in an elevator, but AI has these unintended consequences that we have to be aware of. Yes, I did not ride an elevator or, as I say, use my computer or iPhone. Oh, I couldn't listen to podcasts. I don't know--
Brian Lehrer: What?
A.J. Jacobs: Yes, I don't know--
Brian Lehrer: That's going too far.
A.J. Jacobs: It is. I don't recommend that as a podcaster myself, but a lot of podcasts use AI in all sorts of ways. The research, the editing. No accusations, Brian. No judgments, Brian, if you do. In the editing, they take out the "ums" and the "uhs" from the guests. Of course, you're live, so I can "um" and "uh" all I want.
Brian Lehrer: We got podcasts, too, but go ahead.
A.J. Jacobs: Yes. Well, we'll see. I'll listen to the podcast and see if I'm cleaned up. Yes, and in terms of mixing, so AI in entertainment is everywhere as well.
Brian Lehrer: I didn't know that AI could do that, clean up the "uhs" and the "ahs," on purpose. On the rare occasion that we would do that, it was because something was overwhelmingly distracting in one particular case. We certainly wouldn't do it throughout a conversation, but maybe there was just an embarrassing moment for a guest or something like that, and we'd take out one little spot. We have human engineers who do that. I didn't know AI could spot those.
A.J. Jacobs: Yes, but some people say it's not very good at it, so please keep your engineers on. I don't want to cause any trouble.
Brian Lehrer: I should note, because this may be the most extreme of all the things you had to do. By the end of day one, you're out in Central Park foraging for dinner. I'm guessing that's not your usual dinner plan. What did you actually end up eating?
A.J. Jacobs: Well, that, yes. That was certainly, as you say, an extreme part. I felt I could not eat grocery-bought food because, A, industrial farms use machine learning and AI in figuring out how to irrigate and how to ship. I thought, "Well, to be really safe, I should forage in Central Park," where there's as little AI as possible. You probably had this guy, Wildman Steve Brill, on the show.
Brian Lehrer: Oh, yes, he gave tours to show people what a lot of the plants and stuff are in the city.
A.J. Jacobs: Exactly, so I took some of his advice, and I went foraging for, what are they called, plantain weeds. I had some of those. I couldn't cook them, so I had them raw. Do not recommend them. Do not recommend them, but it did keep me somewhat fed.
Brian Lehrer: I'm cringing at the thought of interaction that those weeds may have had with some New York City critters, but I will leave that there.
A.J. Jacobs: [laughs] Thank you.
Brian Lehrer: One more call. Tara in Brooklyn, you're on WNYC. Hey, Tara, we have about 30 seconds for you.
Tara: Oh, no. Hi, thanks for taking my call. One, I wanted to say, it's really frustrating that AI is everywhere and we can't opt out, because of its environmental impacts; things like data centers simply aren't regulated enough. Against our will, we're forced to participate in the degradation of our climate. The other thing I just wanted to draw attention to, and I'm curious with the gas--
Brian Lehrer: That's the biggest thing these days. By the way, Tara, I'll let you finish, but we--
Tara: Thank you.
Brian Lehrer: We got a text that says, "Jacobs says it's neither good nor bad, AI, but there is a strong argument to be made that between the energy use, water consumption, and pollution, particularly of Black neighborhoods, AI is inherently bad." You're seconding that, Tara? Go ahead and finish your thought.
Tara: Really what I called about was to see the guest's and yours, Brian, thoughts on Local Law 35, which requires New York City agencies to report what algorithmic tools they use and how many. If you look in the, I think, five years that that law has existed, the vast majority of the agencies still report zero, which is obviously not true. I think DOT last year reported one, an agency that controls thousands and thousands of miles of streets and intersections and uses tons of software. I'm curious, one, what you all think about it, and how we might ask even our local government to enforce that more.
Brian Lehrer: Yes. Tara, thank you.
[crosstalk]
Brian Lehrer: Candidly, you're telling me something that I didn't know, and so that certainly fits into the beat of the show. We're going to check out Local Law 35, and maybe do a segment on it. A.J., 15 seconds for a last answer. We'll see if it's exactly how she described it and all of that, but a last thought.
A.J. Jacobs: Some big issues for 15 seconds. Yes, I agree. More transparency. I went online and found a lot of examples of algorithms and AI that agencies were using. More, more, more. We need to know where AI is, so that we can figure out whether it's affecting us in a good or bad way and regulate it.
Brian Lehrer: A.J. Jacobs' latest book is The Year of Living Constitutionally. His article in The New York Times that we've been discussing is called 48 Hours Without AI. Thank you for sharing it with us.
A.J. Jacobs: Thank you, Brian.
