What Students Lose When ChatGPT Writes Their Essays

(Brandon Bell / Getty Images)
[MUSIC]
Brian Lehrer: It's The Brian Lehrer Show on WNYC. Good morning again, everyone. Picture this. It's late. You're a college student at your desk or in the library, and you're working your way through a paper on, maybe, Shakespeare that you've been putting off for weeks. Writing it is hard, but along the way, you surprise yourself. As you write, you work something out on your own. Voila. You've learned something about Shakespeare and maybe yourself, but as students use AI tools like ChatGPT for just about everything, including their essays, those moments might be going extinct. Writing is hard. ChatGPT practically eliminates the difficulty of writing altogether, or it can, depending on how you use it.
Without the magic of those little eureka moments and the enrichment that comes with them, what does that mean for an education, especially in the humanities, even if you get all As? Is AI just a tool that can help students with learning and make them more efficient, maybe even leave them more time to think, or is it outsourcing and diminishing actual critical thinking? Joining us now to try to answer some of those questions, and we'll invite you in students, college professors, others to join this conversation, is Hua Hsu, a New Yorker staff writer who is also a professor of English at Bard College and author of the memoir, Stay True, published in 2022.
His new piece in The New Yorker is titled What Happens After AI Destroys College Writing. Professor Hsu, welcome to WNYC. Thanks for coming on.
Professor Hua Hsu: Brian, so great to be here. Thanks for having me.
Brian Lehrer: Can we establish first how common it is, especially for older listeners who may not realize, how common it is for college students to use AI? Are they all using tools like ChatGPT, Gemini, Claude all the time?
Professor Hua Hsu: I don't think there are any firm statistics that would bear out quite how many, but if you talk to college students at all, it seems as though everyone is using it in some capacity. Some students use it in what we might consider to be more noble circumstances: research, streamlining processes. But over the past few months, as I've talked to college students, it seems as though it's quite widespread, and there seem to be no signs it's going to slow down, with a lot of the big companies partnering with colleges now, too.
Brian Lehrer: In your essay, you start with an anecdote because it's always good to tell individual stories to get people engaged. An anecdote of two NYU students you called Alex and Eugene, who say they use AI tools all the time to write essays, emails, and even application materials. Tell us a little more about Alex and Eugene.
Professor Hua Hsu: I should say that I found them to be quite smart, charming young people. I like them a lot, and so it's been funny to read how people react to them. To me, they're typical college students. They have a sense of what they want to do. They're both interested in business and accounting, and they've essentially just devoted a lot of effort to streamlining out anything they don't really want to do. In the case of Alex, even though he is a pretty artistic kid, he had decided that he wasn't particularly interested in his art history class. He'd figured out ways to use Claude and ChatGPT to do, I would estimate, maybe 90% of the work for him. He simply didn't care about these classes and he told me that he retained very little. I think--
Brian Lehrer: Let me push on this a little bit. What was his major? What was he interested in?
Professor Hua Hsu: Accounting. I think he'd originally had aspirations of doing art, but he was interested ultimately in what he considered to be a more practical pursuit.
Brian Lehrer: I could see a world where if they had this when I was in college, when they didn't have AI yet, I might have used it to get through zoology, which I had no interest in, while I dug into my psychology and sociology and media studies courses and things like that with gusto.
Professor Hua Hsu: Yes. It's funny because, as you mentioned before, I'm an English professor, so I'm charged with making people write the papers about things that, if they're taking a requirement, it might not be something they chose to read, and they may not be writing about something that they think interests them. You hope that something takes hold, right? You hope what you're describing before is this moment of magic happens. I think it's just increasingly the situation where, with these tools, you can pretty much just outsource anything you don't want to do. When I think back to college, there are plenty of things I didn't want to do that didn't really yield magic.
It's hard for me, even as an educator, to have a categorical opinion that students are far lazier than I was when I was in college.
Brian Lehrer: Listeners, if you are a current or recent college student, maybe you even go to beautiful Bard College yourself up in the Hudson Valley, or if you're a faculty member anywhere. We'll acknowledge you'll probably all say the use of AI is extremely widespread. That's a given here, but our question to you is this. How much is AI getting in the way of thinking and learning, and how much is it just another tool that maybe even gives the students a head start into thinking through what they're learning?
Current or recent college students, current faculty members, anyone else who thinks you have a take on this, give us a call. We want to hear from you, especially if you have direct experience as a student or teacher, to help us report the story. Call or text 212-433-WNYC. 212-433-9692. Professor, to what degree might this be a moral panic, meaning the perceived problem is worse than the actual problem in terms of how much students are actually thinking and learning, or not thinking and learning with AI?
I could see at least a theoretical world where they use AI to help them get through the assignment, but because they say they have so much homework and this just makes them efficient, they actually might have more time to have that revelatory moment about Shakespeare or something else.
Professor Hua Hsu: I think that the panic is, to some degree, justified. Even though I don't think college students are lazier or thinking less than they did in the past, I do think that there's an innate process to college that is being short-circuited right now. I think these pockets of potential magic are going away. That said, I feel like the larger moral panic, if there should be one, should just be about the use of AI in general, not just among college students. I think for anyone over the age of 40 or 50, reading these stories about college students can be quite horrifying because they are taking the most extreme shortcuts possible.
Yet a lot of institutions are now at the forefront of pushing these technologies and these products, even as the products themselves don't seem to be as good as they claim. I think if you look at where colleges were a year ago, trying to deal with this, compared to where they are now, a lot of institutions of higher education are embracing partnerships with these companies, having just a year ago tried to figure out how to legislate them away. I think that it puts students in a strange position, but there's more concern, I think, at a larger social level.
Brian Lehrer: A couple of texts coming in already. One person writes, "The point of a liberal education is to learn things not related to your major, making you a more well-rounded person." Another one writes, "It begs the question of should we rethink required courses?" I guess, meaning-
Professor Hua Hsu: Yes.
Brian Lehrer: -courses that are not in your major. You're reacting to that. How come?
Professor Hua Hsu: I think there's no single answer to all of this, obviously, right? Other than if we were to magically have the capacity to philosophically rethink what education is and how it works. One solution I have heard from quite a few people is this rethinking of what requirements are. If Alex, the student in the piece, doesn't want to take an English class or doesn't really see the need for a well-rounded education, then why should this student be compelled to take these other classes? I think in the past, going back to that lovely intro you had of the kid grinding away on the Shakespeare paper, there was this potential, right?
This potential that you would almost accidentally learn something and do something difficult, but if students aren't even trying to do that now, then there is this-- I don't know how radical it is, but this somewhat extreme position, at least in my mind, that we just do away with requirements. All college would essentially become vocational in a more aggressive way than it already is now.
Brian Lehrer: That's one of the things that troubles you, I guess, because there's so much talk about that. The value of a college education in the job market is declining, especially relative to the debt that so many students have to take on. There's all this talk in both the Democratic and the Republican Party, to put it in a political context, that we need more apprentice programs, more vocational training, because college is overrated, but you're focusing on what would get lost if even fewer people go to college.
Professor Hua Hsu: I'm obviously speaking from my own vantage as someone who teaches at a liberal arts college, at a very small school where we're able to have these, I think, difficult conversations and really engender a back-and-forth. When you're trying to educate at scale, when you're talking to a lecture hall of, I don't know, 500 to 700 kids, which I don't do, I can imagine that a lot of these nuances can get lost.
Brian Lehrer: Listener writes an analogy. "It feels like using a forklift at the gym. The purpose is not that the weights need lifting, the purpose is that you do the work." Rafiq in Brooklyn, a high school teacher, you're on WNYC. Rafiq, thanks for calling in.
Rafiq: Thanks, Brian. Thanks for having me. Really interested in this topic. We talked about it a lot as we prepare our high school students for college. Fundamentally, I think that it goes back to what exactly are we assessing, or perhaps what is college designed to do. If it's fundamentally designed to teach people how to be better thinkers and to analyze, then information acquisition doesn't really matter, which is what all AI does. It provides information in maybe clever ways.
I guess the question then is, how do we actually change what we're assessing for? Which means how do we design our college courses, and therefore, our high school courses, to really court the kind of thinking and analysis that are going to prepare the kind of citizens we want, which are stronger thinkers? At least that's our perspective that we take at my school.
Brian Lehrer: Professor, you want to engage with Rafiq?
Professor Hua Hsu: I love that. I love that Rafiq described the students as future citizens. Not as future workers or future consumers, because I think that I still hold on to, and I think a lot of listeners probably hold on to this idea that the point of education isn't necessarily to produce a workforce. It's to produce people. It's to turn you into an adult, or it's to make you understand your role in a community or a society. I do think that assessment, in general, from when I think about what it was like to be a kid in the '80s and '90s to now, young people are just assessed relentlessly.
I think that these methods of assessment, these methods of standardized testing, often make it much harder, at younger ages, for students to understand that they're in school to do difficult things, or to live in doubt, or to take risks. A lot of the students I talked to talked about how, once they got to college, they were surprised that they could finally have the freedom and space to do that deep thinking that they hadn't done in middle school or high school, because, at that time, they were just trying to get the marks to get into college.
There is this challenge for some young people to rethink what the purpose of education had been all along, but to do so when you're 18 or 19 is far more difficult than if you're taught at a much younger age.
Brian Lehrer: Rafiq, stay there for a minute because I'm so glad you brought up education in pursuit of being a citizen. I want to ask a follow-up on that to both of you. If you're just joining us, my actual guest is Hua Hsu, a New Yorker staff writer, professor of English at Bard, and author of the memoir, Stay True. He's got a new piece in The New Yorker called What Happens After AI Destroys College Writing.
In the context of becoming citizens, another thing I've noticed made me at least wonder whether some of this is a moral panic. There was the Internet utopian era of the late '90s, early 2000s, when some people thought democracy would be strengthened by all the information available online. Now we see that all the disinformation online is degrading democracy, at least in addition to whatever good the digital era is doing in that respect. In my own work, I'm constantly doing Google searches on the topics I'm prepping for the show.
I see that this year, for the first time, the first things that come up in Google searches are not the list of links that I can choose from and go dig in and read anymore, but a Google AI summary of what I'm looking for based on my search terms. I usually go past that summary and click on some actual sources, because the AI summary can make mistakes, and it's also usually shallow, but I also do read the AI summary first, and I see what links it says the summary is based on. I noticed that the AI sources tend to be reputable news organizations or other expert sources, not a lot of fringy disinformation or propaganda sources which might come up in the links.
Could it be that all this AI popping up is actually a good thing for real information, rather than disinformation being prominent in people's feeds in the context of being citizens in a democracy? Rafiq, am I throwing something entirely new at you there?
Rafiq: No, not at all. We talk about this on our staff all the time. I think you have a lot of faith in the algorithm, Brian, faith that I don't necessarily possess, but we could actually change the word citizen to human, right? I think if, instead, we talk about, "Well, how are we actually preparing our young people for humanity?" we would actually ask a different set of questions. Again, what AI presents to us would be framed completely differently. Look, I think, inherently, we try to teach our students to question everything, including us. Why? Because curiosity means that you aren't actually sure at the core, and so you pursue this path of thinking to get closer to certainty without ever reaching it.
I think that is the pursuit, right? The pursuit of humanity. We could talk about that morally, we could talk about that in terms of our values and beliefs.
Brian Lehrer: Right. The pursuit of ourselves.
Rafiq: Our schools are not built to court this. Our schools are not really built to court this, right? At least not yet. I think if we actually-- I'm interested to see what the professor thinks about this. If we instead start to ask ourselves a different set of questions about how we are preparing our young people for, we could say adulthood, we could say citizenship, or we could say humanity, we would actually, I think, start to think about the institutions we call schools, colleges, universities, very differently. I'm curious, Brian, to hear what you think and what the professor thinks.
Brian Lehrer: Professor Hsu.
Professor Hua Hsu: I totally agree with that. I think that there's a way in which AI makes anyone feel like an expert. I think that's its appeal. The appeal of, I think, the Internet nowadays, is that we all "do our own research," but I think what Rafiq is saying, and I think what's fundamental to being a human is risk and doubt and failure and being able to admit you don't know something. I do think that these tools often engender this false sense of just being a know-it-all or having a mastery or grasp over the world that, I think-- it's much harder to unlearn that the older you get.
I do think that there is, to some degree, a sense that this is all just a moral panic, but I think that there's a very serious issue around what it is we want young people to get out of their educations. I think what Rafiq is describing is an ideal I share, too, which is to think about what it means to be a human, a member of society, a member of a nation, maybe even, and that these aren't necessarily questions that algorithms drive you toward when a lot of what we're getting from these algorithms-- I agree with Rafiq that there's a lot of research that these large language models are beginning to hallucinate more or are becoming a little bit less reliable.
Brian Lehrer: Meaning, they make mistakes.
Professor Hua Hsu: Yes, they make mistakes. They generate sources, they tell you what you want to hear. I think just reorienting society around reliance on these tools is quite dangerous, but it's a far bigger problem than just at school.
Brian Lehrer: Rafiq, thank you for a beautiful call. Please call us again. Sarah in Morris County, you're on WNYC. Hi, Sarah.
Sarah: Hi. Thanks for having me. Long-time listener, first-time caller.
Brian Lehrer: Glad you're on.
Sarah: I just graduated from a second degree program. I had done my first degree back in the 2000s, so before AI. Coming to this degree, I did all of my work by myself until the end of the program, when I realized many of my professors were grading my papers with AI, and I wasn't getting actual feedback. In one case, in particular, I had put a juxtaposition in my paper, and AI told me it was a disjointed phrase. It really felt like busy work and a useless effort to put all this time into writing when no one was going to be looking at it.
Brian Lehrer: There's the turnabout question, I guess. Professor Hsu, are the professors, in your experience, or to your knowledge, using AI to grade papers just like many students are using it to write?
Professor Hua Hsu: There have been some stories in recent months of instructors doing that, and there's a situation right now-- I don't know if it's resolved yet at Northeastern, where a student uncovered that their professor had been essentially grading, commenting on their papers that way. I think they were able to get a tuition refund, but I'm curious with the caller. Did you confront your professor over this?
Sarah: I did not. This moment I brought up in particular was the last semester in the last few weeks, where I realized it had been happening. At that point, I was so frustrated. I just wanted to graduate.
Professor Hua Hsu: Was your frustration that it made apparent that the assignment had been somewhat pointless, like busy work to begin with?
Sarah: Yes, and coming to college the second time around, years later, I felt this massive increase of work. Sometimes, something like 50 assignments per week or more. I just felt dumped with work all the time. Then to realize at the end that no one had been looking at most, if not nearly all, of it this whole time felt very frustrating.
Brian Lehrer: The only student in America who actually wanted their work to be criticized.
Professor Hua Hsu: It's an interest--
Brian Lehrer: It shows that intellectual and moral core of you, Sarah. Professor, go ahead.
Professor Hua Hsu: I'm sorry that happened to you, Sarah. It's a conversation I've had with other educators. Just at what point would you resort to this, too? A former colleague of mine, Barry Lam, he's a philosopher in California. He tells his students up front, "If you're going to use these things and I sense that a certain percentage of the class is using it, then I'm going to grade your papers using this, too," as a playful threat. What's interesting is that Sarah still feels this sense-- a desire to feel a sense of accomplishment, or wanting to take ownership in your work.
That was a question I asked a lot of the current undergraduates: how would you feel if your ChatGPT paper didn't get the grade you thought you deserved? It seems as though a lot of current college students didn't feel as much of an attachment to their work as, say, I did back in the day. I think it's that they view themselves as project managers or as collaborators more than people who should feel this-- I don't know, maybe it's old-fashioned now that you want to feel that sense of attachment or achievement, but it was a different--
Brian Lehrer: From scratch.
Professor Hua Hsu: Yes, it was a different relationship with the work I felt than what Sarah's describing or what I remember from when I was a student.
Brian Lehrer: Sarah, thank you for a wonderful call. Here's a pro-AI text coming in related to disability. Person writes, "As someone with dysgraphia, I have found that academia overemphasizes writing skills and made it difficult for me to achieve. I was graded on my grammar and not my knowledge of the material during my first two degrees, but when I used AI to help me write in my second master's, I was better able to display the knowledge that I always had but couldn't express." That's a particular situation, I guess, for people with certain disabilities. The last thing I want to engage is this very simple text from a listener, which just asks a question, "Is using AI plagiarism?"
Interestingly, to me, in your article, you quote Alex, the NYU student, as saying, "Of course, this is cheating, but it may be more complex than just paying someone $100 to write your essay for you." For starters, no one in your article seems all that concerned about the ethics of using ChatGPT, or am I missing something?
Professor Hua Hsu: I think it's a lot more complicated. When you described doing Google searches and then noticing that Google now provides an AI, Gemini-produced summary, if you grew up-- I'm 47, but if you're, say, 25, and you've grown up only ever Googling things, only ever using things like Grammarly, a lot of students just see AI, ChatGPT, as an extension of that. They don't really view it as all that different. The idea of plagiarism, I think, in the old school sense, is you're giving someone $20 to write your paper, or you're just cutting and pasting from some source and passing it off as yours.
In this case, there's often no original. The AI is actually generating something; if you want to be generous, maybe it's in collaboration with the student, but it's not as simple as just cutting and pasting from somewhere else. It may not be plagiarism in the old school sense, but there probably is a considerable amount of dishonesty baked into it. I think even students acknowledge that if they're taking shortcuts in order to, say, focus on the things they want to focus on, there is still some moral friction there, even if it's not enough to discourage them from using it.
Brian Lehrer: We'll give our last word here to one more texter who writes, "Tell ChatGPT prom night is canceled." With that, we wrap it up. Professor, the callers were so good, that I wonder if they asked ChatGPT to "Write me a really good Brian Lehrer Show call in question based on The New Yorker article, What Happens After AI Destroys College Writing by Bard College Professor Hua Hsu." Thank you very much for joining us.
Professor Hua Hsu: Thank you. Have a good one.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.