How ChatGPT is Changing Education

(Photo by Jakub Porzycki/NurPhoto via Getty Images)
Alison Stewart: This is All Of It on WNYC. I'm Alison Stewart. In early 2023, just a few months after the launch of the generative AI platform ChatGPT, a survey found that 90% of college students were using the chatbot to help with homework. In the last two years, monthly visits to the website have shown a drop-off in June, you know, when summer vacation starts. ChatGPT and other generative AI programs are now a fact of the classroom, and one that schools are struggling to know what to do about.
James D. Walsh is a features writer for Intelligencer at New York Magazine. His latest article is titled Everyone Is Cheating Their Way Through College. It outlines just how pervasive AI chatbots have become on college campuses. Some students ask the bots to produce entire essays for them on books they never read. Others use it as a tool, blurring the lines between cheating and resourcefulness, and calling into question the very purpose of higher education. James, welcome to All Of It.
James Walsh: Thanks so much for having me.
Alison Stewart: Let's do a quick refresher on generative AI. What is AI, and what is generative AI?
James Walsh: Sure. Well, generative AI is the ability of this computer to produce what seems to us to be wholly original thoughts. It works-- I'm not going to pretend like I know how it works, actually, but [laughs]--
Alison Stewart: It works magic.
James Walsh: It works magic. I mean, that's what it is for these students. It is the stuff of fables. It is a genie's lamp where you could ask for a five-page essay on whatever topic you're studying and it will produce that for you.
Alison Stewart: You spoke to a number of college students for this story. In what way specifically are students using ChatGPT in their schoolwork?
James Walsh: In every way you can imagine. They are outlining their papers. They are asking for ideas for their papers. They're asking to produce entire paragraphs for their papers. They're using it in STEM to analyze data. There are many positive use cases. There are a lot of ways that AI can really improve learning, but of course, there are a lot of ways that it can be the perfect cheating tool. It can cut all cuttable corners for you.
Alison Stewart: Listeners, have you ever used ChatGPT to write an essay or finish a reading? Did you consider it cheating if you're in school? Call or text us now. You can use a fake name. Our number is 212-433-9692, 212-433-WNYC. If we have parents tuning in, tell us if your kids are using ChatGPT for their homework, or if you've had to deal with cheating accusations. Do you discourage ChatGPT for your kids, or do you encourage it? Are there any educators out there? We'd like to hear from you as well. Our number is 212-433-9692, 212-433-WNYC.
James, you spoke to one student who uses ChatGPT, and they told you, "I'm against copy and pasting. I'm against cheating and plagiarism, all of that. It's against the student handbook." Then, she puts in a prompt that gives her an outline, an introduction, and a whole bunch more that basically writes her essay for her. Do students think of it as cheating?
James Walsh: Sure. That was the most fascinating part of my conversations with students: watching them work in real time to understand where their work ends and where AI begins. This is a student who, like she said, prefaced our entire conversation by saying she wants to follow the student handbook, and then thought, "Okay, AI is helping me. It's a tool, just as it would be in the workforce. It's something that I have access to, so I might as well use it." But she's using it to generate the central arguments of her essay, and of course, that's a lot of the thinking that we need and we develop in college.
That's the point of college. It's not simply to construct sentences. She may not go on to become a writer or copy editor, but ideally she'll use the critical thinking she got out of the exercise of writing the paper.
Alison Stewart: How do people draw the distinction between cheating, not cheating, sort of cheating-esque? Are there distinctions?
James Walsh: Well, certainly. Professors have tried to flat-out ban AI in their classrooms, or said in their instructions, "Okay, it's okay if you use AI, but only use it to have conversations before you dive in in earnest," or they might say, "Go ahead, use AI, but provide a printout of your conversation with AI so I can sort of watch the thinking."
Alison Stewart: Oh, interesting.
James Walsh: There are a lot of different ad hoc ways you can go about it. A lot of the students I spoke to think of those rules as guidelines rather than really hard rules. The hardest part for these professors is that it's really hard to catch an AI cheater. It can be easy to identify AI, they think. A lot of professors say, "I can spot AI writing from a mile away," but really proving the case kind of puts professors in the Perry Mason role, trying to catch an [unintelligible 00:05:19], which is just really unpleasant for everybody. [chuckles]
Alison Stewart: Well, what is ChatGPT good at, and what is it not good at?
James Walsh: Well, anything that it's not good at, it's getting better at, for the most part. There are some recent studies showing that hallucinations, its proclivity to make up facts whole cloth, might be getting a little worse, but in terms of writing, it's getting much better. It can at least get you to a place where it doesn't take much editing to make it sound human. It is good at coding. However, your code will still have bugs, so it's important for students to know how to debug large blocks of code. It's far from a perfect tool, but it can certainly get you 80% of the way through your assignment.
Alison Stewart: Let's talk to Anna, who is a college professor calling in from Brooklyn. Hi, Anna. Thank you so much for making the time to call All Of It. You're on the air.
Anna: Thank you so much. Yes, so a few semesters back when ChatGPT kind of first came on the scene, I put my writing assignments in my classes into it and realized that they were-- the biggest problem, actually, is that the system would print out a fairly detailed outline which then someone could put into their own words, really making it almost impossible to uncover. I reworked my assignments mostly by requiring them to listen or watch or read something that's not publicly available online, and then the rest of the assignment responds to that so that they actually have to do some of their original work.
I will say, teaching at the college level, I wish there was more support for instructors around this, because it is all left up, at least at the institution I teach at, to us individually. As your guest just said, even if you suspect that something is AI-generated, and there are platforms where professors can upload papers and it will tell you how likely it is that this was written by AI versus a person, proving it is next to impossible. Where does that leave us?
Alison Stewart: Thank you so much, Anna. Did you want to respond?
James Walsh: Well, not only that, how effective those AI detectors are is a matter of great debate, but even if they were 100% effective, like the caller just said, they're not going to be able to screen for the central arguments or themes that a robot could be producing. It doesn't prevent students from using AI in ways that do nothing to develop their critical thinking.
Alison Stewart: Let's talk to Anne, who's calling in from Harlem. Hi, Anne. You're on the air.
Anne: Hi. Thanks for taking my call. I teach at the college level, and some of my colleagues have decided to forbid the use of any kind of AI like ChatGPT, and I think that's a losing game. My approach is to figure out how do we use it in a way that is pedagogically creative and still has integrity, so I ask my students if they're going to use it, that's fine, but tell me that you did and tell me how you did and cite it the way you would cite any outside source that you're drawing on.
I've used it myself, and I put together a brand new graduate seminar a couple of years ago, and I developed a syllabus, and it's very time-consuming, it's creative, and it's a good piece of work, but then when it was done, I said, let me put versions of this into ChatGPT and some other AI tools and see what comes out. It both confirmed some of the choices I'd already made. It also pointed me to material I didn't know anything about, so I was able to develop a stronger syllabus.
Then, it also hallucinated in some ways, which is what I also tell my students. If you're using ChatGPT or another AI and they make a claim there, you must verify it and show me how you verified it. They're going to use this tool. Our challenge as educators is to figure out how do we fold it in so that it's not cheating, but it's helping them learn and it's helping us figure out how can this be instructive? How can it be something that is worthwhile for them and for us, but it's not just abdicating our obligations as educators?
I haven't figured all this out yet. It's changing so fast all the time that I think I've got it settled for one semester, and by the start of the next semester, I have to kind of begin again. The point is, how do I communicate with my students so that I trust them and they trust me and we figure this out collaboratively? Sometimes it doesn't work so well, sometimes it works beautifully, so it's catch-as-catch-can still. Frankly, I'm kind of excited by these tools.
Alison Stewart: Anne, thank you so much for calling in. What did educators tell you, James?
James Walsh: I spoke to quite a few professors at the college level who were in a state of despair. They felt underwater with the sheer number of students who were clearly using AI and not citing it despite instructions to do so. Their syllabus says there's absolutely no AI, or, similar to the caller, if you use AI, please cite it, and it just does not happen. They feel as if they're fighting this uphill battle where they're constantly confronted with these cases and have to make these decisions.
For example, I spoke to one professor who said, "What do I do with basically a decent paper that was written with AI? I can tell it's robotic language. The grammar is perfect. This was an AI paper. Then, I get this other paper that is barely literate. What grades do I give those people if I'm putting them on the same scale?" It presents all these really difficult problems.
Alison Stewart: My guest is James Walsh. He reported on this story for the Intelligencer, New York Magazine. It's called Everyone Is Cheating Their Way Through College. Have you had an incident with ChatGPT or AI? Did you use it? Are you a professor who's encountered it? Give us a call. 212-433-9692, 212-433-WNYC. We'll have more of your calls and more with James after a quick break. This is All Of It.
[music]
Alison Stewart: You are listening to All Of It on WNYC. I'm Alison Stewart. My guest in studio is James Walsh. He wrote a piece called Everyone Is Cheating Their Way Through College. We're talking about generative AI. I wanted to ask you a very simple question. What did students tell you about why they were using ChatGPT?
James Walsh: Right. That's a very important question. Some of the students, I would say the most techno-optimist, forward-thinking students, had the approach that AI is here to stay and it's something that they are learning to use now so that they can use it effectively in the workplace. Many other students told me they use AI for the same reasons that I may have peeked at SparkNotes or something when I was an undergrad [crosstalk]--
Alison Stewart: Wikipedia or something like that.
James Walsh: Yes, right. That it was a way to cut a corner. It was a way to get something done more quickly so that they had time to do something else. When we talk about cheating, the research shows that cheating correlates with a lot of really big problems. It can be a student's sense of belonging. It can be socioeconomic. It can be all kinds of societal forces that push somebody to cheat. It's a really big question why students are going to AI.
I do think it's important that students learn how to use AI, especially college students, as they prepare for the workforce. The thing we don't want is for them to only learn how to do stuff that AI already knows how to do, because then, of course, they matriculate into the workforce and they're going to be the most replaceable people on the list. They're the first to go, because AI can already do that job.
Alison Stewart: How have these generative chatbots challenged us to think about the purpose of higher education?
James Walsh: That is the most fascinating part of this whole thing, that we have this magic tool, as we've called it, that can do all of these assignments, so it's forced both students and professors to question the assignments to begin with on a granular level. Why are we here? Why are we doing this assignment? Why are we enrolled in this class? Why are we then trying to get this degree? Is it to acquire a skill that's going to help us in the workplace? Is it to found the next big Silicon Valley unicorn, or is it to be well-rounded people and deep-thinking people and to learn more about our own cultures and other cultures?
That is the question that AI is forcing upon students, professors, and the people who are running these schools.
Alison Stewart: Let's take some more calls. George is calling in from Montclair. Hi, George. Thank you so much for calling All Of It. You are on the air.
George: Good afternoon. Thank you. Good discussion. I've been dealing with this. Until recently, I taught English in a community college, and I teach philosophy in a private college. It's very interesting. The working-class kids did not really know how to use it. I was teaching writing too, and so I didn't have that much of a problem, but I had to deal with plagiarism. There are a lot of bad habits, especially from COVID, when people weren't watching them.
One of your respondents said, "I do this. I keep all of my models as contemporary as possible so that they can't look it up on SparkNotes," but the other thing that I want to say is you really can't tell. We do have software that gives us a probability, and so in my philosophy class, I've stopped-- it's so funny. I've pulled back from essay questions and gone to short-answer questions just to make sure that they've read the text. In other words, I used to expect them to read the text and analyze it themselves. Now I'm just getting them to read it, and I'll just try to add the analysis. One [crosstalk]--
Alison Stewart: Yes, I'm going to-- yes, last thing, and then I want to dive in.
George: One quick story. My grandson uses AI to write code. I was talking to him about it, and this was just last year, and I showed him some essays that were definitely, as one of the respondents said, hallucinations. The chatbot had not read the essay at all. It was an essay about-- oh God. Amanda Gorman tried to get a job with The Lion King, and the writing bot assumed that she was a poor Black girl trying to struggle through life. [chuckles]
Alison Stewart: Definitely not what you want to assume about Amanda Gorman. He said something interesting there about COVID. Did COVID play a role in this?
James Walsh: Well, ChatGPT went live in November of 2022, but in a way, this cheating epidemic on college campuses started during COVID. Schools saw a spike in cheating because of remote learning, essentially. Not only were students unsupervised during remote learning, they also had access to all these websites like CourseHero and Chegg, which provided on-demand answers to questions. You could just throw in a question, and Chegg guaranteed paying subscribers a thorough answer to whatever question you wanted within 30 minutes. Cases of cheating and honor code violations at schools across the country in some cases doubled or tripled during COVID.
Alison Stewart: One of our texts asked, "I'd like to know how James has seen ChatGPT and other AI tools used by journalists."
James Walsh: Sure, certainly. It is a very effective tool. I think journalists probably use it much the same way a lot of other people use it: to help synthesize, analyze, summarize. Journalists everywhere, I think, are extremely grateful to ChatGPT for interview transcriptions. That's been a huge thing for us. I certainly do not rely on it in any way for anything factual or any writing. That is all done without AI.
Alison Stewart: This says, "I have something to say about ChatGPT. I'm a student, and I've never used it, but I don't consider it cheating. I use other AI apps for help with math. A lot of people just think it gives answers, but it also gives me an explanation." That's interesting. Let's talk to Emmy. Hi, Emmy. Thanks for calling All Of It.
Emmy: Hi. I don't feel too friendly about the use of AI for student writing. I'm a college professor. I teach English, and I teach a meat-and-potatoes required course that involves a lot of writing and close textual reading. I mainly want to say that at our institution, students say again and again, when asked, that they want to sound more professional. In fact, in the fall, I want to show them paragraphs from an A paper that are not perfect, from a B paper that are good but not perfect, and then show them AI so they can see what the difference is in the writing.
AI is very general. It's rather sterile. There are a lot of traits that it has. I just also wanted to throw out a couple of tips to instructors about how to catch it. I have to sit with certain students, and-- The other thing is, excuse me, it is now suggested to us that we do a lot more in-class writing and have that count as part of the grade, and so I do that. In order to bring a case against a student, which is very time-consuming, but I do it, I give them a vocab test on writing they've submitted.
Generally, they can't define any of the words they use in their own writing, and I also have to compare it to their in-class writing. Our phrase is that students should not use it because they gain an unfair advantage, and they are very anxious about grades. They want to get into certain programs like the [crosstalk]--
Alison Stewart: Oh. You know what? I'm going to dive in there, because you brought up something really interesting: that students feel under pressure, that they feel under pressure for grades, about getting into colleges, about meeting standards. Can you talk about that a little bit?
James Walsh: Yes. I think we're long past the point where college is this ideal place somebody goes to broaden their horizons. For a long time now, we've thought of school as transactional: if I do this, then I can do that and I can earn this. Of course, that puts pressure on students. We know that high school grades correlate with income later in life, and so students are going to want to get better grades. They feel pressure to get better grades, and so if they have this tool in their back pocket and seemingly no consequences for using it, then who can blame them for using it? It means a better life. [chuckles]
Alison Stewart: Is this something we should be thinking of as a tool?
James Walsh: It is a tool. Right now, it is a tool. It is a tool that we don't know what it's capable of, and we don't know what the consequences of using it and depending on it so much might be.
Alison Stewart: Should we be teaching kids how to use it? Maybe it's a tool that needs instruction.
James Walsh: Well, of course, AI companies are certainly trying to capture younger and younger users. I think that makes me very, very nervous, and should make a lot of people very, very nervous. I think we should be introducing AI to kids, but I think there needs to be guardrails. That's really important, especially for young kids.
Alison Stewart: Steve, you got about 45 seconds. Go for it.
Steve: Okay. My colleagues and I at CUNY have been facing this for the 19 years that I've been on the job. It's just gotten worse and worse. Most recently, AI has taken the whole thing off the charts. We are increasingly depending on in-class writing. One, because if I have in my left hand a scrawled paragraph of not-very-good English and in my right hand something that looks like it was written by a rather boring PhD candidate, then I recognize the problem. The teacher knows if you're using AI.
Two, we are increasingly using in-class writing for just about everything. It's the only way you can be sure that the students are thinking, writing, struggling with syntax and grammar, and that paper gives you an indication of how you can properly help the students. Your screener asked me, does that increase my workload? I say that it decreases my workload, because I'm not trying to catch cheaters.
Alison Stewart: What is your response?
James Walsh: Well, yes, I wonder how it's going. I mean, what are you seeing on the page? Are you encouraged?
Steve: I'm encouraged to use the in-class writing more. Even students who can write, I know they can write because I have their in-class writing, even the ones who are good writers who are looking at an A are using it.
Alison Stewart: Steve, thank you so much for calling in. Anything else you wanted to add to this discussion as we wrap up?
James Walsh: I think we've covered quite a bit. It's been great.
Alison Stewart: You should read the article. It is called Everyone is Cheating Their Way Through College. It's with New York Magazine, the Intelligencer. My guest has been James Walsh. Thank you for joining us and for taking our listeners' calls.
James Walsh: Thank you, Alison.
Alison Stewart: That is All Of It for today. I'm Alison Stewart. I appreciate you listening, and I appreciate you. I will meet you back here next time, and we will have one of the stars of the film Sinners. I'll see you back here tomorrow.