A Doctor's Guide to AI Medical Advice
[music]
Brian Lehrer: Brian Lehrer on WNYC. We talked about AI and the war at the top of the show. Now we'll ask: how much can AI give you medical advice? Let's stipulate right from the start that ChatGPT, for example, should not be your doctor. Do not forgo a trip to the doctor's office because you believe ChatGPT got your diagnosis right. That's not even medical advice. It's just common sense. We'll talk to an actual doctor now who has some actual best practices: ways he says you can use AI medically, but also warnings, if you are using generative AI as a component of managing your overall health or are curious about it.
My guest is Dr. Adam Rodman, general internist and medical educator at Beth Israel Deaconess Medical Center in Boston, where he directs AI programs for the Carl J. Shapiro Center for Education and Research, and he's an assistant professor at Harvard Medical School. Dr. Rodman is both a practicing physician and an AI researcher. Maybe you saw his New York Times essay, A Doctor's Guide to Using AI for Better Health. By the way, it's not just patients. Part of the article is about how doctors should and shouldn't use AI. Yes, they are using it. Hi Dr. Rodman, welcome to WNYC.
Dr. Adam Rodman: Thank you so much for having me, Brian.
Brian Lehrer: Listeners, we have time to take a few phone calls or questions via text. How are you using AI chatbots or generative AI for managing your health? Any successes or cautionary tales, if you're a patient? If you're a physician, call and tell us a quick story or ask Dr. Rodman a question. 212-433-WNYC, 433-9692. All right, let's jump right in. Your first best practice in the article is, use AI to enhance but not replace your medical appointments. Are people using it to avoid going to the doctor at all? When is that risky?
Dr. Adam Rodman: Fantastic question. I know that there are people who are using it to avoid going to the doctor. Almost by definition, I don't know those people, because they're not coming to see me. In my world, most of my patients are using it to help prepare for their visits, get additional information, or just understand more about their health. There definitely are people out there who are using chatbots in lieu of going to the doctor, and that's one of the things that I really caution against.
Brian Lehrer: The next best practice for patients, because that one is kind of obvious, is about figuring out effective ways to describe symptoms. This one was really interesting to me. You're basically giving people advice on how to ask the most useful questions of a chatbot, right?
Dr. Adam Rodman: Exactly. There have actually been a couple of recent studies about the limitations, which are very real, when actual people use chatbots for their own health. One of the big reasons is that large language models need a lot of context. Because of that, when a doctor uses one, a doctor who has gone through many years of medical school and residency to understand patient context, it can perform better. Patients, obviously, know the most about their own health. You know more about your own health than anybody else, but you may not know what questions help differentiate different types of conditions. My advice to my patients is, if you want to use an AI system to better understand your health, and you certainly don't have to, have it interview you like a doctor. Have it ask you questions about what might be going on, and have it interrogate you. In all of its training knowledge, it has encoded how doctors ask questions.
Brian Lehrer: How do you even do that? How do you ask a chatbot to ask you a question?
Dr. Adam Rodman: This is what I would do. I would start by giving it a one- or two-line description of why you're talking to it. I might say, "I am a 41-year-old man with no real past medical history. I've been under a lot of stress lately and I have a pretty bad headache. Could you ask me some questions so that I can prepare to talk to my doctor about the headache?" That will have the model go through the way a doctor would ask you questions to collect that information. Then, at the end of that interrogation, I might ask something like, "Can you prepare a short visit summary of the top three things I should tell my doctor and the top three questions I should ask my doctor?"
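[A minimal sketch of the prompting pattern Dr. Rodman describes, for readers who want to try it programmatically. It assumes the official OpenAI Python client and the model name "gpt-4o" purely as illustrative stand-ins; any chat-capable model from any provider would work the same way.]

```python
# Sketch of the "interview me like a doctor" pattern from the interview.
# Assumes the official OpenAI Python client (pip install openai) and an
# illustrative model name; any chat-capable LLM would work similarly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Interview me the way a physician takes a history: ask one focused "
    "question at a time about my symptom, and wait for my answer before "
    "asking the next. When you have enough information, stop and produce "
    "a short visit summary: the top three things I should tell my doctor "
    "and the top three questions I should ask my doctor."
)

# One or two lines of context, with identifying details left out.
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "I am a 41-year-old man, no real past "
     "medical history. I've been under a lot of stress lately and I "
     "have a pretty bad headache."},
]

# Simple turn loop: the model asks, the user answers, until the model
# emits the summary. The stop condition is a rough heuristic for a sketch.
while True:
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    print(reply)
    if "visit summary" in reply.lower():
        break
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": input("> ")})
```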
Brian Lehrer: Really interesting. One of the warnings you give in the article is beware of sycophancy. In other words, you write the tendency of language models to try to please their users. Why is that a bad thing in a medical context?
Dr. Adam Rodman: There are two reasons that it's a bad thing. I think the first one is related to cyberchondria. Cyberchondria is a phenomenon that did not start with large language models. It's something that doctors and patients have been talking about and thinking about ever since the early days of the internet. It's this tendency: if you search for a symptom or a diagnosis, the search algorithm is going to serve you up more and more concerning things, so very quickly you can start searching for a headache and soon be reading about, "Oh, could I have brain cancer?"
The same thing can happen with language models. Language models have all these safeties built into them. If you go to ChatGPT these days and you ask it a health question, it is 100% going to caveat. It's going to tell you to talk to your doctor. It's going to have all of these warnings. But if you're worried about something, the language model is going to pick that up, and the same phenomenon will happen. You'll start talking about a headache, and very soon you'll be having a conversation about scary types of brain cancer, even though that's almost certainly not what you have. That's why, in that sense, sycophancy is bad.
The second reason is why I really like it when my patients use it to prepare. It's one of the reasons that second opinions from language models, even though they can theoretically be very good, sometimes go wrong. If you are seeking out a second opinion and putting your health information in there, there's a reason you're doing that. Maybe you're really anxious that you got a bad diagnosis, and you don't want to have that diagnosis. These models can be very good at understanding what your concerns are and trying to please you.
One of the fundamental tenets of the patient-doctor relationship, of course, is that it is a deep human relationship. It's not just about making somebody happy. It's about doing what's best for them. That nature of language models can really drive things in the wrong direction if you're not careful.
Brian Lehrer: Here's Julia in Hyattsville, Maryland, who says she uses AI in a particular medical way. Hi, Julia. You're on WNYC.
Julia: Hi, Brian. Hi, Doctor. Longtime listener, several-time caller. It's been invaluable for me in getting my A1C numbers down. What I usually do is put in my finger-prick number and also my sensor number. It's also informed me about stacking, a protein-first diet, and all. It's been very helpful to me.
Brian Lehrer: Interesting. A1C is a reflection of the sugars in your system. Diabetics keep track of their A1C, as do people concerned about their blood sugar going high. Doctor, is that a familiar story to you?
Dr. Adam Rodman: 100%. Health coaching is one of the best things these models are for. Looking at your A1C, reflecting on your diet, giving you advice, helping craft a personalized exercise program specific to you. These are the sorts of tasks I would encourage my patients to think about. I would still want to know that my patients are doing this, but these are low-harm uses with a high potential benefit from having something that can counsel you at all times.
Brian Lehrer: Here's a text. A listener writes, "This sounds like a terrible privacy quagmire. Isn't it a bad idea to feed your personal medical data into the hands of AI companies?" What do you say to that listener?
Dr. Adam Rodman: The answer is yes. I feel conflicted about this, because these models are incredibly powerful for people understanding their health, but what you put into them is your personal health information. What I tell my patients, and this is what I do for myself if I have a question, is to take out any personal identifiers before you put it in. That's one of the reasons I'm a little bit uneasy with what some of the tech companies are doing when they automatically pull from the health record. Obviously, you give permission, but they're automatically pulling in information that is identifiable.
Again, I'm not saying anyone should do this, but if you do, I would strongly encourage you to remove your personal identifiers. Now, the caveat is that in the LLM era, probably everything is potentially re-identifiable, so I understand where the listener is coming from, and I share that uneasiness.
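[A small illustration of what "take out any personal identifiers" can look like in practice. The regex patterns and the redact() helper below are hypothetical examples, not a complete de-identification tool; as Dr. Rodman notes, scrubbed text may still be re-identifiable.]

```python
# Illustrative sketch: strip obvious identifiers before pasting text into
# a chatbot. The patterns and redact() are hypothetical examples only and
# do not guarantee de-identification.
import re

PATTERNS = {
    "NAME": re.compile(r"\b(Mr\.|Ms\.|Mrs\.|Dr\.)\s+[A-Z][a-z]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Dr. Smith saw the patient on 3/14/2024. MRN: 0012345. Call 617-555-0123."
print(redact(note))
# -> "[NAME] saw the patient on [DATE]. [MRN]. Call [PHONE]."
```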
Brian Lehrer: How are doctors using AI? That's another part of your article in the Times.
Dr. Adam Rodman: Doctors are using AI all the time. Survey data suggests that roughly two-thirds of doctors use AI on at least a weekly basis. There are really two main ways doctors are doing this. The first is what are called AI scribes. Many of your listeners have probably experienced this. You sit in the room, an AI listens to you talk to your doctor, and the doctor is not staring and typing at the computer. Then afterwards, the AI scribe writes the note. That's increasingly common, especially in primary care.
The second is decision support. In the very old days, actually, when I was a medical student, we had these big books, like the Physicians' Desk Reference, that we'd go through if we had a question. For most of my career, there have been websites curated by experts, like UpToDate. Now the most popular tools, especially among younger doctors, among my residents, where they probably have 100% penetration, they all use them, are these AI tools. The most popular one is called OpenEvidence, but there are a couple. They're chatbots, and they're chatbots that search the literature for you. The odds that your doctor is using one are very, very high. They went from not existing a couple of years ago to being used by the majority of doctors in less than two years.

Brian Lehrer: Do you have warnings as an MD for your fellow MDs, as you have warnings for patients, on how not to use AI?
Dr. Adam Rodman: Oh, yes. The biggest warning I have for my fellow MDs is that if you're getting complex information from a chatbot, you need to verify it, even though it's very good and has gotten much better. It's a language model. It hallucinates. It can say things that are not true or misrepresent a paper, especially for very, very nuanced things. The bigger warning I have, especially for young doctors, is this, and it's something I see as an educator: language models are really powerful, and they actually can come up with very good recommendations.
Part of medical training is to train your brain to be a doctor, to think like a doctor. If you offload a lot of that at early stages of your career to an AI, you may never gain those abilities, and you may never be able to recognize when the AI is telling you something that doesn't make sense.
Brian Lehrer: One more call. Joe [unintelligible 00:12:02] has a story of how one of his doctors uses AI. Joe, you're on WNYC. Hello.
Joe: Hi, Brian. How are you? I was actually on a similar segment that was done one of the days you were out. My primary care doctor uses an AI app called Medical Brain to communicate with his patients. It's a specialized AI app. It keeps track of my health and keeps in touch with me about it. It checks my blood pressure and asks me for my sugar numbers.
The anecdote I told before was that one time it actually reminded me to check my blood sugars. I wasn't really good at that. I never really did it unless I felt bad. This was after a procedure, and the number came up really, really high. I ended up contacting my doctors, and we found some issues that were going on. It seems to be really helpful.
Brian Lehrer: Interesting.
Joe: It's like a doctor checking in on you every day. It can also communicate with the doctor if you send a message. It's gone beyond the one doctor now to a number of members of the team. I wonder how good that one is. As I said, it is pretty specialized.
Brian Lehrer: Joe, thank you very much. Good story. We have 20 seconds left. Dr. Rodman, anything you want to say to Joe? Also, for people who are going to do this, taking all your warnings and caveats about how and how not to use AI with respect to their health, do you have any particular model or company that you recommend?

Dr. Adam Rodman: To answer your question first, no. Any of the companies with the frontier models, so your OpenAI models, your Google models, your Anthropics, those are all fantastic models. Everything has gotten so good.
To Joe, I love that story. That is where I hope this field goes: triadic care, where AI is an extension of the relationship you have with your doctor or your care team. To me, that's my hope for where all this is going.
Brian Lehrer: Dr. Adam Rodman from Beth Israel Deaconess in Boston and Harvard Medical School. His New York Times essay had a few different titles depending on where you were clicking around; one of them was Take It From a Doctor: It's OK if Your Medical Advice Comes From A.I. But really, it's a more complicated article than that, with warnings as well as tips. Thank you for sharing it with us.
Dr. Adam Rodman: Thank you, Brian. Thank you so much for having me.
Brian Lehrer: Brian Lehrer on WNYC. More to come.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
