How AI is Changing Medicine
[MUSIC]
Amina: It's The Brian Lehrer Show on WNYC. I'm producer Amina Srna, filling in for Brian today. Good morning again, everyone. When we think of artificial intelligence, what comes to mind? Chatbots or uncanny images of humans with extra fingers? Maybe data centers and your utility bills, or dread about the future of the labor market? What about healthcare? AI is already diagnosing diseases, analyzing CT scans, and assisting in patient care, sometimes more efficiently than doctors.
This raises an uncomfortable question. If AI can do a lot of the diagnostic work, what are the physicians supposed to do? How do we teach the next generation of doctors to use these tools but not develop biases or an over-reliance on them? Medical schools are already grappling with these questions. This fall, Stanford Medicine became one of the first US medical schools to make AI training mandatory for all medical and PA students. With me now to talk about how we train doctors when the tech is changing faster than the curriculum and what this all means for patients is Dr. Lloyd Minor, dean of Stanford University School of Medicine and VP of medical affairs at Stanford University. Dr. Minor, welcome to WNYC.
Dr. Minor: Thank you, Amina. It's great to be with you today.
Amina: Doctors are already using artificial intelligence, right? Can you give us some examples of things AI is doing in medicine right now?
Dr. Minor: Sure. You, in your introduction, covered a lot of really exciting things that are going on in applications of AI in medicine today. From assisting in the reading of diagnostic studies, such as radiographs, to assisting in interpretation of pathology slides, to how we manage our electronic medical records, to how a doctor or healthcare system requests authorization from an insurance company, all these are areas today that are being impacted by AI.
It's not clear exactly how the practice of medicine is going to be different 3, 5, 10 years from now, but I think it is clear that it is going to be different. What we've tried to do at Stanford is to first educate our students and those of us who are already in practice about the use of large language models, the use of other forms of AI in order to prepare us so not only will we follow this revolution, but that we'll be able to lead it and responsibly deploy the AI.
Amina: Listeners, do we have any doctors or lab technicians already using AI in the workplace? How has it impacted your workflow or maybe patient outcomes? Call or text us at 212-433-WNYC, that's 212-433-9692. For anyone else, maybe some patients, I don't know if you've come across your doctor using AI and in what ways, you can help us report this story, and we can take your questions and comments on the use of AI in medicine for our guest, Dr. Lloyd Minor, dean of Stanford University School of Medicine. Dr. Minor, what is AI actually doing in cancer research and treatment right now? Maybe stuff that wasn't possible even a few years ago.
Dr. Minor: Sure. I think cancer is one of the most exciting areas where the deployment of AI is already beginning to have impact and will have even greater impact in the future. Let me mention one example that we're pioneering here at Stanford. A patient with a complicated cancer, like colon cancer or ovarian cancer or pancreatic cancer, their care will be coordinated by surgeons, medical oncologists, radiation oncologists. There will oftentimes be nutritionists involved. Typically, all of those providers, those experts, will meet in what's called a tumor board, and they'll discuss the patient's findings, and they'll discuss the treatment plan.
Now, we're starting to enable those tumor boards with an analysis through AI of the patient, their imaging studies, their other medical conditions, so that the AI is certainly not making a decision about what treatment they'll receive, but the participants, the doctors who are a part of that tumor board are getting information from AI that they may not have seen themselves before or relationships between the therapy that a patient might receive and what we know about other patients who've received similar therapies, relationships that they might not be aware of. It's in its early stages, but already today we're seeing the power of AI in impacting, making sure that the right patient gets the right treatment at the right time.
Amina: What about clinical trials? How's AI changing that process?
Dr. Minor: One way that AI is changing clinical trials is making all of us aware of when our patients are eligible for a clinical trial. Now, in our electronic patient record, the listing of all of our clinical trials is running in the background, and the person taking care of the patient will receive a notification from the electronic health record saying, "Your patient may be eligible for this trial. Click here, and you'll learn more about the trial."
There are literally hundreds of trials that are open at any one time, and no expert, no faculty member, no resident, fellow, or student can keep all of those in their head at one time. This makes sure that providers have information about a trial a patient may be eligible for that they might not have considered before.
Amina: You write that better management of early-stage diabetes could reduce all-cause mortality in patients by nearly 20%. What about chronic diseases like diabetes or heart disease? How's AI being used there?
Dr. Minor: I think AI is being used in several ways in the management of chronic diseases. As you said, these chronic diseases are a major cause of both morbidity and mortality in the United States, as well as a major driver of cost in the healthcare system. One of the most serious problems in the treatment and the management of chronic diseases is the coordination of the various different components in the care. For a patient with diabetes, for example, diet plays a major role. There may also be a role for medications, particularly the newer medications like the GLP-1 agonists.
How do you bring all that together to coordinate a treatment plan for the patient? How do you make sure that patients who need home healthcare assistance are actually getting that assistance? What AI can and is doing is coordinating the various different components in a patient's care delivery plan so that less of that has to be orchestrated by social workers, by home health nurses. More of the coordination is being achieved through the deployment of AI.
I also think one of the most exciting things about AI and its use in medicine today is it's taking a lot of the administrative burden off of healthcare providers, doing a lot of the things that would have required human effort before. Now those are being done through technology, through AI, and we as providers can focus on what we've always wanted to do and what we should do, and that is interacting with patients and providing the very best care one-on-one with patients and their families, rather than having to spend so much time in the administrative activities where we're not really interacting with patients.
Amina: Are these new practices being used by everyday doctors yet? Can you explain what you mean by the know-do gap? Are we seeing the same lag with AI?
Dr. Minor: I think the deployment of AI, it's varying-- First of all, as you mentioned in the introduction, this is evolving so rapidly. Large language models just became available to the public in, what, November of 2022. It's an incredibly short period of time for a fundamentally new technology. I think probably academic centers, major centers are deploying AI first. One of the things we've been doing over these past several years is working with Alice Walton and colleagues in Northwest Arkansas.
I grew up in Little Rock, Arkansas, and I've had the privilege of working with Alice and her colleagues over the past three years, and starting a new medical school in Northwest Arkansas, the Alice Walton School of Medicine. We have 48 students enrolled in the first class, an amazing leadership team there, led by Dean Sharmila Makhija. We're sharing all the developments we're having in AI and medical education here at Stanford. We're sharing those with our colleagues in Northwest Arkansas. There's a big role for us in academic medicine to make sure we're reaching out to other centers and sharing what we're developing and getting their input in that development.
Amina: Let's go to a caller. Joe in Rahway, New Jersey, is, I believe, a patient calling in who uses an AI app. Joe, you're on WNYC.
Joe: Hi, Amina. Thank you. Yes, I am a patient, actually. More than a year ago, I think, my primary care doctor started using this app called Medical Brain, which I guess is an AI interaction app for patients, and he swears by it. It keeps in contact with me, sometimes daily, checking on my health, asking me for information. In the morning, sometimes it'll say, "How are you feeling today? Any changes in your health?" It asks me for blood sugar since I'm a diabetic and blood pressure since I'm a cardiac patient.
As I mentioned to the screener, I guess one interesting thing was after major surgery, in which they had discontinued the GLP-1 I was on, it asked me to take my blood sugar, which I wasn't really keeping great track of. When I took it, it was actually very high. It prompted me to talk to my GP and say, "My sugar's running high. Should we do anything?" It ended up really resulting in me changing medications because the AI had prompted me to go ahead and do that.
Like I said, it will keep track of my daily health. Most recently, since I am a cardiac patient, I have a Bluetooth-connected blood pressure cuff, and it feeds the numbers to the AI app. Again, it tells me if I'm running high or running low. It reported, I think yesterday, my averages over time. When I went to my cardiologist yesterday, I actually could show him that information. It's really neat. Like I said, my primary care doctor was a real proponent of it at the beginning, and he's part of a major health conglomerate. I've talked to my other doctors, my cardiologist being one, if they were aware of it and they weren't. I don't know if it's throughout their system or he just picked up on it and decided to use it.
Amina: Joe, it's called Medical Brain?
Joe: Medical Brain, yes.
Amina: All right. Thank you so much for your call, Joe. Dr. Minor, are you familiar with that app?
Dr. Minor: I'm not familiar with that specific app, but Joe, thank you so much for sharing that with us. I think what you've described to us is a great example of the power of the deployment of digital technologies in improving healthcare. Every time we get on an airplane, the jet engines of that plane are being monitored hundreds or thousands of times a minute, relaying information back down to the ground on how the engines are doing and when service is needed. There's constant monitoring of the performance of those engines.
We're not that far away from having similar opportunities for the performance of our bodies through wearables. Like you were saying, Joe, where it prompts you to check your blood sugar. It then tells you what that blood sugar level is, and then you also can interact with healthcare professionals. It's just assisting in making sure that the things that need to be monitored are monitored, and when they are elevated or not high enough, that you get a warning, and then also the opportunity to discuss it with your primary care doctor. It makes the process of care much more effective and efficient, and that's a really good thing.
Amina: Here's maybe the flip side to that really good experience. A listener writes, "We used to Google our symptoms. Now we're using AI to diagnose them. How does that change your work when a patient walks in convinced that a chatbot has already found the answer?"
Dr. Minor: It's a great question. I'm an otolaryngologist, that word that none of us can pronounce, including those of us in the field. I'm an ENT doctor by training, and I focus on inner ear balance disorders. I described an inner ear disorder earlier in my career. This was in the late 1990s. For years afterwards, the major source of referrals with this syndrome came from patients who had simply entered their symptoms into a search and been prompted to a website, ours or someone else's, describing this syndrome, and then they said, "I think I have this." Then they went to their doctor, and they got referred to us or some other place. We've had these assistance mechanisms for diagnoses for a long time.
I think AI will take it to a whole new level. There were people appropriately concerned about "Dr. Google," but I think overall, the more information that patients can have about their health, the better it is. Always, always keeping in mind that AI is still evolving, there's still hallucinations, and you should always consult a medical professional based upon what you learn from the AI and share what you learn from the AI with them so that they can interpret it and also work with you to make sure that what you're learning from the AI is actually correct.
Amina: A listener, Ethan in Los Angeles, texts, "What potential is there for AI to reduce overall healthcare costs, or will this simply expand the scope of services and expenses? I work in tech and have been very excited about LLM and AI," LLM stands for large language model, "most especially in its medical applications, where the amount of studies and information is too vast for human ingestion. I'm excited to hear what your guest is talking about today." Dr. Minor, do you have a take on using AI to reduce overall healthcare costs?
Dr. Minor: First, Ethan, thank you for your observations and for your question. I believe that in the intermediate, and certainly in the long run, that AI is going to dramatically improve the efficiency and the effectiveness of care and therefore reduce cost. Now, how quickly we get to that remains to be determined. I think we're already seeing examples of efficiencies that are reducing costs.
Roughly 20% of the cost of healthcare in the United States is spent on administering the system. That 20% has nothing to do with actually the delivery of care. It's the administrative aspects of our care delivery system. We mentioned before prior authorization, so requesting approval for an insurance company to have a procedure or to have a particular lab test. That now is much more automated than it was before. There still are components of it that require human interactions, but that's an efficiency.
We talked before with Joe about care coordination. If we do a better job of coordinating care, particularly for people with chronic medical conditions, that will keep them healthier, that's a good thing, and reduce the need for extensive medical services. Both are good things, keeping people healthy and reducing the need for services because they are healthy. Now, it takes a while to roll this out and get it to the level to which we think it will actually improve both quality of care and lower cost, but I'm optimistic that it will.
I think, and Ethan mentioned large language models, already for me as a physician, as an academic leader, the use of curated large language models, that is, large language models trained with the medical literature, has really changed the way we get information on a daily basis. Yes, we still have to be skeptical about what we're seeing, but in curated large language models where we can actually click and go to the primary reference and see the source that was used by the large language model in order to give us that information, that makes validation and verification much more accessible for those of us who are users of large language models.
Amina: Here's another text. "I'd like to know what this technology is trained on. Given how healthcare is already generally biased against women's and particularly Black women's pain, for instance, combined with texts notoriously biased toward white men, because that's who's generally creating it, it's hard not to be concerned that AI in healthcare may have a lot of built-in biases." Dr. Minor, do you address that in your recent article?
Dr. Minor: It's a really important point. We know, for example, that clinical trials have not always been representative of the population of patients who may receive a particular therapy. I think AI actually is helping us address those biases because when it's properly trained and when people are using it appropriately, the AI actually points out deficiencies in the sources of information that are leading to a conclusion, and also can help us design studies in order to get to the correct conclusions.
For example, we have physicians, faculty members in our emergency department, working precisely on the topic of pain, chest pain in particular, looking at how pain reported by people in different demographic groups may be associated with different underlying pathologies, so that rather than just a pain score for everybody, we can adjust the pain score based upon what we know from the data. That would have been more difficult to do before we had the data science and analytic methods that we have today.
I do believe that properly deployed large language models and generative AI will help us to address some of the underlying biases that have come about because the way we've derived our information, as the listener correctly pointed out, it's not always been reflective of broad demographic groups.
Amina: Let's go to another call. Here's Catherine in Manhattan with a question. Hi, Catherine. You're on WNYC.
Catherine: [inaudible 00:20:32] my call. I'm really interested in the high amount of data that AI is able to capture in discovering cancers and other sicknesses. I wonder if the same AI data has been used by lawmakers and healthcare advocates, things like the EPA, in trying to identify the sources of some of these cancerous pollutants and identify and create regulations, and maybe even lawsuits against the polluters. Thank you for considering this question.
Amina: Thank you so much for your call, Catherine. Dr. Minor, I don't know how much you look into what federal agencies are necessarily doing, but maybe you want to talk about even just the pathology of AI and how it's used to study some of the causes and mechanisms of diseases?
Dr. Minor: Yes. I think it's a great question. We have a Center for Human and Planetary Health here at Stanford that's a collaborative endeavor between the medical school, Stanford Medical School, and our Stanford Doerr School of Sustainability. One of the things that scientists involved in that center are doing is to analyze data related to air quality and looking, at a very detailed level, at incidences of cancer in particular areas based upon what we know air quality is, in order to be able to derive relationships that may provide us with insights into the cause of spikes in cancer in particular areas. That's certainly going on.
Amina, to your question about how AI is being used in pathology, that's one of the areas I'm most excited about because we've already seen great advances in radiology, for example, from AI. That's been enabled because we haven't actually used printouts of radiographs now for almost a decade. When you have a chest X-ray, a CT scan, or an MRI, those images are digital from the time they're taken. It's been easier to get the data in and train the models.
Pathology slides are still slides, but now we can digitize those slides, and from that, develop methods of analyzing over large, large data sets, much larger than any human can possibly see in one lifetime, and help pathologists to make the most accurate diagnosis. We're already seeing improvements in diagnostic accuracy from those technologies.
Amina: When you have these powerful diagnostic tools, what is the main concern? Can you talk about maybe cognitive de-skilling and what that means for doctors and their patients?
Dr. Minor: Yes. It's a great question. We've already seen some evidence of that. There have been studies published, for example, in colonoscopy, where a gastroenterologist is looking at the video during an endoscopy procedure and determining, "Oh, is this a polyp? Is this an early-stage cancer?" Now, those images are also being analyzed by AI, and what we've seen in some early evidence is that specialists who rely solely on the AI maybe aren't quite as accurate as they were before they had the benefit of AI.
I don't think that means we should stop using AI. I think it means that we have to be very careful in the way we use it to make sure that we're keeping our skills up as practitioners. De-skilling, we've seen it in other aspects, other medical professions, for example, that used to rely much more on physical diagnosis, listening to heart sounds, for example, or a really, really detailed neurological exam. Now that we have imaging studies, in the case of the heart, we have echocardiography, other imaging for neurology, we have MRI scans, we're able to more accurately diagnose pathologies and plan treatments from the results of an imaging study than from a physical exam alone.
The way we train our students is being adjusted somewhat, given that reality. We have to make sure that we don't lose those essential physical diagnoses, those human cognitive skills that have gotten us to where we are today, in the process of adopting what are undoubtedly helpful technological advances.
Amina: Let's go to another call. James in Wayne, New Jersey, you're on WNYC.
James: Hello. Thanks for the call. I'm very happy to hear about some of the advances that medicine is being able to utilize AI for. I was a little concerned when you talked about the navigation capacity for AI. I had a patient, for example, gunshot wound both hands, and he needed to go to rehab, needed Medicaid, but Medicaid doesn't send you directly to rehab unless you cannot walk.
This gentleman had to go out of the hospital, miss out on timely rehab, while we danced around a system that wouldn't accommodate that. I'm afraid that algorithm-based responses are going to be built into the AI systems that are just as complicated, where a patient will be sitting, select 1, select 2, select 3, and will not get the nuanced care or nuanced attention that a human might provide.
Amina: James, you're a practicing medical professional?
James: I'm an emergency physician.
Amina: Emergency physician. Thank you so much for your call, doctor. Dr. Minor, I would love for you to respond directly to James' comment, but this was also a big concern just about this time last year, when physicians were raising flags about AI and how it was increasing prior authorization denials. How would you like to respond to that comment?
Dr. Minor: James, thank you for bringing this to everyone's attention. We've seen it before in deployments of new technologies. It gets back to the important point that we always have to put the patient first and the interest of the patient first. Technology should not get in the way of making the right decision for the patient at the right time.
I certainly am aware that we have seen changes in the way prior authorizations are reviewed, changes in the way they're prepared, and changes in the way approvals, as you noted for this patient, are granted. I think we have to be sure that we point out those deficiencies when they occur, and then we have to be vigilant at making sure that we, as a healthcare system within our society, are really focused on the well-being of our patients, and that technology is helping us do that and not preventing us from doing that.
Amina: You described an "AI arms race" between insurance companies and doctors, insurers using AI to deny coverage, as we talked about, and doctors fighting back. What does that look like in practical terms? Have you encountered that?
Dr. Minor: I think we've encountered some of it. I actually think it's gotten better as both the insurers and doctors and health systems are using AI. I think that now, if you will, in the arms race, we've reached an equal plateau level. I think we're seeing fewer and fewer denials that we would consider to be, as James pointed out, arbitrary or capricious. It's something that we absolutely have to continue to be vigilant about.
Amina: As we mentioned at the start of this segment, Stanford decided to make AI training mandatory. What does it actually look like? What are the students learning to do with these tools?
Dr. Minor: It's a series of modules that we have for our students, as well as interactions with faculty. I think the big use of AI training for our students and for all of us as practitioners is how do we plan the prompts that we're using with the large language model so that we get the most we can out of that large language model? It's different. It's different than doing a simple search with Google or another search engine.
You can get a lot better, more focused information if you plan your prompts. I should also mention that all of this is available to the public. It's not behind a firewall. If you just go to Stanford School of Medicine, AI in Medical Education, with any search, you'll come to our website, and all the modules are available for the public to see and use.
Amina: That's where we leave it for today. My guest was Dr. Lloyd Minor, dean of Stanford University School of Medicine and VP of medical affairs at Stanford University. Dr. Minor, thank you so much for your time today.
Dr. Minor: Thank you.
Copyright © 2026 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.
