While generative artificial intelligence could potentially revolutionize the practice of medicine, its role must be carefully assessed, with humans, especially physicians, in firm control. Image by Toowongsa

Is AI OK for IM?

While generative artificial intelligence systems such as ChatGPT could potentially revolutionize the practice of medicine, their role must be carefully assessed, with humans, especially physicians, in firm control of interpreting output.

Artificial intelligence, or AI, made its most visible mark on the world on Nov. 30, 2022, when ChatGPT was released publicly. Now called "generative AI" for its ability not just to search for information but to synthesize output in a human-like way, the technology has found application in numerous fields.

Early adopters in internal medicine have used generative AI in many aspects of daily practice, from routine tasks, such as coding and charting, to more delicate ones, such as generating scripts to follow when talking to patients. But while generative AI could potentially revolutionize the practice of medicine, its role must be carefully assessed, with humans, especially physicians, in firm control, experts said. Potential dangers include systems that generate incomplete or erroneous information or even make up information that doesn't exist.

Perhaps the most promising potential use of AI is in removing some of the busywork from medicine. Eric Topol, MD, MACP, author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," said that generative AI used properly could eventually offer physicians "keyboard liberation."

"When we see patients, we're data clerks. … as the patient's talking, and then even after the visit, there's lots of keyboard work that has to be done," said Dr. Topol, who will address the issue as part of his keynote speech, "How A.I. Will Change Medicine," at ACP's Internal Medicine Meeting 2024 in April. "[AI allows] automation of lots of things … and that's exciting."

'Dr. Google' revisited

In a sense, ChatGPT is just another step on the ladder from the days when patients brought in clipped magazine or newspaper articles describing something that might be relevant to their health, according to David A. Asch, MD, a general internal medicine physician who's experimented with AI in his professional life.

Eventually, they'd do the same from the internet, which led to the now-common phenomenon known as "Dr. Google." "I imagine people are asking ChatGPT things like this and bringing them in to show their doctors," Dr. Asch said.

In these cases, though, physicians might notice that the information patients are finding is much more sophisticated, said Dr. Topol, who is executive VP and professor of molecular medicine at Scripps Research and a cardiologist at the Scripps Clinic in La Jolla, Calif. ChatGPT can take the results of a search, integrate them with all the relevant citations from resources such as PubMed, and deliver a synthesized report in human-like language.

"It's no longer a search of one-off information," Dr. Topol said. "Now you're getting very specific to the individual. Let's say it could be results of tests, symptoms, exposures … it can handle all these things. … It's a conversation, it's not a one-time search."

Large language models (LLMs), which are algorithms trained on large sets of data to predict what words follow each other in a human-like way, can also help clinicians think outside the box, Dr. Topol said. He cited examples of a boy who had seen 17 doctors in three years and was correctly diagnosed with tethered cord syndrome after his mother typed his symptoms and MRI notes into ChatGPT, and a patient thought to have postacute sequelae of SARS-CoV-2 infection whose true diagnosis of limbic encephalitis was suggested by ChatGPT after her sister entered her symptoms and lab results.

"LLMs are capable of giving pretty good second opinions in these really, really complex cases, such that an experienced internist can get something out of it," said Adam Rodman, MD, MPH, FACP, a general internal medicine physician at Beth Israel Deaconess Medical Center, researcher, and expert in medical decision making. "Certainly not as a first run, and not a replacement for human cognition, but to make sure that you think about more things, to make sure you're not missing anything."

AI can't take the next steps of determining what drug regimen a patient needs or which test is appropriate, Dr. Rodman added. "This nuance that gets missed a lot is management decisions," he said. "Data has suggested that these general-purpose models like [GPT-4, the latest iteration] are not able to make management decisions that are safe."

Dr. Rodman also cautioned that ChatGPT is not HIPAA compliant, so physicians should never put patients' protected health information into it or similar systems.

"I don't think that's ethical," he said. "Also, we don't know what [any generative AI model] does with that information. But still, it's powerful in the sense that, and I use it this way, when I have a tricky case, I give it my problem representation." He "prompts" the AI by setting up basic assumptions and providing information about the expected outcome beforehand and asks the AI to tell him what else might be part of the differential diagnosis, to make sure he's not missing anything.

ACP Member Jason Hom, MD, a clinical associate professor of medicine at Stanford University in California, stressed that physicians should not automatically rely on generative AI and should always make the final judgment call for medical-legal and ethical reasons.

"It's not that AI is going to replace physicians, it's just going to hopefully streamline the workflows and augment their capabilities," Dr. Hom said. "The goal actually would be for physicians … to spend less time on documentation and less time in the EHR and more time with their patients."

Dr. Topol added, "I see it as a helper function, but it has to be under a careful human in the loop. A patient is not getting any treatment for some diagnosis until it's been reviewed by the physician who's orchestrating or prescribing the treatment."

Another danger with AI is its tendency to "hallucinate," or make things up, when it doesn't know the answer.

"[LLMs have] just been told to predict the next word," said Neil B. Mehta, MBBS, FACP, a staff physician at the Cleveland Clinic and director of its Center for Technology-Enhanced Knowledge and Instruction, which develops and hosts online learning. The resulting word will likely sound right, because of the large amount of data that the AI has been trained on, but the overall meaning can get lost in the process.

When an LLM encounters something that it doesn't have an answer to, it simply fills in the blanks with what it considers most likely. This can lead to wholesale invention, such as citing a supposedly relevant but fictional peer-reviewed paper, Dr. Mehta said.
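The mechanism Dr. Mehta describes can be illustrated with a deliberately crude sketch. The toy model below is not how a real LLM works (real systems use neural networks over enormous corpora, not word-frequency tables), and the training text is invented for the demo; but it shows the core point: a next-word predictor always emits its most likely continuation, fluently, with no internal check on whether the resulting statement is true.

```python
# Toy illustration (NOT a real LLM): a next-word predictor that always
# emits the most frequently seen continuation, whether or not the
# resulting sentence corresponds to anything real. Training text is
# invented purely for this demo.
from collections import Counter, defaultdict

training_text = (
    "the study was published in a peer reviewed journal . "
    "the study was published in a peer reviewed journal . "
    "the trial was published in a medical journal ."
)

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def generate(start, n=8):
    """Greedily append the most frequent next word, n times."""
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# The model fluently completes the sentence in its most likely pattern,
# even though no actual study or journal is being cited.
print(generate("the"))
```

The fluency comes entirely from pattern frequency, which is why a hallucinated citation can read exactly like a real one.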

Scuttling scut work

AI's least glamorous possibilities may offer the most promise for many physicians.

"Physicians spend a lot of time in the electronic health record," said Dr. Hom, who is also chair of Stanford's health information management committee. "Depending on what study you look at, some physicians may actually spend more time in the EHR than actually directly interacting with patients and their families. There's definitely an opportunity to leverage technology to shift that balance, which ultimately should be better for patient care, and probably better for physician job satisfaction as well."

He noted that AI could go a step further than medical scribes currently do, performing higher-level documentation tasks such as organizing and summarizing information in a sophisticated fashion. Again, he said, it's crucial for physicians to have final review of anything that AI generates.

Dr. Asch doesn't particularly specialize in AI, but he created a buzz when he interviewed ChatGPT, asking it to evaluate its own value to medicine. The tongue-in-cheek piece appeared in NEJM Catalyst and went viral on social media.

"It was mostly to entertain myself. It was meant to be a little glib," Dr. Asch said. While he admits a lack of in-depth expertise, he's using AI in an everyday way.

"I'll probably use ChatGPT later this morning, for some things that are not too clinical. I'm actually going to see if it'll write a first draft of a letter of support I'm writing for someone, and then I'll edit it from there just to see if it'll speed up my task," he said.

Dr. Mehta said many of his colleagues have used AI to generate preauthorization letters.

"It's a pain to write these things," he said. "And you can just say [to ChatGPT], 'Write a letter saying a person has been on these two agents, got a side effect with this third agent, and [needs this new agent].' You don't put the person's name. And then when [AI] generates the letter, you download it and then add the protected identifiers in there."
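The de-identified workflow Dr. Mehta describes can be sketched as a simple prompt template. The function and drug names below are invented for illustration; the only point is that clinical facts go into the prompt while identifiers stay out until the physician edits the downloaded draft.

```python
# Hypothetical prompt builder for the prior-authorization workflow
# described above. Clinical facts go in; patient identifiers are added
# by the physician only after the draft comes back.
def build_preauth_prompt(tried, side_effect_drug, side_effect, requested):
    """Build a de-identified prompt from clinical facts only --
    never a name, date of birth, or medical record number."""
    tried_list = " and ".join(tried)
    return (
        f"Write a prior-authorization letter for a patient who has been on "
        f"{tried_list} without adequate response, developed {side_effect} "
        f"on {side_effect_drug}, and now needs {requested}. "
        f"Do not include any patient identifiers."
    )

prompt = build_preauth_prompt(
    tried=["drug A", "drug B"],   # invented examples, not real agents
    side_effect_drug="drug C",
    side_effect="a rash",
    requested="drug D",
)
print(prompt)
```

Keeping the identifier step outside the prompt is what lets this workflow sidestep the HIPAA concern Dr. Rodman raises, since no protected health information ever reaches the model.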

Such tasks are generative AI's low-hanging fruit, Dr. Mehta said, and are accessible to any physician regardless of AI experience or practice type. He also speculated that generative AI could eventually be used for tasks like responding to patient messages, assuming that in the future privacy concerns can be adequately addressed.

"The patient says, 'I have this, this, and this; what should I do?' Just like [AI] does a differential diagnosis, it can respond to the patient message, generate a draft, and then the physician can edit it, and review it, and then send it off."

Dr. Asch was initially skeptical of these types of potential future uses, since the chatbots he was accustomed to in customer service settings weren't reliable, especially considering the high stakes of health care. "Then I played around and I was completely gobsmacked at how fluent it is, and how nicely it writes," he said. "Then I thought to myself, 'Well, gee, I have to change my opinion. This is actually fairly good.'"

Guidelines, guardrails in development

ACP has not yet released any official positions or statements on AI but is currently working on policies related to its use in the provision of health care via its committees. Annals of Internal Medicine, published by ACP, requires authors to attest at manuscript submission whether and how they used AI-assisted technologies such as LLMs, chatbots, or image creators in the production of submitted work. Annals also notes that chatbots should not be listed as authors because they cannot be responsible for a work's accuracy, integrity, and originality.

In November 2023, the American Medical Association (AMA) released its principles on AI development, deployment, and use. Key concepts include oversight, transparency, and disclosure and documentation. For generative AI, the AMA called on health care organizations to develop policies that anticipate and minimize negative impacts associated with its use.

The federal government has also stepped in, with the Biden Administration issuing an executive order in October 2023 on the safe, secure, and trustworthy development and use of AI. Among other topics, the executive order outlines steps to "help ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors." In addition, in December 2023, the European Union agreed to the A.I. Act to help guard against misuse of the technology. The New York Times reported that its provisions include requirements that ChatGPT and similar systems be transparent and that AI-generated images be disclosed.

Andrew Bindman, MD, FACP, who sits on the steering committee of the Artificial Intelligence Code of Conduct (AICC) project, a three-year National Academy of Medicine initiative aimed at defining equitable, responsible development and use of AI in health care and research, said the challenge is not how to perfect the technology but how to harness it to achieve medicine's goals.

"To me, our goals are still defined by the dimensions of quality that were called out in the Institute of Medicine report ['Crossing the Quality Chasm'] many years ago: How do we ensure that care is patient-centered, effective, efficient, timely, safe, and equitable?" said Dr. Bindman, chief medical officer for Kaiser Permanente in California. "There's a lot of energy around the tools, but the whole purpose of the tools is to serve those goals. It's my hope that as we develop more understanding of these tools, and think about the applications of them, that we're always doing it in service to these principles."

Dr. Topol imagines a day when generative AI can use all of a patient's data, including their genome, their environmental data, their air pollution exposures, their housing location, and their socioeconomic characteristics, to help a doctor make a diagnosis or prevent other health conditions.

"We're getting to a point where diagnostic accuracy in the times ahead could be improved because of being able to assimilate, integrate, process all this data with the medical literature up to the moment," he said.

Dr. Rodman noted that AI could help to augment internal medicine physicians' role as diagnosticians.

"In the near term, you will be that expert diagnostician, you will walk into the room, and you'll have an ambient AI that is listening, and it will not replace you. What it will do is say … 'Have you thought to consider that maybe they have eosinophilic pneumonia? And you should ask them if they … have other inhaled exposures because that's on the differential,'" he said. "All of these tools are going to be designed to support systems that will make individuals perform better."

Dr. Hom added that physicians who don't begin engaging with AI now may be at a disadvantage in coming years, as systems improve and the technology becomes more secure and further integrated into actual clinical workflows.

Dr. Mehta summed up his best advice for internal medicine physicians who may still be on the fence about generative AI. "They need to know it's coming, it is here, and if they are skeptical, … I think they need to wake up," he said. "Start asking people. Start learning about it."