https://immattersacp.org/archives/2024/10/moving-forward-with-ai-in-health-care.htm

Moving forward with AI in health care

Coauthors of an ACP policy position paper outlined their hopes and reservations for artificial intelligence in medicine, underscoring the importance of continued physician education and alignment of regulatory definitions and policies.


The applications of artificial intelligence (AI) in health care may seem boundless, given its ever-evolving nature, but serious questions have been raised related to privacy, regulation, and ethics.

To address some of these concerns, ACP laid out its positions and recommendations in a new policy paper, “Artificial Intelligence in the Provision of Health Care,” published by Annals of Internal Medicine on June 4.

Predictive artificial intelligence in health care has existed for 20 years, including machine learning algorithms, deep learning, and neural networks. Image by ink drop

Experts conducted a literature review and finalized the policies with input from ACP's Medical Informatics Committee and the Ethics, Professionalism, and Human Rights Committee, explained coauthor Nadia Daneshvar, JD, MPH, Health IT Policy Associate in ACP's Regulatory Affairs department.

“There was a lot of deliberation that went on over several months in the Medical Informatics Committee, with all the members contributing to the paper's content from the lens of the physician who is using these AI tools,” added Deepti Pandita, MD, FACP, a physician, chief medical information officer, and vice president of clinical informatics at the University of California, Irvine Health System. “We were very cognizant of the paper being very physician-centered.”

In addition to coauthoring the paper, Dr. Pandita chaired the Medical Informatics Committee when it was written. In an interview with I.M. Matters from ACP, she and Ms. Daneshvar elaborated on the promise and risks of AI in health care.

Q: What's the difference between predictive AI and generative AI, and which is expected to have the greatest impact in health care?

A: Dr. Pandita: Predictive AI has actually been around for quite some time; it's almost 20 years old now. We have had machine learning algorithms, deep learning, neural networks, all of that in place for over 20 years. In the last 10 years, a lot of work was happening around predictive modeling, predictive analytics, and using that for decision support at the point of care for physicians.

I think what changed in the last two to three years was the onset of generative AI tools. … It's escalated everything that we've known about AI and presented it in a very readable, user-friendly manner. That's why I think generative AI will have a very big impact in health care: just since it was announced, we have already seen so many use cases, whereas the use cases for predictive AI, which has been around much longer, were still somewhat limited.

Q: How will AI help physicians?

A: Dr. Pandita: For generative AI … currently a lot of the use cases are around, “How do we reduce the administrative burden that a clinician faces?” That administrative burden has been shown in a lot of studies to be a source of clinician burnout. Generative AI could be a prescription to mitigate some elements of clinician burnout, in terms of using AI assistance to answer patient messages, to triage patients, or to [schedule] patients without involving the physician in that administrative process. There's also ambient listening to create documentation, so the entire physician/patient interaction is captured through ambient listening and then a note is generated.

I think the other area which physicians are excited about, where generative AI helps, is [solving the] health equity gap, because generative AI is language agnostic for a lot of things. The entire conversation between the physician and the patient can happen in the patient's preferred language if the physician also speaks that language, but the note can be generated for record-keeping purposes in English. At the same time, the note can be translated into the patient's preferred language and reading level for them to take away instructions. You can see how that would be revolutionary: one tool can solve for all those gaps, not only in terms of health literacy but also in terms of digital literacy and the language barrier, which is a social determinant and can ultimately improve health outcomes.

Q: How should AI be regulated in health care?

A: Ms. Daneshvar: AI regulation at the moment is being handled by multiple agencies, including, for instance, the Food and Drug Administration, which evaluates AI tools or software as medical devices. Then, very recently, in the last year or two, the Office of the National Coordinator for Health Information Technology has released rules for labeling what they call “decision support interventions,” to allow the end user to ascertain appropriate or inappropriate uses.

We declined to get into the specifics of how we think it should be regulated, because it's such a nebulous area right now. But I think what we agreed on is that there should be a unified governance framework between the different federal agencies. As a part of that, it would be helpful to a lot of end users and regulated entities, including physicians, if some of the definitions [were standardized]. Because these different agencies use different definitions for different types of AI, and that's another area that I think needs to have some sort of unification so that it becomes clear and not this disjointed regulatory patchwork.

Dr. Pandita: Not only should the federal agencies be aligned in how they define AI, how they regulate AI, how they recommend managing AI going forward, but [they should] also align with state regulations, because a lot of states are creating their own AI regulations which are not similar to what federal regulations are being proposed, and this is creating a lot of confusion. Physicians don't practice in a vacuum. They have to abide by both federal and state regulations. So I think alignment as much as possible would be really beneficial.

Ms. Daneshvar: One other thing that we say in the paper, and I would add, is that it's important to get the perspectives and input of end users in the development of governance frameworks and policies. By end users, in this case, [we mean] physicians and other clinicians who are using these products.

Dr. Pandita: I would add patients to that … because they are the consumers of the AI, and currently, we don't even have good regulation around how much to disclose to patients. If a physician is using AI, how much do they tell the patient? Are they themselves even aware that AI is being used in their health care ecosystem? There's a lot of education and perceptions that need to be gathered in terms of getting the regulations right.

Q: How should physicians talk to patients about AI, especially those who might be concerned about data security or that it will raise the cost of care?

A: Dr. Pandita: I am of the school of thought that full transparency is always better, so always disclose to the patient if an AI tool is being used in the patient/physician relationship, with the caveat that the physician has to know when an AI tool is being used, which sometimes they don't. It may just be seamlessly blended into how the EHR provides care or be some software in [how] their system provides care, but I am in favor of full disclosure, and then [the] patient makes an educated decision. There should be a consent process for any tools, and there should be transparency so that even when a tool is used, the physician is always in the know, in the loop, and in agreement with how the tool is being deployed.

Ms. Daneshvar: ACP has some thoughts on the way that AI should be either disclosed or not disclosed to patients. Some of that work, in terms of ACP's views, is, I think, still going to be discussed, and maybe further policies [will be developed] in the future with the Ethics Committee. I also just wanted to note that this is an area where the industry and health care systems are currently grappling with how to do it, or how they should do it. There aren't many regulations in that area yet.

Q: What guardrails does ACP recommend to ensure that AI-assisted decisions are ethical? How can overreliance on the technology be mitigated?

A: Dr. Pandita: We call this out very clearly in the paper, and we had our ACP ethics experts weigh in on this as well. The one area we call out is that the physician should always be part of the decision making and not delegate the decision making completely to the AI tool. That is the responsibility of the physician, but also, on the flip side, physicians need to be educated that this is their responsibility. There is an educational aspect to this, a learning aspect, but also an ethical aspect.

Q: How can training on AI be incorporated into medical school curricula or continuing medical education?

A: Dr. Pandita: ACP has always been very proactive in this. At each of the last four or five annual conferences, we have had a track around AI, so we have already been thinking about this ahead of time. We've also hosted webinars through ACP, and we'll continue to do that. ACP has a very robust Medical Informatics Committee, which leads the charge on some of this AI development and will be diving deeper into the education aspect.

I think the secondary question is, “How soon do we start the training?” Because right now we have early-career, mid-career, and late-career physicians whom we are trying to get up to speed. However, I think we need to pivot and start even sooner, with medical students, residents, and fellows, training them early on, and those educational curricula are still evolving.

Q: How can AI be used in a way that doesn't perpetuate or worsen health disparities?

A: Dr. Pandita: Again, this is where AI governance is key. I think every health care system should have an AI governance structure. If you are not part of a large health care system, or … you're an independent practitioner, any tool that you're looking into, you should question how the tool was developed. … Does the model have population studies that represent the populations you're providing care for? Does the model have enough numbers to actually reflect accuracy in terms of model validation? How long has the model been studied? Is there any evidence that the model deteriorates or depreciates over time? Because a lot of models do.

Again, it ties to education. A physician may not know to ask these questions, but as we educate them, they will start asking these questions. You should never accept a model at face value. You have to be asking these questions constantly and then evaluate the model on an annual or biannual cycle to see if it's actually delivering on its promise for the populations you are serving, because models can be biased. Models that don't include the populations you are serving can be erroneous, so you have to be very careful not to go for the shiny-object mentality and just deploy these models, but rather to apply due diligence and careful consideration to all the elements I outlined.
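In practice, the kind of periodic re-evaluation Dr. Pandita describes might look something like the sketch below. It is illustrative only: the metric, cohort, function names, and threshold are assumptions for the sake of example, not recommendations from ACP's paper.

```python
# Illustrative sketch only: re-check a deployed predictive model on the local
# patient population on a recurring (e.g., annual or biannual) cycle to look
# for performance deterioration. The metric and threshold are assumptions.
from sklearn.metrics import roc_auc_score

def check_for_model_deterioration(outcomes, risk_scores, baseline_auroc, tolerance=0.05):
    """Compare current discrimination (AUROC) on a recent local cohort
    against the AUROC reported when the model was originally validated."""
    current_auroc = roc_auc_score(outcomes, risk_scores)
    return {
        "current_auroc": current_auroc,
        "baseline_auroc": baseline_auroc,
        "flag_for_review": current_auroc < baseline_auroc - tolerance,
    }

# Hypothetical use: if the flag is set, the local AI governance committee would
# revalidate, recalibrate, or retire the model.
# report = check_for_model_deterioration(local_outcomes, local_scores, baseline_auroc=0.82)
```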

Ms. Daneshvar: This is another area that's currently being grappled with in different health care systems and industries. It's not a foolproof methodology, and I think that mitigating bias and avoiding the perpetuation of health care inequities will involve a lot of testing. There's been evidence that models intended to mitigate bias related to race or other personal characteristics, even though they were designed around those sorts of things and intended to mitigate inequities, have actually contributed to inequities or still maintained inequitable [biases].

Q: What are the main takeaways from this position paper for internal medicine physicians?

A: Dr. Pandita: I think AI is going to be one solution among many for mitigating clinician burnout, improving efficiency and proficiency of our physicians, and also improving health outcomes, although that remains to be seen at this point. … More research needs to be done on that. I think that is the exciting part. The concerning part is we don't know enough about what AI does, and we don't have enough education in our physician community around the do's, don'ts, and pitfalls of AI.

Ms. Daneshvar: I believe our major takeaway is what is encapsulated in our first recommendation [in the paper], which is about the role that AI should play in health care and medicine, and that it should be a complementary tool and should not replace physicians.