ACP outlines recommended policies for use of AI in medicine
A new policy position paper from the College offers recommendations on the ethical, scientific, and clinical components of artificial intelligence (AI) use and stresses that AI tools and systems should enhance human intelligence, not supplant it.
Artificial intelligence technologies should complement clinician logic and decision making, not supplant them, ACP said in a new policy position paper.
The paper, "Artificial Intelligence in the Provision of Health Care," was approved by the Board of Regents on Feb. 20 and published June 4 by Annals of Internal Medicine.
The development, testing, and use of AI in health care must be aligned with principles of medical ethics and should enhance patient care, clinical decision making, the patient-physician relationship, and health care equity and justice, ACP said in the paper. The College reaffirmed its call for transparency in the development, testing, and use of AI for patient care to promote trust in the patient-physician relationship.
ACP recommended that patients, physicians, and other clinicians be made aware, whenever possible, if AI tools are likely being used in medical treatment and decision making. The College reaffirmed that AI developers, implementers, and researchers should prioritize the privacy and confidentiality of patient and clinician data in AI models. In addition, ACP recommended that clinical safety and effectiveness, as well as health equity, be top priorities for developers, implementers, researchers, and regulators of AI-enabled medical technology, and that the use of AI in health care involve a continuous improvement process that includes a feedback mechanism. "This necessarily includes end-user testing in diverse real-world clinical contexts, using real patient demographics, and peer-reviewed research. Special attention must be given to known and evolving risks that are associated with the use of AI in medicine," the paper said.
ACP also reaffirmed that the use of AI and other emerging technologies in health care should reduce rather than exacerbate disparities in health and health care. To meet this goal, data used to develop AI models should include diverse populations. Congress, HHS, and other key entities should support and invest in research and analysis of AI data to identify any disparate or discriminatory effects. In addition, ACP recommended multisector collaborations among the federal government, industry, nonprofit organizations, academia, and others to prioritize mitigation of biases in algorithmic technology.
AI developers must be accountable for the performance of their models, and a coordinated federal AI strategy with a unified governance framework is needed, ACP recommended. AI tools should always be designed to reduce clinicians' burdens in support of patient care, ACP said, and training in AI should be provided at all levels of medical education. Finally, ACP recommended that the environmental impacts of AI, and their mitigation, be studied and considered.
"The expansion of AI and [machine learning] technologies in health care systems means that physicians are encountering new tools that they were not previously aware of or do not yet fully understand," the authors concluded. "To ensure maximum benefit and minimum harm to patients from these new technologies, and to ensure that they are used in alignment with the ethical responsibilities of physicians and the medical profession, more guidance, regulatory oversight, research, and education are needed for physicians, other clinicians, and health care systems."