07/11/2024

Artificial Intelligence: Emerging Issues in Healthcare and Insurance Public Policy

 


Provided by OSMA's exclusively endorsed partner for medical liability insurance, The Doctors Company

Remi Stone, JD, Director, Government Relations, The Doctors Company


 

The use of artificial intelligence (AI) in the healthcare setting and its effects on professional liability insurance are rapidly evolving. As AI technology advances and becomes more integrated into healthcare, the legislative and regulatory framework is just starting to catch up—as is always the case with new and emerging technologies.

Across the nation, public policymakers are attempting to balance the integration of AI in the healthcare environment with safeguarding positive patient outcomes.

In addition, matters relating to liability remain unsettled. The distribution of liability will likely shift as device manufacturers, algorithm developers, facility leaders, and other parties involved in healthcare choose to integrate AI into more diagnostic and treatment tools.

Causes of action involving professional, vicarious, and product liability may become more common in the court systems. While we have not yet seen legislation enacted that specifically assigns AI liability or targets professional liability tort statutes, we anticipate a deluge of bills that attempt to add clarity to this unsettled legal and public policy topic.

Nevertheless, legislation addressing traditional liability concerns increasingly touches on AI issues. AI-related proposals have addressed matters that include patient informed consent, scope of practice, admission decisions, and care plan development, as well as those focusing on more common issues involving data and patient privacy.

Some state legislatures have put forward proposals, such as banning healthcare insurers from using AI to discriminate on the basis of a patient’s race, gender, national origin, age, or disability. Other states are attempting to regulate the use of AI in diagnosing and treating patients.

For example, patient consent and scope of practice are central to legislation pending before the Illinois legislature (HB 1002). One proposal would require that patients be informed and provide consent before a diagnostic algorithm is used, with patients having the option of being diagnosed without it. Any algorithm used must be certified by the state’s Department of Public Health and Department of Innovation and Technology and must be known to achieve results as accurate as, or more accurate than, other diagnostic means. In addition, lawmakers are considering legislation (HB 3338) that touches on scope of practice by proposing that healthcare facilities be prohibited from substituting health information technologies or algorithms for a nurse’s human judgment.

The public policy discussion at the federal level is also accelerating. In October 2023, President Biden issued Executive Order 14110 to promote responsible development of AI in the healthcare arena. The executive order and subsequently announced public-private partnerships with providers and payers focus on the “safe, secure, and trustworthy purchase and use of AI in healthcare.”1

In February 2024, lawmakers formed the Congressional Digital Health Caucus with the goals of educating policymakers about the rapid changes in digital health innovation, showing its potential effects on patients and the healthcare system, and ensuring that all Americans benefit from advancements.

On the international front, the European Union (EU) Parliament adopted the Artificial Intelligence Act in March 2024. The act—the first of its kind in the world—takes a comprehensive approach to regulating AI, including its use in healthcare, across all 27 EU member countries. This comes on the heels of the 2022 EU report, Artificial Intelligence in Healthcare, which discussed the clinical, social, and ethical risks posed by AI in healthcare.2 The report is expected to serve as a roadmap for regulating AI in healthcare as the EU builds out the Artificial Intelligence Act’s regulatory framework.

Closer to home, the American Medical Association (AMA) provides a succinct view on physician liability, patient safety, and risk management in Principles for Augmented Intelligence Development, Deployment, and Use.

As the AMA writes, “The question of physician liability for use of AI-enabled technologies presents novel and complex legal questions and potentially poses risks to the successful clinical integration of AI-enabled technologies.”3 The AMA further outlines an approach to addressing liability concerns raised by the deployment of AI in the healthcare arena:

- Current AMA policy states that liability and incentives should be aligned so that the individual(s) or entity(ies) best positioned to know the AI system risks and best positioned to avert or mitigate harm do so through design, development, validation, and implementation.

- Where a mandated use of AI systems prevents mitigation of risk and harm, the individual or entity issuing the mandate must be assigned all applicable liability.

- Developers of autonomous AI systems with clinical applications (screening, diagnosis, treatment) are in the best position to manage issues of liability arising directly from system failure or misdiagnosis and must accept this liability through measures such as maintaining appropriate medical liability insurance and addressing liability in their agreements with users.

- Healthcare AI systems that are subject to non-disclosure agreements concerning flaws, malfunctions, or patient harm (referred to as gag clauses) must not be covered or paid, and the party initiating or enforcing the gag clause assumes liability for any harm.

- When physicians do not know or have reason to know that there are concerns about the quality and safety of an AI-enabled technology, they should not be held liable for the performance of the technology in question.3

The Doctors Company, like the AMA, will continue to advocate to limit healthcare practitioner liability in the AI space and to ensure that such liability follows the established legal framework for medical malpractice litigation. We will also continue to monitor this important issue and work to protect healthcare practitioners as AI gains momentum.


 



The Doctor’s Advocate is published by The Doctors Company to advise and inform its members about loss prevention and insurance issues.

The guidelines suggested in this newsletter are not rules, do not constitute legal advice, and do not ensure a successful outcome. They attempt to define principles of practice for providing appropriate care. The principles are not inclusive of all proper methods of care nor exclusive of other methods reasonably directed at obtaining the same results.

The ultimate decision regarding the appropriateness of any treatment must be made by each healthcare provider considering the circumstances of the individual situation and in accordance with the laws of the jurisdiction in which the care is rendered.

The Doctor’s Advocate is published quarterly by Corporate Communications, The Doctors Company. Letters and articles, to be edited and published at the editor’s discretion, are welcome. The views expressed are those of the letter writer and do not necessarily reflect the opinion or official policy of The Doctors Company. Please sign your letters, and address them to the editor.

 

