November 30, 2023
Artificial Intelligence & Health Care: State Outlook and Legal Update for 2024
- As Congress holds hearings on the growing use of artificial intelligence (AI) in the health care sector, states are considering and enacting legislation to restrict the use or implementation of such technology.
- While states’ actions are aimed at protecting patient health and privacy, they risk creating a patchwork of regulations that are difficult to navigate as AI technology evolves; moreover, these actions could inadvertently stifle the adoption of technology that has real promise to improve patient outcomes.
- This insight reviews recent federal, state, and legal activity on the use of AI in the broader health care sector, including congressional hearings, state legislation, and class action lawsuits.
In November, both the Senate Health, Education, Labor, and Pensions (HELP) Committee and the House Energy and Commerce Subcommittee on Health held hearings to gain a better understanding of how artificial intelligence (AI) may be used within the health care sector, from reducing physician paperwork to improving diagnosis and disease management.[i] During the HELP hearing, witnesses expressed some key concerns around AI application for policymakers to keep in mind when considering future legislation.
As the health care sector’s adoption of AI continues apace, state policymakers have begun restricting the use of AI in health care, citing the importance of protecting patient health and privacy; these restrictions, however, may delay patient care and stifle future innovation. States are introducing and enacting laws that prohibit discrimination and prevent violations of patient privacy, as required under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), resulting from AI adoption, as well as laws that limit the scope of AI algorithm use within clinical care.
In addition, pending litigation may set legal precedents restricting the use of AI in health care, as insurers face class action lawsuits over the use of AI algorithms. Both UnitedHealthcare (part of UnitedHealth Group) and Cigna are defending court cases centered on insurance claim rejections or denials driven by AI algorithms.
Concerns about AI implementation in the health care sector typically center on patient privacy, scope of practice, and delivery of clinical care. Some policymakers see the potential for an overreliance on AI algorithms that may overlook the unique needs of individual patients. Although it is unknown how large a role AI will play in the U.S. health care system, the risks do not necessarily outweigh its benefits. AI could improve disease surveillance, enhance diagnostic and clinical tools, and spur new drug development. State laws or legal precedents restricting or prohibiting AI use in health care delivery or administration may, in the long term, delay or prevent patients from receiving better clinical care or medical treatments.
This insight reviews recent federal, state, and legal activity on the use of AI in the broader health care sector, including congressional hearings, state legislation, and class action lawsuits.
In September 2023, HELP Ranking Member Bill Cassidy (R-LA) released a white paper on AI and its potential impacts on health care, education, and the workforce. Broadly, the paper highlighted the potential for AI to be utilized in a variety of ways in the health care sector, including reducing physician paperwork – particularly around electronic records – and called on Congress to protect patient information within and beyond the current scope of HIPAA. The paper also stressed that manufacturers are actively using AI to support drug research, development, and approval, and are likely to continue to incorporate new advancements into the technology. Indeed, the paper noted that in 2021 over 100 drug applications submitted to the Food and Drug Administration included AI components.
Both the Senate HELP Committee and the House Energy and Commerce Subcommittee on Health held hearings this year to better understand the use of AI in the health care sector. Notably, the HELP Committee highlighted four areas of concern related to AI utilization: automatic insurance denials, increasing biosecurity risks, cyberterrorism, and patient privacy. Congressional committees of jurisdiction are proactively engaging on questions around AI usage in health care and are likely to introduce guardrails to address these concerns in 2024.
According to the National Conference of State Legislatures, at least 25 states, Puerto Rico, and the District of Columbia have introduced AI bills that, in most cases, require studies to help policymakers understand the impact of AI or AI algorithms. Fifteen states and Puerto Rico have adopted resolutions or enacted such legislation. Yet a few states are considering more restrictive legislation that would require insurance companies and others to publicly disclose algorithms and other technical information, as well as shield patients from algorithms that could discriminate based on personal characteristics in clinical care and mental health care.
In February 2023, California lawmakers considered legislation that would update the state’s Health and Safety Code as well as the State Insurance Code to prohibit a health insurance plan from discriminating on the basis of race, color, national origin, sex, age, or disability through the use of AI algorithms. The bill would not prohibit the use of clinical algorithms that rely on such variables to identify, evaluate, and address health disparities. The New Jersey legislature considered a similar bill.
In May 2023, Georgia enacted a law to regulate the use of AI in optometric diagnostic care. The law prohibits prescribers from using the data or information obtained from an eye assessment as the sole basis for issuing a prescription. Prescribers must also communicate to patients that the eye assessment is not a replacement for the eye examination, the eye assessment cannot generate an initial prescription, and the eye assessment can only be used if the patient has had an eye examination in the past two years. Illinois introduced similar language in several different bills.[ii]
In April 2023, Maine’s legislature considered legislation prohibiting health care facilities from using AI technology to achieve medical or nursing objectives or to limit or substitute for direct care from a registered nurse.
Mental Health Care
In February 2023, Massachusetts introduced legislation that would require licensed mental health professionals who intend to use AI to first seek approval from the relevant professional licensing board. The bill also mandates that patients give consent to the use of AI in their treatment. Rhode Island and Texas legislatures introduced similar language.
In September 2023, Pennsylvania introduced legislation to address the use of AI in insurance claim processing. The bill would require insurers to disclose the use or absence of AI algorithms on their website. The Insurance Department of the Commonwealth would be required to implement a process to certify that the algorithms are not discriminating against any protected groups and insurers must submit their data to the department. The penalties for violating the act would include fines or license suspension.
In March 2023, ProPublica investigated patient concerns that Cigna utilized a decade-old algorithm to deny patient claims without review by a licensed medical professional (doctor or nurse). State laws typically require medical oversight of claim denials to ensure that patients are not denied medically necessary treatment. State regulators are responsible for ensuring that insurers comply with these processes, including denials of prior authorization – the insurer’s approval required before a patient receives a treatment, therapy, or medication.
Following the investigation, two of the largest insurance companies in the United States are facing class action lawsuits over the use of algorithms to facilitate claim denials.
Plaintiffs argued that Cigna used an algorithm, known as procedure-to-diagnosis (PXDX), to batch similar insurance claims for automatic denial. The company’s medical directors are accused of denying a large volume of claims based on PXDX without conducting the review of individual claims required by applicable state law. Specifically, plaintiffs argued that Cigna medical directors denied 300,000 medical claims without review, spending an average of 1.2 seconds evaluating each individual case. Moreover, the plaintiffs argued that the company retaliated against medical directors who did not routinely deny claims identified by PXDX.
Following the ProPublica investigation on PXDX, Cigna stated that the findings of the report were biased and incomplete. Subsequently, in May 2023, House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-WA), Subcommittee on Health Chair Brett Guthrie (R-KY), and Subcommittee on Oversight and Investigations Chair Morgan Griffith (R-VA) reached out to the president and chief executive officer of Cigna for additional information and clarification.
The case was filed in August 2023 in the U.S. District Court for the District of Connecticut. In November 2023, the case was dismissed without prejudice. In September 2023, another class action lawsuit on PXDX was brought before the U.S. District Court for the Southern District of California. In the same month, a Cigna shareholder sued the company over the use of PXDX. Additional suits on PXDX are likely to appear in other district courts in 2024.
Plaintiffs argued that UnitedHealthcare denied Medicare Advantage claims based on an AI model with a 90 percent error rate. The plaintiffs contend that the insurer denied claims, specifically for post-acute care, without medical review, and that only a tiny fraction of patients appeal a denial, with most either paying out of pocket or forgoing medically necessary care. Moreover, the plaintiffs accused the insurer of acting in bad faith and denying claims that the insurer is required to cover. In November 2023, a spokesperson for UnitedHealth said in a statement that the platform is not used to make decisions on coverage.
The case was filed in November 2023 in the U.S. District Court for the District of Minnesota and remains before the court.
While Congress continues to work to better understand the role AI may play in the health care sector, states are considering and enacting legislation to restrict the use or implementation of AI technology. While state actions are intended to protect patients, they risk stifling potential benefits of AI applications for patient care. The enactment of these laws may also create piecemeal regulations that are difficult to navigate as AI technology continues to evolve.
Moreover, pending litigation over insurance companies’ use of AI algorithms may set new precedent for the use of this technology. Policymakers will have to consider how best to encourage the use of AI to improve health care treatments and medications while ensuring that the technology helps rather than hinders patient care.
[i] Daphne Edmond was an intern at the American Action Forum during the fall 2023 semester who contributed to this insight.
[ii] These bills include the University of Illinois Hospital Act, the Hospital Licensing Act, and the Medical Patient Rights Act.