Weekly Checkup

Me-Oh-My AI

Last Friday, the Senate Committee on Finance held a hearing on the pros and cons of artificial intelligence (AI) in health care. This hearing comes on the heels of a white paper released by Senator Bill Cassidy (R-LA) exploring an AI framework in health care. Let’s take a 30,000-foot view of the ways AI could impact health care services (and already has), the concerns around using AI algorithms in health care, and considerations for lawmakers as they begin defining the regulatory landscape in health AI.

First, how are people using AI in health care now, and what else could we see down the road? As it turns out, there’s already a wide variety of applications for AI. The Food and Drug Administration has approved 692 AI products as of October 2023. Of those products, 76.5 percent (530) were for radiology applications, 10.2 percent (71) were for cardiovascular applications, and the rest were divided among neurology, hematology, gastroenterology/urology, ophthalmic, clinical chemistry, and ear, nose, and throat applications. The hefty number of radiology applications is not surprising: AI is already helping clinicians read medical imaging, enhancing doctors’ ability to diagnose and detect diseases more accurately and more quickly than before. AI is also being applied to routine administrative tasks that take up significant amounts of physicians’ time. The Weekly Checkup previously discussed the issues doctors face with prior authorization, and AI offers a promising way to optimize and speed up not only prior authorization work, but billing, note-taking, and coding as well. Because of administrative tasks, medical residents currently spend only 13 percent of their day face-to-face with patients; such burdens have led to high rates of burnout among doctors and nurses. AI promises to significantly reduce the time and energy physicians spend on paperwork and allow them to focus on the primary reason they got into medicine in the first place: to help patients.

Given all the current and potential good AI could do, it’s worth focusing on the concerns that have made people hesitant to welcome AI into their doctors’ appointments. Senate Finance Committee Chairman Ron Wyden (D-OR) highlighted one major concern in his opening statement: the potential for, and actual incidence of, algorithms containing racial bias. In one case, researchers discovered in 2019 that an algorithm intended to identify patients at high risk of future health problems was deprioritizing Black patients for access to services because of the variables the algorithm used to calculate health needs. Essentially, the algorithm used costs as a proxy for needs, and individuals with less access to health care (often racial minorities and people in lower socioeconomic classes) use less care and thus cost less, though that does not mean they need less. This tool was used around the world and potentially affected the care of millions of patients. Privacy issues remain a concern as well. Algorithms need to train on large data sets to provide accurate outputs and improve, which means they use a great deal of individual patient data, and so far there is no clear legal framework detailing how those data can be used or how such use interacts with health privacy laws.
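To make the proxy problem concrete, the toy sketch below uses hypothetical patients and invented numbers (not data from the study in question) to show how ranking patients by historical cost rather than clinical need can push a patient with identical needs, but less access to care, down the priority list.

```python
# Minimal, purely illustrative sketch of the proxy problem described above:
# prioritizing patients by predicted *cost* rather than clinical *need*.
# All names and figures are hypothetical.

patients = [
    # (name, number_of_chronic_conditions, historical_annual_cost_usd)
    ("Patient A", 4, 12000),  # good access to care -> high utilization and cost
    ("Patient B", 4, 4000),   # same clinical need, but less access -> lower cost
]

# A cost-based proxy ranks Patient A as "higher risk" and prioritizes them
# for extra services, even though both patients have identical need.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print("Prioritized by cost proxy:  ", [p[0] for p in by_cost])
print("Prioritized by clinical need:", [p[0] for p in by_need])
```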

AI holds a great deal of promise to dramatically improve our health care system. While bias and privacy issues are legitimate concerns, they can be addressed without heavy regulation. Congress should avoid trying to legislate outcomes from algorithms, which is potentially impossible and likely to lead to negative externalities that could derail the technology’s capabilities. Transparency measures can address concerns about bias, but Congress must carefully define what is meant by transparency (access to an AI model’s architecture, how it is trained, what data it is trained on, and so forth) and ensure a balance with intellectual property rights so as not to discourage new and improved AI products. Privacy issues must also be carefully defined and explained, and any attempt to grapple with privacy concerns should answer questions such as: Which of the data that developers use to train AI will require patient consent? What security and anonymization measures must be taken for patient data being used? How do these issues interact with transparency requirements about training data sets? Of course, these questions are not exhaustive. Additionally, privacy concerns must be balanced with the reality that, to achieve its full potential, AI currently requires massive data sets.

AI promises to revolutionize how we deliver health care in the United States, but for that promise to come to fruition, policymakers should take a careful, slow, and deliberate approach to determining how to regulate AI, rather than jumping to make rules before fully understanding the technology and what we want from it.

Chart Review: Comparison of Mammogram Use by Age, Federal Poverty Level, and Insurance Coverage

Anna Grace Shepherd, Health Policy Intern

Data from the Centers for Disease Control and Prevention’s (CDC) 2021 National Health Interview Survey demonstrate significant disparities in mammogram use among women when disaggregated by age, income, and type of insurance.

Overall, the CDC found that 69.1 percent of women aged 40 and older received their recommended preventative mammogram within the past two years. Dividing these data by age group, however, reveals a clear trend: Women aged 40–49, who are at increased risk for breast cancer, receive mammograms at a significantly lower rate than their counterparts in the 50–64 and 65-and-older age groups, who are at average risk for breast cancer. The CDC data demonstrate that this trend presents itself across all levels of income, as seen in Chart 1.

Women aged 40–49 who earn below 100 percent of the federal poverty level (FPL) are the least likely to have received their recommended mammogram (50.5 percent). The share of women aged 40–49 who received their recommended preventative mammogram within the past two years rises to 68.6 percent among those earning 400 percent or more of the FPL, but they still trail women aged 50–64 and 65 and older at the same income level.

While women 65 and older generally outperform their younger cohorts, only 56 percent of these women earning 100 percent or less of the FPL have received their recommended screening, compared with 77.5 percent of women earning 400 percent or more of the FPL. The CDC data also reveal significant disparities in mammogram use by insurance coverage. As seen in Chart 2, those with Medicaid coverage are less likely than those with private insurance to receive a mammogram, but still more likely than those without any insurance coverage. Looking at the 65-and-older data in Chart 1, we can infer that those with Medicare coverage are also less likely than those with private insurance to utilize mammogram services. This large variation is concerning because these women are covered by Medicare, which pays for these screenings, and doctors recommend women receive preventative mammograms up to age 75. Policymakers should explore whether our public safety-net programs are missing some element that leads to lower utilization of mammogram services, and what barriers to care beyond coverage exist for women of lower socioeconomic status.

Chart 1

Chart 2
