
Primer: The Medicare Advantage Star Rating System

Introduction

For years, policymakers and health insurers have looked for ways to simultaneously reduce federal health care expenditures and ensure better quality care for patients. For both hospital services (Part A) and physician services (Part B), the Centers for Medicare and Medicaid Services (CMS) has implemented multiple programs to track providers’ performance on various metrics and adjust payments accordingly, similar to efforts by private insurers. For Medicare Advantage (MA or Part C), CMS operates the Star Rating System. This system assigns Medicare Advantage Organizations (MAOs) a relative quality score on a 5-star scale based on their plans’ performance on selected criteria, and it is now used to determine whether an MAO will receive bonus payments and/or rebates for its enrollees.

How Stars are Calculated

CMS first implemented the 5-star rating system for MA plans in 2008 as a tool to inform beneficiaries about the quality of the various plan options and to assist them in the plan selection process. Ratings are set at the MAO contract level, not the plan level, meaning all plans under the same contract receive the same score. Stars are assigned to each contract for each individual measure being evaluated, based on relative performance compared to the other contracts. The overall summary score for each contract is then calculated by averaging the star ratings for each individual measure for that contract.
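To illustrate the relative nature of the scoring, the sketch below (written in Python purely for illustration) assigns stars on a single measure by ranking contracts into quintiles. This is a deliberate simplification, not CMS’s actual methodology, which uses clustering techniques and, for some measures, the predetermined thresholds discussed below; the contract IDs and scores are hypothetical.

# Simplified illustration of relative scoring on one measure (not CMS's
# actual methodology): contracts are ranked by raw score and divided into
# quintiles, with the top fifth earning 5 stars and the bottom fifth 1 star.

def assign_measure_stars(scores):
    """scores: dict mapping contract ID -> raw measure score (higher is better)."""
    ranked = sorted(scores, key=scores.get)      # worst to best
    n = len(ranked)
    stars = {}
    for position, contract in enumerate(ranked):
        quintile = int(5 * position / n)         # 0 (worst) .. 4 (best)
        stars[contract] = quintile + 1           # 1 to 5 stars
    return stars

# Example: five contracts with hypothetical scores on one measure
print(assign_measure_stars({"H001": 62, "H002": 88, "H003": 71, "H004": 95, "H005": 80}))
# {'H001': 1, 'H003': 2, 'H005': 3, 'H002': 4, 'H004': 5}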

Performance is not weighted by plan enrollment; a contract performing well with many enrollees does not receive any extra credit for providing high-quality care to more people than a contract with lower enrollment. Further, for the majority of measures in the Star Rating program, performance is not adjusted for patient characteristics or socioeconomic status. A few lower-weighted Consumer Assessment of Healthcare Providers and Systems (CAHPS) measures, which gauge patient satisfaction with the care received, include some adjustments for age, education, mental and physical health, income, and state of residence.[1] However, no adjustments are made for the higher-weighted Healthcare Effectiveness Data and Information Set (HEDIS) or Health Outcomes Survey (HOS) clinical measures, which more closely and objectively measure the quality of health care provided through reviews of patient medical charts and insurance claims, and which are more likely to be affected by those adjustment factors.

Since 2011, CMS has set thresholds (based on historical trends) which must be attained to achieve 4-star status for roughly half of the measures. However, CMS is eliminating the thresholds beginning in 2016, as the agency no longer believes the target indicators are needed and believes the thresholds increase the risk of rating misclassification. Analysis by CMS has shown that greater improvement is typically achieved for measures which do not have predetermined thresholds than for those that do. While this may be because the incentive to improve any further is significantly diminished once the threshold for receiving the bonus payment is achieved, it may also result from underlying differences between the measures which have been given thresholds and those which have not, as they are not randomly selected.[2]

In 2014 and 2015, measures were grouped into five broad categories, with weights varying based on each category’s level of importance as determined by CMS[3]:

Metric Category        | Weight
Improvement            | 5[4]
Outcomes               | 3
Intermediate Outcomes  | 3
Patient Experience     | 1.5
Access                 | 1.5
Process                | 1

Compared to the first year bonuses were given, when clinical quality metrics accounted for only 49 percent of the total rating, such metrics now account for 63 percent of the rating.[5] Additionally, the “Reward Factor” (previously the Integration Factor or “i-Factor”), which measures a contract’s quality rating consistency across all measures relative to other plans, will continue to be used.[6]
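As a rough illustration of how the category weights above affect a contract’s summary score, the following sketch (Python, illustration only) computes a weighted average of hypothetical measure-level stars. The real CMS calculation also involves rounding to the nearest half star and applying the Reward Factor, both omitted here, and the example measures and stars are hypothetical.

# Simplified sketch of a contract's overall summary score as a weighted
# average of individual measure stars, using the 2015 category weights from
# the table above. Half-star rounding and the Reward Factor are omitted.

CATEGORY_WEIGHTS = {
    "improvement": 5,            # weight was 3 in 2014, 5 in 2015 (see footnote 4)
    "outcome": 3,
    "intermediate_outcome": 3,
    "patient_experience": 1.5,
    "access": 1.5,
    "process": 1,
}

def summary_score(measure_stars):
    """measure_stars: list of (category, stars) pairs for one contract."""
    weighted_total = sum(CATEGORY_WEIGHTS[cat] * stars for cat, stars in measure_stars)
    total_weight = sum(CATEGORY_WEIGHTS[cat] for cat, _ in measure_stars)
    return weighted_total / total_weight

# Example: a contract rated on four measures
example = [("outcome", 4), ("process", 3), ("patient_experience", 5), ("improvement", 4)]
print(round(summary_score(example), 2))  # 4.05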

How Rewards are Calculated

Under a provision of the Affordable Care Act (ACA), these star ratings have been used to adjust payments to MAOs since 2012, with bonuses to be awarded to contracts receiving 4 or more stars. However, at the same time that base payments to MA plans were scheduled to be reduced as part of the ACA’s Medicare cuts, CMS also launched a three-year demonstration project, running from 2012 to 2014, that provided bonuses to plans achieving 3 or 3.5 stars in order to determine whether providing bonuses at this level would lead to “more rapid and larger year-to-year improvements”.[7] That demonstration project has now ended.

Rewards are two-part: direct bonus payments to the plan operator and rebates which must be returned to the beneficiary in the form of additional or enhanced benefits, such as reduced premiums or co-payments, expanded coverage, etc.

Bonus payments, like base MA plan payments, are paid per enrollee and are calculated as a share of the MA benchmarks; because the benchmarks vary by county[8], bonus payments also vary by county. In 2014 and subsequent years, bonuses for plans rated 4 stars or higher are 5 percent of the area’s benchmark.[9] New plans (those offered by an organization which has not had an MA contract in the three preceding years and thus does not have sufficient data upon which to qualify) are awarded a 3.5 percent bonus. Contracts in counties with certain demographic factors receive double bonuses.[10] Plans that fail to report are treated as having fewer than 3.5 stars and thus do not receive any bonus payment.
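The per-enrollee bonus described above can be sketched as follows (Python, illustration only). The function names and the sample benchmark amount are hypothetical, and this is not the official CMS payment formula.

# Illustrative sketch of the quality bonus applied to a county benchmark:
# 5 percent for contracts rated 4 stars or higher, 3.5 percent for new plans,
# no bonus otherwise, and a doubled bonus in qualifying counties.

def bonus_percentage(star_rating, is_new_plan=False, double_bonus_county=False):
    if is_new_plan:
        pct = 3.5
    elif star_rating is not None and star_rating >= 4.0:
        pct = 5.0
    else:
        pct = 0.0  # plans below 4 stars and plans that fail to report
    return pct * 2 if double_bonus_county else pct

def per_enrollee_bonus(county_benchmark, star_rating, **kwargs):
    """Bonus paid per enrollee as a share of the county benchmark."""
    return county_benchmark * bonus_percentage(star_rating, **kwargs) / 100

# Example: a 4.5-star contract in a double-bonus county with an $800 monthly benchmark
print(per_enrollee_bonus(800, 4.5, double_bonus_county=True))  # 80.0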

Rebates in the MA program existed prior to the Star Rating System and operate in virtually the same way under the new system, though not at the same percentage as before. Traditionally, MA plans have received a rebate equal to a percentage (previously 75 percent) of the difference between the plan’s bid and the benchmark for that area if the bid is below the benchmark. Plans bidding above the benchmark receive no rebate and are paid only the benchmark amount per beneficiary by CMS; beneficiaries selecting such a plan have to pay an additional premium to make up the difference. Under the Star Rating System, pursuant to the ACA, plans bidding below the benchmark now receive their rebate based on a percentage of the difference between the bid and a benchmark adjusted to include the amount of any bonus payment received, as follows:[11]

Plan Rating         | Bonus Payment | New Benchmark        | Rebate Payment
4.5 & 5 Stars       | 5%            | 105% of Benchmark    | 70%
4 Stars             | 5%            | 105% of Benchmark    | 65%
New Plans           | 3.5%          | 103.5% of Benchmark  | 65%
3.5 Stars           | None          | Benchmark            | 65%
3 or Fewer Stars    | None          | Benchmark            | 50%
Plans Not Reporting | None          | Benchmark            | 50%

Including the bonus payment amount in the benchmark against which rebates are calculated allows higher-rated plans to increase their bids (and receive a higher payment from CMS) while still receiving a rebate for their enrollees. Rebates must be returned to enrollees in the form of reduced premiums or increased benefits.
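Putting the pieces of the table together, the sketch below (Python, illustration only) shows how a per-enrollee rebate could be computed from a plan’s bid, its county benchmark, its star rating, and its bonus percentage. The dollar amounts are hypothetical and this is not the official CMS formula.

# Sketch of the rebate mechanics described above: a plan bidding below its
# bonus-adjusted benchmark keeps a percentage of the difference as a rebate
# that must be passed to enrollees as extra benefits; a plan bidding at or
# above the benchmark receives no rebate.

REBATE_SHARE = {           # rebate percentage by star rating, per the table above
    4.5: 0.70, 5.0: 0.70,
    4.0: 0.65, 3.5: 0.65,  # new plans also receive 65 percent
    3.0: 0.50,             # 3 or fewer stars and non-reporting plans: 50 percent
}

def rebate(bid, benchmark, star_rating, bonus_pct):
    """Per-enrollee rebate given a plan's bid, county benchmark, rating, and bonus."""
    adjusted_benchmark = benchmark * (1 + bonus_pct / 100)  # e.g., 105% for 4+ stars
    if bid >= adjusted_benchmark:
        return 0.0
    share = REBATE_SHARE.get(star_rating, 0.50)
    return share * (adjusted_benchmark - bid)

# Example: a 4-star plan bidding $760 against an $800 benchmark with a 5% bonus;
# adjusted benchmark = $840, so rebate = 0.65 * (840 - 760) = $52 per enrollee
print(round(rebate(760, 800, 4.0, 5), 2))  # 52.0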

The MA Stars system is not a typical pay-for-performance program. Since CMS does not directly pay care providers in MA, but rather pays insurers offering private coverage to Medicare beneficiaries, the reward is actually being paid to an intermediary in the provision of care. Thus MAOs, which rely on the care providers who see their patients in order to earn a reward, must educate the providers in their networks about which metrics are being evaluated; though, as discussed later in this report, the best they can do is inform providers of which measures were evaluated in the prior year.

Additionally, regulators and health care providers should pay attention to the quality metrics that develop under the new payment system resulting from the recently passed Medicare Access and CHIP Reauthorization Act of 2015. Ideally, the various systems will evaluate similar metrics so that doctors are not given conflicting indications of how they should treat their patients.

Results Thus Far

In 2012, 91 percent of MA contracts received a bonus payment, but only 4 percent of the total bonus payments came from funds designated for these bonuses by the ACA; the rest of the bonuses were paid through the demonstration project, which allowed bonuses to be paid to 3-star plans.[12] Two-thirds of total payments went to plans with ratings below 4 stars.[13]

On average, higher ratings are correlated with a longer length of time operating an MA contract,[14] possibly suggesting that over time MAOs learn how best to achieve the results desired by CMS. Generally, average scores have been increasing, as has the number of plans with higher ratings. Not all plans will be able to achieve top ratings, however, because the system uses relative scoring, essentially ranking plans in order of achievement; not everyone can be the best.

Potential Problems with Current System

Effective Tool for Patients?

While it is likely that the star ratings have been a somewhat useful tool for beneficiaries in differentiating between otherwise similar plans, it seems that individual preferences do not exactly line up with the criteria CMS has decided to use in evaluating MA plans under the Star Rating System. In 2012, 51 percent of MA-eligible beneficiaries had the option of choosing a plan with a 4-star rating or better, but only 29 percent chose such a plan.[15] Still, one study found that a 1-star-higher rating is associated with a 9.5 percent greater likelihood that a new beneficiary will enroll in that plan and a 4.4 percent greater likelihood of enrollment among current enrollees switching plans.[16] (At any time of year other than a short window from December 1 to December 7, enrollees may elect to use a Special Enrollment Period to switch into a 5-star plan.[17]) Further, ratings were found to be inversely related to voluntary attrition rates (22 percent on average for 2-star plans, but only 2 percent for 5-star plans).[18]

Adverse Impact on the Poor

Many have expressed concern that the Star Rating System, because of how measures are evaluated and rewards are paid, unfairly punishes both low-income enrollees and the plan sponsors primarily serving such enrollees. It is argued that a significant portion of the measures evaluated are influenced by a patient’s socioeconomic conditions, yet very few of the measures are risk-adjusted to neutralize the impact of such differences between patients, preventing a fair comparison between plans with high and low enrollment of low-income individuals. This concern has led to calls for either establishing a separate rating system for Special Needs Plans (SNPs) or any MA plan in which enrollees are predominantly low-income, or providing a score adjustment for such plans in order to compensate for those patient differences.[19] The National Quality Forum, in its report released in August 2014, notes the well-documented link between patients’ sociodemographic conditions and health outcomes, and recommends that such factors be included in risk adjustments for performance scores.[20]

Various studies have found an association between dual-eligible status and performance on specific MA and Part D measure ratings, and there exists a “significant and growing performance gap” between dual-eligibles and non-dual-eligibles in MA plans.[21] Because duals use services at least as much as non-duals, some believe this performance gap results from a lack of compliance with treatment plans (which may be due to a lack of resources or understanding) rather than a lack of access to care.[22] Where a beneficiary lives may also be a key factor in which plans are available to them and in how well the plans in their area score. Geographic variation in fee-for-service (FFS) costs is associated with geographic variation in plan ratings, which will result in lower benefits in areas that disproportionately have higher poverty rates; thus, benefits will be lower where patients are poorest.[23]

Contrary to the norm for the MA population overall, where enrollment tends to be highest among higher-rated plans, enrollment was not as strongly associated with star ratings for African American, rural, low-income, or the youngest beneficiaries.[24] It is possible that this is an example of CMS choosing criteria that do not properly align with beneficiary needs, or it may be the result of a lack of access to higher-rated plans. In 2012, based on CMS data, the American Action Forum (AAF) found that higher-rated plans are less likely to be available in counties with higher poverty rates; a non-poor county is 2.6 times more likely to have bonus-eligible plans than a poor county (one with a poverty rate of 25 percent or higher).[25]

In 2013, one analysis by Inovalon found that “contracts with a high percentage of SNP members performed worse [than plans without a high percentage of SNP members] 86 percent of the time”.[26] While SNP members are not necessarily low-income or dual-eligible, SNP membership is limited to people who live in certain institutions (such as a nursing home or intermediate care facility) or require home health care, people who are dual-eligible, and people who have specific chronic or disabling conditions.[27] As low-income individuals are more likely to be dual-eligibles and to have multiple chronic conditions, SNP members are often low-income.[28] Further, in a follow-up analysis in 2015, the same organization analyzed seven Star measures and found that sociodemographic characteristics contributed to at least 30 percent of the performance gap between dual-eligible and non-dual-eligible MA plan members.[29] Community resource characteristics, which are often linked to an area’s economic wellbeing, also accounted for a large share of the performance gap.[30] More specifically, another analysis found that while “results show continued improvement among Chronic-SNPs and Institutional-SNPs, that [improvement] has not been mirrored by D[ual]-SNP focused contracts”.[31] However, seven plans in which duals account for 85 percent or more of enrollees achieved 4 or more stars, indicating that it is not impossible for such plans to achieve a bonus under the current system.[32]

In response to requests to address the discrepancies that many have found, CMS admits in its 2016 Call Letter that there are differences in performance for dual-eligible beneficiaries; however, the agency does not believe the differences or the evidence are robust enough to warrant adopting a separate measurement system at this time, and it calls for continued research on the issue. It is worth pointing out that while CMS notes it controlled for characteristics such as age, sex, and race/ethnicity in its analysis, it does not claim to have controlled for income, language, or education, all of which are more strongly correlated with the likelihood of being dual-eligible; omitting them may have muted the magnitude of the impact attributable to dual status.[33]

Poor Program Structure Creates Misaligned Incentives and Unintended Consequences

The Star Rating System has had other unintended consequences resulting from poor program structure and misaligned incentives. Some of the biggest problems relate to timing. The measures that will be evaluated each year are determined and announced both after the period in which performance is measured and after contract submissions for the following year are due. This leaves plans unaware of what they are being evaluated on, making it difficult to know what they should be doing or to make appropriate changes for the next year, and resulting in, at best, a two-year lag before plans and their providers can adjust. Another concern is that selecting the evaluation criteria retroactively could allow CMS to pick winners and losers by choosing criteria that specific companies perform particularly well (or poorly) on. Further, bonus payments are based on the benchmark price and enrollment in the year following the one in which the measures were taken, which means plans are rewarded for patients they were not necessarily covering at the time the reward was earned. Finally, not making the evaluation criteria known ahead of time and delaying the reward is inconsistent with established theories on how to make reward incentive programs effective.

The rebate structure is also poorly designed and may reduce plan choice. It leads to benefits increasing for plans with higher ratings, rather than giving high ratings to plans with more benefits. Beneficiaries will thus be incentivized to enroll in a subset of plans (even if those plans are not truly the best option for them), and competition and the range of plan options available may be reduced.[34] The increased rebate rewards the beneficiary for enrolling in a high-quality plan, rather than rewarding the operator of the plan for achieving high-quality standards. The rebate can be viewed as an indirect reward to the plan operator, since providing better benefits may increase enrollment and bonus payments are based on enrollment. But if beneficiaries prefer a plan that does not have a high rating, for reasons not measured by CMS’s rating criteria, they are essentially penalized by not receiving the enhanced benefits afforded to enrollees in highly rated plans.

As happens with most reward programs, plan sponsors are focusing on the metrics they can control.[35] Given that patient outcomes are among the things they are least able to control, this may not be the desired result. As plan sponsors become more familiar with how the Star Rating System works, they may be able to unfairly take advantage of it and manipulate their scores. For example, only a small sample of patients is reviewed to assess treatment of mental health issues, and at least one company can predict which patients will be sampled, allowing it to remind doctors to assess those specific patients and thus game the system without properly evaluating all of its patients.[36] Additionally, conflicting interests may arise. One challenge that has emerged for plans is trying to ensure that they network only with high-quality providers while simultaneously not limiting access to care.

Conclusion

The Star Rating System appears to be increasing the quality of the plans available and the care provided to Medicare Advantage beneficiaries. However, it is not clear that the criteria being evaluated by CMS are necessarily those of greatest importance to MA beneficiaries, and the ratings thus may not accurately reflect enrollee preferences. This mismatch of preferences and criteria may be causing more problems than just weakening the effectiveness of the star ratings as an informational tool for patients. Inadequate risk adjustment and consideration of patients’ socioeconomic status may be resulting in ratings which do not accurately reflect the quality of care and service provided, particularly for plans enrolling high proportions of low-income beneficiaries. The corresponding bonus and rebate payment structure may actually be harming the most vulnerable beneficiaries as a result.


[4] In 2014, the improvement metric had a weight of 3; in 2015, it was 5.

[8] MA benchmarks are based on average fee-for-service (FFS) expenditures per beneficiary in a given rating area. Because FFS costs vary by geographic area, MA benchmarks will also vary by geographic area.

[10] Double bonus counties: the metropolitan statistical area has a population of more than 250,000; at least 25 percent of eligible beneficiaries are enrolled in an MA plan; and Medicare fee-for-service (FFS) costs in that area are lower than the national average. http://americanactionforum.org/research/medicare-advantage-star-ratings-detaching-pay-from-performance

