Executive Summary

Methodological Considerations in Generating Provider Performance Scores

Public reports of health care providers' performance on measures of quality, cost and resource use, patient experience, and health outcomes have become increasingly common. These reports are often intended to help patients choose providers and may encourage providers to improve their performance.

At the July 2009 National Meeting of Chartered Value Exchanges (CVEs) hosted by AHRQ, CVE stakeholders identified a dilemma: Two organizations could, by making different methodological decisions, use the exact same data to produce divergent public performance reports that send conflicting messages to patients and providers. At the request of CVEs and in response to this dilemma, AHRQ commissioned the RAND Corporation to develop a white paper that identifies the key methodological decision points that precede publication of a performance report and delineates the options for each. Our overall aim in developing this white paper is to produce a resource that is useful to CVEs and other community collaboratives as they consider the range of available methodological options for performance reporting.

Many methodological steps underlie the construction of provider performance scores for public reporting. These steps include data aggregation, measure selection, data validation, attribution of data to providers, categorization of providers by levels of performance, and assessment of the likelihood of misclassifying a provider's "true" performance. The purpose of this white paper is to review a number of the key methodological decision points that CVEs and other community collaboratives may encounter when generating provider performance scores, along with the advantages and disadvantages associated with the choices available at each. Although the discussion focuses on analytic methods, there are rarely unambiguously "right" answers: at each decision point, methodological considerations must be balanced against other stakeholder goals and values.
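Attribution, for instance, can be operationalized in more than one way. The short Python sketch below illustrates one commonly discussed option, a plurality-of-visits rule; the visit-record schema and the tie-breaking behavior are our own illustrative assumptions, not prescriptions from the paper.

    from collections import Counter

    def attribute_patient(visits):
        """Assign a patient to a single provider using a plurality-of-visits
        rule: the provider who accounts for the most visits is held
        accountable for the patient's measure results. Ties are broken
        arbitrarily here; a real report would need an explicit tie-breaking
        policy (itself another methodological decision point)."""
        counts = Counter(visit["provider_id"] for visit in visits)
        provider, _ = counts.most_common(1)[0]
        return provider

    # Hypothetical visit records for one patient.
    visits = [{"provider_id": "A"}, {"provider_id": "A"}, {"provider_id": "B"}]
    print(attribute_patient(visits))  # -> A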

We recognize that CVEs and other community collaboratives may approach the process of developing provider performance reports in a variety of ways and may start at various points along the continuum of steps in constructing performance scores. Thus, while this paper can be read from front to back, it is written so that the reader can skip straight to any topic of interest.

In constructing provider performance reports for public reporting, a key concern, particularly among providers, is the possibility of generating performance scores that do not reflect a provider's "true" performance. In the lexicon of methodologists, this possibility is called the risk of "misclassifying" a provider (e.g., scoring a 4-star provider as a 1-star provider). Some degree of misclassification is always possible in any real-world report of provider performance, but the methodological decisions that a CVE makes can help determine the frequency and magnitude of provider performance misclassification.
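To make this concrete, the following Python sketch simulates a provider whose "true" performance rate is known and estimates how often sampling error alone would place that provider in the wrong star category. The star cut points, the true rate, and the panel sizes are illustrative assumptions of ours, not values drawn from the paper.

    import random

    # Hypothetical star cut points on a 0-1 performance-rate scale.
    CUTS = [(0.90, 4), (0.80, 3), (0.70, 2), (0.00, 1)]

    def stars(rate):
        """Map a performance rate to a star category via fixed cut points."""
        for threshold, star in CUTS:
            if rate >= threshold:
                return star

    def misclassification_rate(true_rate, n_patients, n_sims=2000, seed=0):
        """Estimate how often sampling error alone changes a provider's
        star rating, given a known 'true' rate and a panel of n patients."""
        rng = random.Random(seed)
        true_stars = stars(true_rate)
        wrong = 0
        for _ in range(n_sims):
            successes = sum(rng.random() < true_rate for _ in range(n_patients))
            if stars(successes / n_patients) != true_stars:
                wrong += 1
        return wrong / n_sims

    # A provider whose true rate (0.82) sits near a cut point is at high
    # risk of misclassification when the patient panel is small.
    for n in (30, 100, 500):
        p = misclassification_rate(0.82, n)
        print(f"n={n}: misclassified in {p:.0%} of simulations")

In this toy setup, a provider sitting near a cut point is misrated a substantial share of the time when the patient panel is small, which is one reason sample-size thresholds and reliability assessments are common mitigation strategies.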

This report is intended to help CVEs understand the different types of measurement error, how sources of error may enter into the construction of provider performance scores, and how to mitigate or minimize the risk of misclassifying a provider. Again, these methodological decisions generally involve important tradeoffs: there are rarely clear "right answers," and value judgments underlie most decisions.

To illustrate some of the ways CVEs and other community collaboratives are approaching the methodological decision points discussed in this paper, we interviewed the leaders of nine such organizations. Quotes from these leaders are included throughout the paper, following many of the discussions about methods. The contents of these leadership interviews are also synthesized at the end of the paper in a section titled "Summary of methodological decisions made by a sample of CVE stakeholders."

Our report focuses on the steps involved in producing comparative performance scores for public reporting. An equally important step, and one that involves a different set of methodological considerations (such as a report's understandability to consumers), is the design of provider performance "report cards." For guidance on the design of performance reports, we direct readers to two documents by Drs. Judith Hibbard and Shoshanna Sofaer, sponsored by AHRQ as part of the "Best Practices in Public Reporting" series: How To Effectively Present Health Care Performance Data to Consumers and Maximizing Consumer Understanding of Public Comparative Quality Reports: Effective Use of Explanatory Information.1-2 AHRQ's "Talking Quality" Web site (http://www.talkingquality.ahrq.gov/default.aspx) and "Model Public Report Elements: A Sampler" also provide guidance on the design of performance reports.
