Performance measurement involves collecting and reporting data on practices’ clinical processes and outcomes. Measuring clinical performance can create buy-in for improvement work and enables a practice to track its improvements over time. This information should be used to identify and prioritize improvement goals, to track progress toward those goals, and to monitor maintenance of changes already made.
Selecting Clinical Performance Measures
You will work with your practices to identify the areas of clinical performance they want to assess. The areas of clinical performance should connect to the improvement goals the quality improvement (QI) team has set as well as any mandates from the funder. Common sources for performance measures are the Healthcare Effectiveness Data and Information Set (HEDIS), quality indicators developed by the National Committee for Quality Assurance, and criteria selected by health plans.
In addition to selecting a set of performance measures that the practice wants to track, the QI team will need to decide how frequently to collect data. Data collection timelines should allow sufficient time for change to occur, yet data should be collected frequently enough to show progress over time through run charts and other methods of comparing data across multiple time periods.
Refining Clinical Measures: Defining the Numerator and Denominator
Many performance measures are rates, with the numerator indicating how many times the measure has been met and the denominator indicating the opportunities to meet the measure. For example, let’s say your practice wants to measure how well it is complying with annual comprehensive foot exam recommendations for its diabetic patients.
In specifying the numerator, the practice will need to define what constitutes the desired performance. Will monofilament testing alone be adequate, or will it need to be combined with visual inspection, testing for sensation, or palpation of pulses? Or will any one of these approaches be deemed adequate? How accurately these events are documented will be important in determining the usefulness of the available data.
In specifying the denominator, the practice will need to establish what constitutes an opportunity to deliver the desired action. For this example, you might define the denominator as the number of diabetic patients who have had a health care encounter in the past 12 months. Or you might define the denominator more broadly from a population health perspective as any diabetic patients in a provider’s panel regardless of the status of their most recent visit.
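The effect of these two denominator choices can be sketched with a small calculation. The panel, dates, and 12-month windows below are hypothetical and purely illustrative:

```python
from datetime import date

TODAY = date(2024, 6, 1)  # hypothetical audit date

# Hypothetical diabetic panel: date of last visit and last documented
# comprehensive foot exam (None = no exam on record).
panel = [
    {"last_visit": date(2024, 5, 10), "foot_exam": date(2023, 9, 1)},
    {"last_visit": date(2024, 1, 15), "foot_exam": None},
    {"last_visit": date(2022, 11, 2), "foot_exam": date(2022, 10, 30)},
    {"last_visit": date(2024, 3, 20), "foot_exam": date(2024, 2, 14)},
]

def exam_within_year(p):
    """Numerator criterion: a documented foot exam in the past 12 months."""
    return p["foot_exam"] is not None and (TODAY - p["foot_exam"]).days <= 365

# Denominator A: diabetic patients with an encounter in the past 12 months.
seen = [p for p in panel if (TODAY - p["last_visit"]).days <= 365]
rate_seen = sum(exam_within_year(p) for p in seen) / len(seen)

# Denominator B: the whole diabetic panel, regardless of recent visits.
rate_panel = sum(exam_within_year(p) for p in panel) / len(panel)

print(f"visit-based denominator: {rate_seen:.0%}")
print(f"panel-based denominator: {rate_panel:.0%}")
```

With the same numerator, the visit-based denominator yields a higher rate than the population-based one, because patients who have not been seen recently rarely have a current foot exam on record.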
Denominators in particular are important in understanding and interpreting data, so take care to use the appropriate one. For example, if you are working with a practice to determine what percentage of its patients with diabetes have hemoglobin A1c (HbA1c) values of 8 percent or higher, the denominator should include only those patients with diabetes who have an HbA1c value available in their record. If you include all diabetic patients regardless of whether an HbA1c value is available, the percentage of patients with elevated HbA1c values will be artificially low.
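A minimal sketch of this pitfall, using hypothetical HbA1c values:

```python
# Hypothetical HbA1c values (%) for a diabetic panel; None = no result on record.
hba1c = [9.1, 7.2, None, 8.4, 6.8, None, 8.0, 7.5]

elevated = [v for v in hba1c if v is not None and v >= 8]
tested = [v for v in hba1c if v is not None]

correct_pct = len(elevated) / len(tested)   # denominator: patients with a result
deflated_pct = len(elevated) / len(hba1c)   # denominator: all diabetic patients

print(f"correct: {correct_pct:.0%}, deflated: {deflated_pct:.0%}")
```

Here 3 of the 6 tested patients are elevated (50 percent), but dividing by all 8 patients understates the rate, since the untested patients can never appear in the numerator.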
As you and the practice monitor progress on a metric over time, you will need to consider how the denominator may change. For example, a monthly audit of the foot exam measure might use diabetic patients receiving care in the previous month as the denominator and the number of these patients who had received a foot exam within the past 12 months as the numerator.
Defining an appropriate denominator can be tricky. If you do not select the correct denominator, you may under- or overstate performance. For example, when calculating the percentage of diabetic patients with low-density lipoprotein (LDL) below 100 mg/dL, you would specify the denominator as the number of diabetic patients with an LDL test, not simply the number of diabetic patients. Similarly, if you were tabulating the percentage of patients who gave the most positive response to a question on a survey, you would specify the denominator as the number of patients who answered that question, not the number who were surveyed.
You will also need to help the practice decide which, if any, subgroups they want to evaluate. For example, you may want to measure performance for patients who have had a visit in the past quarter or who have been in treatment for at least 6 months. You will also need to decide whether you want to stratify performance measures for different populations. For example, you might want to compare performance for patients based on age, gender, race or ethnicity, disease severity, or treatment status.
Pay attention to numerators and denominators when benchmarking. It is important to ensure that you are making “apples to apples” comparisons.
Benchmarking is the process of comparing a practice’s performance with an external standard. Benchmarking is an important tool that facilitators can use to motivate a practice to engage in improvement work and to help members of a practice understand where their performance falls in comparison to others. Benchmarking can stimulate healthy competition, as well as help members of a practice reflect more effectively on their own performance. See Figure 7.1 for an example of a benchmarked practice report card.
You will need to work with your practices to identify appropriate benchmarks. Benchmarks can be generated from similar practices in the same area or from a larger group of practices across the country. They can also be drawn from standards set by an authoritative body.
Good sources for benchmarks include local quality collaboratives in which several practices collect similar performance data and compare among themselves. Community clinic associations often host this type of local effort, for example by managing multiorganization QI projects on a particular condition such as asthma, and may benchmark across the participating sites as part of their work with their members.
Other sources might be data reports required by Federal agencies and funders, such as the Health Resources and Services Administration’s Uniform Data System reports required from Federally Qualified Health Centers. National associations and the National Committee for Quality Assurance are other potential resources for benchmarking, as are State and local health and public health agencies.
Health information technology vendors are also emerging as a source of benchmarks when they allow comparison across organizations using their systems. Large data networks such as DARTNet and SAFTINet funded by AHRQ may also become a resource for both local and national benchmarking.