Part IV. Selecting Quality and Resource Use Measures

Selecting Quality and Resource Use Measures: A Decision Guide for Community Quality Collaboratives

The first three sections of this Decision Guide provide background information for community quality collaboratives to consider before selecting quality and resource use measures. In this section, Questions 19-25 provide a framework that collaboratives can use to organize the information available to them and to design a measure selection process consistent with accepted theoretical constructs and with the collaborative's own stated objectives and constraints. We start with a brief overview of national initiatives related to the standardization of measures to provide context for this topic.

Question 19. What national initiatives and forces are driving the standardization of quality measurement?

Historically, the National Committee for Quality Assurance (NCQA) and The Joint Commission initiated the standardization of quality measurement. Increasingly, the momentum is being carried forward by providers' desire for harmonized measures that minimize the burden of data collection and by organizations' desire for more comparative national and regional benchmarks. As stakeholders in a community adopt the same measures, they conserve resources that would otherwise be diverted to developing or testing their own measures, and they send a stronger “market signal” to providers to improve quality in those specific areas.

Working in concert with the standardization movement is the health information technology (HIT) effort, which is progressing rapidly as public (Federal) and private support coalesces (perhaps most notable is the $19 billion in HIT funding earmarked in the 2009 Federal stimulus package).121 Multiple national initiatives are trying to build on this effort to establish broadly supported, valid measures of quality and cost that can be used nationwide. Community quality collaboratives may consider participating in, or at least monitoring, these national initiatives, which will influence how quality and resource use measures are collected and reported over the next decade.

National Initiatives for Standardizing Measurement

The National Priorities Partnership (NPP) is a collaborative effort of 28 major national organizations, convened by the National Quality Forum (NQF) to represent multiple stakeholders. These stakeholders include consumers, employers, government, health plans, health care organizations and professionals, scientists, accrediting and certifying bodies, and quality alliances. As a first step, the Partners “identified a set of six National Priorities and Goals to help focus performance improvement efforts on high-leverage areas—those with the most potential in the near term to result in substantial improvements in health and health care—and thus accelerate fundamental change in our health care delivery system.”122 As a second step, the Partners have agreed to work together over the next year to align the drivers of change, such as payment reform, accreditation and certification, and performance measurement, around these goals:

  1. Engaging patients and families in managing their health and making decisions about their care.
  2. Improving the health of the population by creating communities that foster health and wellness.
  3. Improving the safety and reliability of America's health care system.
  4. Ensuring that patients receive well-coordinated care within and across all health care organizations, settings, and levels of care.
  5. Guaranteeing appropriate and compassionate care for patients with life-limiting illnesses.
  6. Eliminating overuse while ensuring the delivery of appropriate care by continually and safely reducing the burden of unscientific, inappropriate, and excessive care.

To implement the vision of the NPP, a stronger national infrastructure is needed for health care performance measurement and reporting. The Quality Alliance Steering Committee (QASC) was formed in 2006 as a collaborative effort to ensure that quality measures are constructed and reported in a clear and consistent way that informs consumer and employer decisionmaking as well as practitioners' improvement efforts. QASC includes existing quality alliances, government agencies, physicians, nurses, hospitals, health insurers, consumers, accrediting agencies, and foundations. Convened by the Brookings Institution with financial support from the Robert Wood Johnson Foundation, the QASC's High-Value Health Care Project is testing approaches to combining data from the public and private sectors to measure and report on physician practices in a way that is meaningful and transparent for consumers and purchasers of health care. Based on these pilot projects, the QASC hopes to develop an infrastructure for combining summary provider information from Medicare and private health plans at the national level, offering a more complete picture of providers' cost and quality.

An overlapping group of nearly 130 organizations has recently joined together as Stand for Quality in Health Care (www.standforquality.org) to develop and support specific recommendations for policymakers considering health care reform. In a report titled Building a Foundation for High Quality, Affordable Health Care: Linking Performance Measurement to Health Reform, these organizations argue for dedicated Federal support for six key functions of the performance measurement, reporting, and improvement enterprise:

  1. Set national priorities through a multistakeholder process (based on the NPP) and provide ongoing coordination and self-evaluation.
  2. Endorse and maintain valid, reliable, evidence-based, feasible, actionable, and usable measures for national use through a multistakeholder process (based on the NQF).
  3. Develop measures to fill gaps in priority areas, including care coordination and transitions; palliative and end-of-life care; overuse and waste; promotion and adoption of healthy lifestyles; episode-based outcomes, processes, and costs; and disparities.
  4. Implement effective and open consultative processes so that stakeholders can inform policies on use of measures (based on the alliances described further below).
  5. Collect, analyze, and make performance information from health plans, clinicians, nursing homes, hospitals, clinics, and other providers available and actionable at the local, State, and national levels (based on the QASC).
  6. Support a sustainable infrastructure for quality improvement in all settings.

AHRQ's Chartered Value Exchange (CVE) program, formed in 2007, brings together 24 CVEs, or community quality collaboratives, from across the country. In aggregate, these collaboratives involve more than 575 health care leaders and represent more than 124 million lives, which is more than one-third of the U.S. population. The collaboratives are multistakeholder initiatives with a mission of quality improvement and transparency. The program is built on three overarching principles:

  1. All health care is “local.” National goals and common standards are important, but real improvement needs to take place in local settings where the various stakeholders know and work with one another.
  2. Transparency in measuring and reporting accurate and meaningful information on quality and cost is key to helping providers improve and consumers become engaged managers of their own health and health care.
  3. Collaboratives involving key stakeholder groups (e.g., public and private payers, providers, plans, consumer organizations, State data organizations, quality improvement organizations, health information exchanges) hold the promise to foster requisite reforms.

Through AHRQ's national Learning Network, CVE members learn from each other and from experts, sharing experiences and best practices through meetings, Web conferences, documents, and an electronic bulletin board. The Learning Network's areas of focus are driven by the needs of the CVEs and include: collaborative leadership and sustainability; public at-large engagement; quality and efficiency measurement; public reporting; provider incentives; consumer incentives; coordinated cross-organizational, cross-stakeholder quality improvement; and health information technology/health information exchange. AHRQ tools for CVEs and other community collaboratives are available at http://www.ahrq.gov/qual/value/localnetworks.htm.

To facilitate planning and coordination at both the national level and in CVE communities, AHRQ regularly meets with leaders of other key community-based quality improvement initiatives. These include QASC and its National-Regional Implementation Workgroup; the Robert Wood Johnson Foundation's Aligning Forces for Quality Program; and the Network for Regional Healthcare Improvement (NRHI).

Aligning Forces for Quality (AF4Q) helps participating communities work toward sustainable health care quality through the leadership of a multistakeholder local alliance. These alliances focus on three key intersecting program areas: (1) developing a local quality improvement resource that will help health professionals improve once they recognize the need, (2) helping the public in those communities get substantially better at using appropriate information in making health and health care decisions, and (3) working to substantially increase performance measurement and public reporting of those measures. AF4Q communities actively promote nurse leadership in the effort and also focus on racial and ethnic disparities in all aspects of the initiative. By aligning key stakeholders to address these and other key issues, AF4Q aims to achieve communitywide, sustainable transformation of health care. Teams in AF4Q communities improve the quality of information about physician performance and public access to that information.

There is tremendous synergy between the CVE and AF4Q initiatives; 13 of the 15 AF4Q sites are also affiliated with CVEs. For example, the Health Improvement Collaborative of Greater Cincinnati simultaneously functions as a CVE and leads the Cincinnati AF4Q. It is working to enhance local infrastructure and to align key drivers of overall health care improvement through a condition-specific approach. Currently, it is focused on diabetes-care messaging among employers, health plans, providers, and community-based organizations; initiating quality improvement among primary care providers; and initiating regionwide public reporting of selected primary care practices' quality-related outcomes.123 A collaboration in Wisconsin provides another example of the confluence of both initiatives.

Hospital-Related Initiatives

The Hospital Quality Alliance (HQA), a multistakeholder alliance concerned with hospital quality of care, has taken a leading role in selecting hospital quality measures for presentation on the Hospital Compare Web site developed by the Centers for Medicare & Medicaid Services (CMS).124 Its role as an adopter of measures will be discussed further under Question 21.

In the aftermath of several studies demonstrating the advantages of enhanced administrative data,26, 93 AHRQ is sponsoring "Adding Clinical Data Pilot and Planning Projects" to link clinical data, especially present-on-admission coding and laboratory results, to existing administrative data sets collected for the Healthcare Cost & Utilization Project (HCUP). This approach is seen as a practical and cost-effective way to produce more accurate and complete quality assessments of hospitals. Results from the pilot sites in Florida, Minnesota, Virginia, and Washington were expected in late 2009.
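
As a purely illustrative sketch of the kind of data linkage these pilots involve, the Python fragment below merges a hypothetical laboratory-results file with a hypothetical administrative discharge file on an assumed shared encounter identifier. The file layout, column names, and values are invented for illustration and do not reflect the actual HCUP pilot specifications.

```python
import pandas as pd

# Hypothetical administrative discharge records (claims-style data).
discharges = pd.DataFrame({
    "encounter_id": [101, 102, 103],
    "principal_dx": ["I50.9", "J18.9", "N17.9"],  # heart failure, pneumonia, acute kidney injury
    "poa_flag": ["Y", "N", "Y"],                   # present-on-admission indicator
})

# Hypothetical clinical laboratory results collected separately.
labs = pd.DataFrame({
    "encounter_id": [101, 102, 103],
    "creatinine_mg_dl": [1.1, 0.9, 3.4],
    "bnp_pg_ml": [850, 40, 120],
})

# Link the clinical values to the administrative record on the shared encounter key,
# producing an "enhanced" data set that can support finer risk adjustment.
enhanced = discharges.merge(labs, on="encounter_id", how="left")
print(enhanced)
```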

Physician-Related Initiatives

The AQA Alliance was founded by physician organizations (in collaboration with AHRQ and America's Health Insurance Plans) as the Ambulatory Care Quality Alliance but has expanded to include other stakeholders interested in measuring the quality of care provided by physicians and other licensed professionals. As with HQA, AQA approval of measures has been a factor that CMS considers in selecting physician quality measures for use in public reporting and pay-for-performance programs. Its role as an approver of measures is discussed further under Question 21.

Bridges to Excellence (BTE) is a national pay-for-performance program with a standard data exchange platform and standardized performance measures. This program is currently available in limited markets across the United States and is closely aligned with NCQA efforts. Physicians self-report common performance measures endorsed or approved by NQF, AQA, or the American Medical Association's Physician Consortium for Performance Improvement (PCPI) in nine clinical areas. Physician-specific performance results are not publicly available; however, participants use the data for quality improvement and pay-for-performance programs. BTE-recognized physicians are listed on the HealthGrades® Web site.

Before physician performance measurement can reach its full potential nationally, the market needs strong consensus standards for managing and auditing the measurement processes led by health plans and other organizations. The Consumer-Purchaser Disclosure Project (CPDP) created a “Patient Charter for Physician Performance Measurement, Reporting and Tiering Programs.” The charter establishes a national set of principles to guide the measurement and reporting of physician performance to consumers. The CPDP intends for health plans and others to adopt the Patient Charter and abide by CPDP's guidelines for physician measurement, which include auditing health plans' measurement processes to reduce administrative burden and to ensure measurement transparency for participating physicians. NCQA was named as the first approved independent reviewer to certify organizational compliance with these guidelines.

Details about the national initiatives regarding resource use measure development and implementation can be found in Question 17 of this Decision Guide.

Question 20. How can the Institute of Medicine's six “quality domains,” the National Priorities Partnership's six National Priorities, and Donabedian's “structure, process, and outcome” typology be used to select appropriate measures of quality?

Institute of Medicine Framework

The Institute of Medicine's (IOM) 2001 report, Crossing the Quality Chasm, identified six “aims for improvement to address key dimensions in which today's health care system functions at far lower levels than it can and should.”3 These quality domains include:

  1. Safety — avoiding injuries to patients from the care that is intended to help them;
  2. Effectiveness — providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit;
  3. Patient centeredness — providing care that is respectful and responsive to individual patient preferences, needs, and values;
  4. Timeliness — reducing waits and sometimes harmful delays for both those who receive and those who give care;
  5. Efficiency — avoiding waste, including waste of equipment, supplies, ideas, and energy; and
  6. Equity — providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status.

This framework gives community quality collaboratives a useful way to conceptualize where they want to go as they move forward to improve health care. It also helps collaboratives to think about what they want to measure and whether the available set of measures adequately covers the domains of concern. In general, the largest number of measures available to collaboratives is in the domain of effectiveness, but the CAHPS® (Consumer Assessment of Healthcare Providers and Systems) family of surveys has greatly expanded opportunities for measuring both patient centeredness and timeliness. In the last few years, there has been an explosion of efforts to measure quality in the previously neglected domains of safety, efficiency, and equity.

National Priorities and Goals

Under the auspices of the National Quality Forum (NQF), the National Priorities Partnership (NPP) (described under Question 19) has identified a set of six national priorities and goals to help focus performance measurement and improvement on “high-leverage areas.” These areas have “the most potential to result in substantial improvements in health and health care” and “thus accelerate fundamental change in our healthcare delivery system.” These goals complement and update the IOM framework, as follows.

  1. Patient-centeredness: Engaging patients and families in managing their health and making decisions about their care. Health care should honor “each individual patient and family, offering voice, control, choice, skills in self-care, and total transparency,” adapting readily to individual and family circumstances, and to differing cultures, languages, and social backgrounds.
  2. Equity and population health: National, State, and local systems of care should be fully invested in the prevention of disease, injury, and disability, “helping all people reduce the risk and burden of disease.”
  3. Safety and effectiveness: America's health care system should be “relentless in continually reducing the risks of injury from care, aiming for zero harm wherever and whenever possible,” “guaranteeing that every patient, every time, receives the benefits of care based solidly in science.”
  4. Timeliness and coordination of care: Ensuring that patients receive well-coordinated care within and across all health care organizations, settings, and levels of care. “A healthcare system should guide patients and families through their healthcare experience, while respecting patient choice, offering physical and psychological supports, and encouraging strong relationships between patients and the professionals accountable for their care.”
  5. Appropriate and compassionate care for patients with life-limiting illnesses: Health care should promise “dignity, comfort, companionship, and spiritual support to patients and families facing advanced illness or dying, fully in synchrony with all of the resources that community, friends, and family can bring to bear.”
  6. Efficiency: Eliminating overuse while ensuring the delivery of appropriate care. Health care should promote “better health and more affordable care by continually and safely reducing the burden of unscientific, inappropriate, and excessive care, including tests, drugs, procedures, visits, and hospital stays.”

Donabedian Typology

Another useful typology of quality measures was developed by Avedis Donabedian about 30 years ago,125-127 based on earlier work by Sheps in the 1950s. Donabedian described three approaches to acquiring information about health care quality:

  1. Structural measures focus on the conditions under which care is provided. These include the material (e.g., facilities, equipment) and human resources (e.g., staffing ratios, qualifications, experience) available to provide care, as well as the organizational context (e.g., size, volume, IT systems) that facilitates or impedes the delivery of optimal care.
  2. Process measures focus on what a health care provider does to maintain or improve patients' health, including appropriate and evidence-based screening, diagnosis, treatment, rehabilitation, education, and prevention.
  3. Outcome measures focus on changes in health status that are attributable to health care, including mortality, morbidity (e.g., complications, unplanned readmissions), functional status, quality of life, and health-related knowledge and behaviors.

IOM-Donabedian Matrix

Although the IOM framework and the Donabedian framework categorize quality measures differently, one can combine them in a two-dimensional matrix to help clarify how well a measure set covers all of the domains of interest (Table 7). Within each IOM domain, measures of structure, process, and outcome are typically available. Two examples of structural measures are quality improvement systems and adequate nurse staffing, both of which are intended to facilitate safe and effective care.

Some structures facilitate equitable care, such as adequate interpreting services. Other structures facilitate patient-centered care, such as providers' use of patient survey data to guide improvement efforts. Structures facilitating timely care might include health maintenance organization (HMO) policies on prior authorization and provider policies on scheduling urgent care appointments.

Process and outcome measures can address almost any IOM domain of care, although there is clearly some overlap across domains. For example, among the AHRQ Quality Indicators, the Patient Safety Indicators may serve as indicators of safe inpatient care, the Prevention Quality Indicators as indicators of timely and effective outpatient care, the mortality-based Inpatient Quality Indicators (IQIs) as indicators of effectiveness, and the utilization-based IQIs as indicators of resource use.74

Table 7. Matrix of quality measure typologies with examples

IOM Domains | Structure | Process | Outcome
Effective | Cardiac nurse staffing, nursing skill mix (RN/total) | Use of angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) for patients with systolic heart failure | 30-day readmissions (or mortality) for heart failure
Patient Centered | Use of survey data to improve patient-centered care | Did the nurses treat you with courtesy and respect? | Overall rating of care
Timely | Physician organization policy on scheduling urgent appointments | Received beta blocker at discharge and for 6 months after AMI | Potentially avoidable hospitalizations for angina (without procedure)
Safe | Computerized physician order entry with medication error detection | Use of prophylaxis for venous thromboembolism in appropriate patients | Postoperative deep vein thrombosis or pulmonary embolism
Efficient | Availability of rapid antigen testing for sore throat | Inappropriate use of antibiotics for sore throat | Dollars per episode of sore throat
Equitable | Availability of adequate interpreting services | Use of interpreting services when appropriate | Disparity in any other outcome according to primary language

To achieve our national goals for effective, patient-centered, timely, safe, efficient, and equitable care, as set forth by the IOM and recently reinforced by the NPP, quality measurement will eventually need to cover all domains and borrow from all approaches. However, few markets will start with the expertise to collect and report measures in all domains using multiple approaches. Community quality collaboratives may want to consider prioritizing and developing a multiphase strategy for using available data and measures. This matrix may be helpful in facilitating such a phased approach, which begins by capturing “low-hanging fruit” and then gradually expands to cover a broad range of domains and approaches.
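
To make the matrix concrete, the minimal Python sketch below shows how a collaborative might tag candidate measures by IOM domain and Donabedian type and then list the cells that remain empty, highlighting gaps for later phases. This is illustrative only: the measure names and cell assignments are drawn loosely from Table 7, and the code is not part of any AHRQ tool.

```python
from collections import defaultdict

# Hypothetical candidate-measure list; domain/type assignments follow Table 7.
candidate_measures = [
    ("Effective", "Process", "ACE inhibitor or ARB for systolic heart failure"),
    ("Effective", "Outcome", "30-day readmissions for heart failure"),
    ("Patient Centered", "Outcome", "Overall rating of care (CAHPS)"),
    ("Safe", "Structure", "Computerized physician order entry"),
    ("Safe", "Outcome", "Postoperative DVT or pulmonary embolism"),
    ("Efficient", "Outcome", "Dollars per episode of sore throat"),
]

IOM_DOMAINS = ["Effective", "Patient Centered", "Timely", "Safe", "Efficient", "Equitable"]
DONABEDIAN_TYPES = ["Structure", "Process", "Outcome"]

# Build the IOM x Donabedian matrix: each cell holds the measures assigned to it.
matrix = defaultdict(list)
for domain, mtype, name in candidate_measures:
    matrix[(domain, mtype)].append(name)

# Report cells with no candidate measure -- gaps a phased strategy might fill later.
for domain in IOM_DOMAINS:
    for mtype in DONABEDIAN_TYPES:
        if not matrix[(domain, mtype)]:
            print(f"Gap: no {mtype.lower()} measure selected in the {domain} domain")
```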

Although the greatest number of available measures is still in the effectiveness domain (using the process approach), stakeholders are increasingly seeking out a broader range of measures in other IOM domains, including outcome measures. As described at AHRQ's Talking Quality Web site, providing consumers with a clear framework for understanding quality also helps them to grasp the value and relevance of a broader range of quality indicators.  

Question 21. What are the roles and responsibilities of the organizations that endorse or approve measures versus those organizations that develop measures?

Measure Developers

Measure developers use a standardized approach to creating, maintaining, and retiring quality performance indicators. After prioritizing medical conditions or interventions of interest, they identify and recruit experts to review the evidence-based literature and identify candidate measures. Potential process measures are often based on clinical practice guidelines developed by professional organizations and on evidence reviews supported by AHRQ or similar organizations. Technical specifications for data collection and measure calculation are field tested for validity, reliability, and feasibility. Figure 1 and Figure 2 illustrate the Centers for Medicare & Medicaid Services (CMS)128 and National Committee for Quality Assurance (NCQA)129 approaches to measure development, respectively.

Measure developers include a number of professional associations, accrediting bodies, and government entities at the national level. Among them are the American College of Cardiology, AHRQ, the American Cancer Society, the American Medical Association and its affiliated organizations (structured as the Physician Consortium for Performance Improvement), CMS, The Joint Commission, and NCQA. These organizations submit measures for review by endorsement and approval organizations.

Measure Endorsers, Approvers, and Adopters

Measure endorsers or “approvers” use a consensus-based approach to evaluate the feasibility, reliability, validity, and usability of quality performance measures. The process is similar to the measure development process and includes multiple stakeholders to reflect varied perspectives and expertise. The National Quality Forum (NQF) is the premier organization for evaluating health care performance measures throughout the continuum of care.130 It uses a consensus development process, as shown in Figure 3,131 to assess and endorse, when appropriate, voluntary measures submitted by various measure developers. A list of NQF's currently endorsed measures is available at www.qualityforum.org. The criteria that NQF uses in evaluating candidate standards are further described in Question 22.

The Hospital Quality Alliance (HQA), a multistakeholder alliance concerned with hospital quality of care, characterizes itself as a “measure adopter” and selects hospital measures from among those previously endorsed by NQF.124 By implementing consensus-based, nationally standardized performance measures, HQA promotes a common, unified approach to measurement and reporting. Its current compendium of adopted measures is at www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html. Because HQA-adopted measures feed into CMS's Hospital Compare Web site, community quality collaboratives often rely on HQA recommendations when selecting measures for their own reporting programs. However, CMS has stated that Federal law only indicates “that measures must reflect consensus among affected parties and, to the extent feasible and practicable, must include measures set forth by one or more national consensus building entities... the Secretary is not required to limit measures to those endorsed or adopted by any particular consensus organization or quality alliance...”132

The AQA Alliance is a voluntary, multistakeholder collaborative of physicians and other clinicians, consumers, purchasers, health plans, and other interested parties. They have joined together to determine how to most effectively and efficiently improve performance measurement at the clinician or group level. In addition, they examine ways to collect and aggregate performance data and report meaningful information to consumers, clinicians, and other stakeholders to inform decisionmaking and improve outcomes. The AQA has published specific criteria for approving measures. The criteria emphasize the value of measure sets that are aligned with the IOM's priority areas, evidence based, complementary to hospital/facility measures, and focused on high-impact problems. The AQA Alliance defers to NQF as the final arbiter for measure endorsement (“for indicators not endorsed by the NQF during their call for measures and endorsement process, AQA-approval will be rescinded”).133 Its current compendium of approved measures is at www.aqaalliance.org/files/CompendiumofApprovedMeasures.doc.

The American Medical Association Physician Consortium for Performance Improvement (PCPI) offers the opportunity for professional collaboration on developing, testing, and maintaining evidence-based clinical performance measures for physicians. It also reviews measures developed independently.134 PCPI limits its role to development and adoption rather than endorsement; PCPI measures are submitted to AQA and NQF for approval and endorsement. The Joint Commission and NCQA are also measure developers that sometimes adopt measures developed by other organizations.

The Joint Commission's Core Measures are used to inform the hospital accreditation process and NCQA's HEDIS measures are used to inform the plan accreditation process. However, these measures generally are not used by community quality collaboratives for public reporting or pay for performance until they are approved or endorsed by one or more of the three organizations described above (i.e., NQF, HQA, AQA).135 In this way, collaboratives can avert potential criticism for reporting “experimental” measures that are not accepted by stakeholders at the national level. Some collaboratives may implement unendorsed measures, at least in confidential reporting, as a means of testing them, demonstrating their usefulness and feasibility, and thereby helping to move them through the endorsement process.
