Care Coordination Accountability Measures for Primary Care Practice

Measure Development Gaps and Recommendations

Our review of existing measures of care coordination revealed several key measurement gaps for primary care practice accountability and recognition purposes. These are priority areas for further measure development.

Measurement Gaps

Domains Not Captured by Existing Measures

None of the adult measures from the patient/family or health care professional perspectives that are applicable to primary care practice evaluation mapped to the Facilitate Transitions as Coordination Needs Change sub-domain. To date, most of the focus in the literature on changing coordination needs has centered on the transition from pediatric to adult care, and related measures have targeted pediatric populations. However, other changes in needs occur across patients' life spans, such as increases in coordination needs during periods of acute illness or injury, following changes in patients' support networks or personal circumstances, or as some elderly patients' functional or cognitive abilities decline. No measures have been identified that assess how well primary care practices respond to these kinds of changes in coordination need. This area is ripe for further measure development.

Perspectives Not Captured by Existing Measures

A full understanding of care coordination requires measurement from multiple perspectives. The Care Coordination Measures Atlas framework identifies three key perspectives: patients and families, health care professionals, and system representatives. During the measure selection process for the accountability set, no measures suitable for assessing care coordination from the health care professional or system representative perspectives were identified for accountability purposes. (Measures from these perspectives suitable for quality improvement use are identified in the companion measures set section of this report.) This gap reflects the predominance to date of survey-based measures of care coordination. Relying on self-assessment through surveys is not appropriate for accountability purposes.

One way to address this measurement gap would be to develop methods of auditing measures that are self-reported by health care professionals or system representatives. Another method would be to develop measures that rely on data other than self-reported survey responses. While some such measures exist, to date they have been limited in scope, typically focusing on a particular process for specific disease populations, and as such are not appropriate for this measure set, which aimed to identify measures that covered Atlas care coordination activity domains comprehensively in the setting of primary care. Developing a set of care coordination measures that rely on auditable data sources and that together evaluate all care coordination activity domains would enable measurement from these additional perspectives.

Focus on Care Coordination and Measurement Burden

Most measures considered for the Care Coordination Accountability Measures for Primary Care Practice were broad in scope rather than focusing specifically on care coordination. Developing or refining instruments to focus on care coordination would help fill this gap and reduce measurement burden. Enabling and encouraging measurement of care coordination in the context of growing demands for quality measurement in other areas will depend on the availability of measures that offer valuable information with minimal measurement burden. Much work is needed to reach this goal.

Additional Recommendations to Advance the Field

In addition to gaps in available measures, we found many gaps in evidence relating to existing measures. Routinely providing this additional information would help further advance the field of care coordination measurement.

  • Most measures we reviewed need more robust reliability and validity testing. Indeed, many measures had no such testing reported in the published literature. At a minimum, internal consistency and test-retest reliability and well-designed multivariate evaluations of construct validity should be performed and reported. Evidence linking measure results to key outcomes such as hospitalization rates, readmissions, mortality, costs, or patient satisfaction will greatly enhance the validity of such measures. This information is particularly important when considering measures to be used for accountability or recognition purposes.
  • Information on measure feasibility, as demonstrated by the resources required to use a measure and by evidence that it has been successfully implemented for quality improvement or accountability purposes, was rarely reported in the published literature and was also often lacking from supporting materials, such as user guides. Understanding the burden of data collection is a key consideration in choosing measurement tools and is difficult to assess in the absence of such information. Feasibility information, such as typical survey completion times and completion rates, should be routinely reported for all measures as part of reports of reliability, validity, and measure development. In addition, the usability of the measure was rarely reported. In some cases a measure may be easy to collect, but difficult to interpret without extensive additional work.
  • When multiple versions of a measure are developed (e.g., versions targeted toward pediatric vs. adult patients, or versions designed for patients vs. health care professionals), or multiple means of collecting data are used (e.g., an interviewer vs. self-administration), each version and method should undergo reliability and validity testing, and the results of this testing should be reported. Although the content of related instruments may be very similar, their reliability and validity may differ when completed by different respondents or by different methods. For example, reading comprehension likely varies for groups of elderly patients compared to groups of physicians. Extrapolating testing results from one group of respondents to another offers only weak evidence of reliability and validity; this weak evidence is likely insufficient to assure appropriateness of a measure for use as an accountability or recognition tool.
  • In many instances, adapting existing instruments, with repeated reliability and validity testing, may greatly improve their value for care coordination-specific measurement without undertaking entirely new measure development. Users who adapt existing instruments by, for example, using only a sub-set of items from the original instrument, should repeat and report reliability and validity testing. Making this information available to others will help advance the field of care coordination measurement.


Page last reviewed July 2018
Page originally created September 2012
Internet Citation: Measure Development Gaps and Recommendations. Content last reviewed July 2018. Agency for Healthcare Research and Quality, Rockville, MD.