Introduction

Private Performance Feedback Reporting for Physicians: Guidance for Community Quality Collaboratives

Over the past decade, a growing body of research and online resources has emerged to provide guidance on effective practices for publicly reporting provider performance information to consumers.1 These recommendations are for the most part evidence based and assume that an effective report contains performance information that consumers understand and find both credible and relevant, and that it conveys that information in a way that makes it as easy as possible for consumers to use in making good choices among providers.

Another key audience for performance reporting is physicians themselves. Health plans and medical groups have sponsored private physician "performance feedback" reports for many years, with the intention of supporting internal quality improvement efforts as well as patient care management.

More recently, multistakeholder community quality collaboratives, including roughly half of the Chartered Value Exchanges (CVEs) supported by the Agency for Healthcare Research and Quality (AHRQ), have begun to produce some type of private report for physicians in parallel to their public report for consumers. These groups recognize that a single report designed for one audience cannot meet the needs of both. In addition, the Centers for Medicare & Medicaid Services (CMS) has sponsored pilot studies of the effects of providing individual physicians and medical groups with performance feedback based on claims data and CMS's Physician Quality Reporting System.

In contrast with public reports, private reports are often confidential and limited in distribution to those with a "need to know." Thus, little research, even of a descriptive nature, has been conducted on the various forms that private reporting has taken. There has been limited discussion of how to define and measure the effectiveness of such reports, and little evaluation research has been published.2 Therefore, the science of private "feedback reporting" for physicians is nascent at best.

As CVEs and other community quality collaboratives consider strategies for private feedback reporting to physicians and other health care providers, they will need to address basic issues such as report design and distribution. They also will need to examine their role in relation to existing and planned internal performance reporting activities of the health systems and medical practices in their markets. In contrast to public reporting for consumers, where the role of a neutral, multistakeholder collaborative is relatively well accepted as a source of objective, communitywide performance data, the role of community collaboratives in private feedback reporting is not always so clearly defined.

Many health plans and health systems, which may themselves be collaborative members, have developed very sophisticated internal reporting systems of their own based on electronic health records. In the context of these and other private performance reporting initiatives, community collaboratives will need to determine the unique value-added features that their private feedback reports can provide. The goal is to complement rather than compete with reports from their provider members or other report sponsors.

This resource document is intended to provide practical information and guidance primarily to CVEs and other community quality collaboratives interested in the design, dissemination, and use of private feedback reports on physician performance. We begin with an overview of private performance feedback reports for physicians, including a discussion of their goals and a conceptual diagram illustrating their relationship to public reports and the flow of information among participating health plans and medical groups in the context of a community quality collaborative.

We then focus the main body of the resource document on 13 specific recommendations for consideration by CVEs and other community quality collaboratives that are either currently engaged in or are contemplating private feedback reporting. These recommendations include overall strategic issues as well as more specific elements related to the content, design, and dissemination of private feedback reports.

Our recommendations are drawn from the relevant literature, examples from early private feedback report developers, and a case study we conducted with the Health Collaborative, the CVE in Greater Cincinnati. For the case study, we held physician focus groups and interviewed quality managers of local health systems to assess how the Cincinnati CVE's current private feedback report might be made more useful to clinicians seeking to improve quality and care management processes.


Overview of Private Feedback Reporting for Physicians

Private performance feedback reports are aimed at serving the performance measurement and improvement goals of physicians and other health professionals as well as practice managers and leaders. They communicate objective information about performance as captured through such indicators as clinical process, clinical outcome, patient experience, and resource use measures, with the broad aim of facilitating assessments of or improvements in care.


Goals of Private Feedback Reports

As with public reports, the effectiveness of private reports can appropriately be measured in relation to the goals they are intended to serve. These goals include:

  • Enabling clinicians and practice leaders to assess performance relative to peers, benchmarks, or evidence-based practice guidelines, as well as their own past performance;
  • Motivating efforts to improve performance;
  • Supporting patient care management; and
  • Providing access to improvement tools and resources.

In measuring the effectiveness of private feedback reporting in relation to these four goals, only for the first goal can effectiveness be assessed by asking a physician to read a report and answer questions about it immediately afterward. Success in reaching the other three goals depends on what is done later. The critical outcomes are not whether a report has been read and understood, but whether it has contributed to better care. This means that evaluation of the effectiveness of private feedback reporting also has to be embedded in the larger context of quality improvement and/or patient care management.


Relationship to Public Reports and Patient Registries

The first two goals listed above overlap with those of public reporting, in that private feedback reports provide a basis for assessment of clinician performance and help to motivate and focus quality improvement efforts. The latter two goals, however, are quite different from those of public reporting. Public reports give patients and consumers summary-level information to inform their choice among providers. In contrast, to be most effective, private feedback reports must give clinicians more granular information so they can take specific actions to improve performance.

 

Although the data sources for public and private reports are often the same, the audiences and functions of these reports are quite different, as illustrated in Figure 1. In the context of a community quality collaborative, data from claims, medical records, or patient surveys are provided by participating health plans and medical groups to an entity such as a data warehouse sponsored by the collaborative. This entity can aggregate and transform these data into performance measures for reporting.

The performance information conveyed through public reports serves multiple audiences and provides incentives to health plans, care systems, medical groups, and clinicians to improve performance. The information and tools needed to actually implement improvements, however, are provided through the collaborative's private feedback report.

The data generated internally by these organizations can also be used for physician performance monitoring and improvement, either separately or in combination with the private feedback reports obtained through the community quality collaborative. Often, internal data are collected and maintained in patient registries. Registries are systematic aggregations of data on a defined population of patients, such as people with diabetes or patients with cancer or heart disease.

Registries may serve many purposes, such as describing the natural history of disease and measuring or monitoring safety and harm, as well as measuring quality of care.3 When performance-related data from a registry are communicated to providers, the resulting communication can be considered a form of feedback reporting, but the registry itself is not.


Guidance for Private Feedback Reporting

Community quality collaboratives such as CVEs and others face a range of considerations as they develop or redesign private performance feedback reports for physicians. This section presents 13 specific recommendations aimed at supporting community collaboratives in this process. These recommendations are based on the following sources of information:

  • An environmental scan we conducted in 2011 that reviewed the private feedback reports sponsored by five community quality collaboratives: (1) Health Collaborative of Greater Cincinnati, (2) Oregon Health Care Quality Corporation, (3) P2 Collaborative of Western New York, (4) Indiana Health Information Exchange, and (5) Wisconsin Health Information Organization (an affiliated but separate organization from the Wisconsin Collaborative on Healthcare Quality).
  • Results of a survey conducted in 2010 by the Ambulatory Quality Alliance (AQA) Reporting Workgroup (AQA Alliance, 2012), which contacted 104 organizations and received completed surveys from 41 (39.4%), including 36 that identified physicians as one of their target audiences.
  • Privileged communications with several health systems engaged in private reporting, as well as a set of interviews Teleki and colleagues conducted in 2006 with informants from 12 organizations that reported performance information to individual physicians in ambulatory care settings.2
  • A case study project we conducted in early 2012 with the Health Collaborative of Greater Cincinnati aimed at developing recommendations for improving the utility of its private "physician dashboard" report. The case study involved a series of interviews with Collaborative staff that led to a decision to obtain feedback directly from Cincinnati physicians and physician practice quality improvement managers. They provided feedback regarding their needs for performance information reporting, as well as suggestions for improving the content and functionality of the Collaborative's current physician dashboard. Insights from physician focus groups and interviews we conducted in Cincinnati are interspersed throughout the recommendations below. Further information on our case study methods, including a summary of our findings and recommendations, is provided in Appendix B.


1. Understand the Goals and Information Needs of Your Target Audience

One of the most important first steps in developing effective reports, whether public or private, is to clearly understand the goals and information needs of your target audience. In developing performance feedback reports for physicians, this step implies involving physicians and other intended users of these reports, such as quality improvement managers and practice leaders, in identifying the needs that the report can address and in developing the report itself.

It is highly desirable for physicians to be involved in the development of performance feedback reports for several reasons. First, the more heavily they are involved in development, the greater their ownership of the product and the more likely they are to use it. Second, their input helps to ensure that their interests and needs will be reflected in the product. Third, their active engagement in developing a report aimed at quality improvement helps to set the stage for behavior change. It can be especially helpful to involve physicians in teams, since making quality improvement a collaborative enterprise helps to reinforce the notion that striving for quality is an imperative consistent with professional norms.

While the logic of doing this seems obvious, it is not always done. Engaging physicians and other relevant stakeholders takes time and effort, and scheduling time with busy clinicians faced with many competing priorities is not easy. While engaging physicians early and often can pay big dividends, not doing so can result in pursuing a course of action that fails to meet the real needs of physicians and their practices, or duplicates reporting activities already underway by the practices themselves.

For example, although the Cincinnati Health Collaborative has successfully engaged physician leaders in many aspects of its public reporting activities, the Collaborative did not conduct any initial testing or review of its private physician dashboard reports with physicians. Not doing so has contributed to a lack of awareness and use of the Collaborative's private report. Lack of physician engagement also has contributed to a widely shared view that the private report, accessible through a secure data portal, adds little value to the performance feedback information already available to many physicians through their internal systems:

"I don't think our physicians have logged in to look at their personal stuff; I don't know how much they are encouraged to do that. What they care about is what the public can see [on the public reporting site]."
"There's not a reason for individual doctors to look at the portal or their reports; they have no reason to do it when the practice managers are handing them the results [drawn from their internal reporting systems]."
"I don't think they [physicians] are finding value in the portal because we are already giving them the data quarterly, at least on the D5 [diabetes measures]."

—Cincinnati Quality Managers

To their credit, Collaborative staff recognized the need to reach out directly to physicians for their feedback and input on the private physician portal. This feedback, gathered through our case study, led to useful insights regarding what the focus and content of the Collaborative's private feedback reporting should be, as described below.


2. Identify Your Value-Added Reporting Niche

Once you have engaged your target audience to establish their information goals and needs, an important next step is to determine which valuable information you are in a unique market position to supply. For example, in our case study with the Cincinnati Health Collaborative, we learned that virtually all practices in the Cincinnati market, whether part of a large health system or operating as small, independent offices, appear to be meeting most of their performance feedback information and reporting needs on their own. However, it was also apparent that some important needs could be met only through the Collaborative. Among these is the unique ability of the Collaborative to provide community-level benchmarks that no single system can create on its own:

"The one thing that would be helpful, that I hear from my docs, is they want data from other places that their patients go to. We have in our system only what we have in our system. But if an eye exam was done [some place outside of our system] we won't have a record of it. Or if we were lucky and we got a report [from that place] then we'd have it. The health plans have a lot of data that we don't have, and we have data they don't have, but never the twain shall meet."

—Cincinnati Physician

Another area that almost all informants in Cincinnati agreed would be a useful value-added function for the Collaborative is to provide both the data reports and facilitation expertise to support specific quality improvement projects, modeled after a successful diabetes improvement collaborative:

"We spend too much time pulling the data, not enough time working the data."
"The convening function is a good role for the Collaborative. The diabetes collaborative was a good experience. It could be expanded to other areas."

—Cincinnati Quality Managers

The importance of targeting and embedding private performance feedback reports to physicians in the context of ongoing quality improvement activities is described further below.


3. Select Performance Measures That Are Relevant and Actionable

After clearly establishing your performance feedback reporting goals and market niche, the next set of considerations relates to report content and design. To be useful to physicians, the content of measures included in feedback reports must be clinically relevant (i.e., perceived as important aspects of process of care or related to patient outcomes) and actionable (i.e., offering clear steps that a clinician can take to improve performance on the measure).

Among the private reports we reviewed, all contain performance information on clinicians (nearly always physicians), but the type of measures varies. All of the CVE reports we examined included process measures such as preventive screening rates and measures of chronic disease management, such as the D5 for diabetes carei and the C4 for optimal cardiovascular care.ii However, only two included clinical outcome measures, and none included patient experience measures, such as the CAHPS (Consumer Assessment of Healthcare Providers and Systems) Clinician & Group Surveys.

Only the Wisconsin and Oregon collaboratives included resource use metrics, such as utilization rates and cost per episode. The data sources for these measures also varied, with some collaboratives using claims and pharmacy data submitted by health plans and others using direct submissions of clinical record data by medical groups.

 

Table 1. Variation in Performance Measures Used Across Five Community Collaboratives

Measure Type          Collaboratives Including the Measure
Clinical process      All five (Cincinnati, Oregon, Western NY, Indiana, Wisconsin)
Clinical outcome      Two of the five
Patient experience    None
Resource use          Oregon and Wisconsin

Source: Environmental scan conducted between June 2011 and January 2012.

It should be noted that the absence of patient experience measures and limited reporting of resource use and clinical outcome measures in this small sample of five collaboratives reflect the relative unavailability of these measures at the community level, not their relative importance. Indeed, these are very important performance measures and collaboratives may be in a unique position to help create communitywide data for comparison.

For example, some collaboratives have succeeded in publicly reporting patient experience measures at the medical practice or physician level (such as the CVEs in Colorado, Massachusetts, and Minnesota). In addition, the Massachusetts CVE has developed separate private feedback reports to participating medical practices that include more detail for supporting improvement. These examples illustrate how collaboratives can use their leveraging power to expand the content of performance measures available for both public and private feedback reporting.

Not surprisingly, publicly and privately reported measures tend to overlap for collaboratives that have both. Community quality collaboratives generally include all publicly reported measures in their private reports, together with additional measures that provide more detail on performance. Aligning private feedback reporting with the measures that are included in public reports reinforces their relevance, since physicians will have strong incentives to improve on those measures that are publicly reported.
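
To illustrate how a bundled measure such as the D5 might be scored from patient-level data, the sketch below computes an "all-or-none" composite rate per physician. The column names, the all-or-none scoring rule, and the attribution of each patient to a single physician are illustrative assumptions, not the specification used by any of the collaboratives discussed here.

    # Illustrative only: an "all-or-none" D5-style composite computed from
    # patient-level data. Column names and scoring rules are assumptions.
    import pandas as pd

    patients = pd.DataFrame({
        "physician_id":   ["A", "A", "A", "B", "B"],
        "bp_controlled":  [True, True, False, True, True],
        "ldl_controlled": [True, True, True,  True, False],
        "a1c_controlled": [True, False, True, True, True],
        "on_aspirin":     [True, True, True,  True, True],
        "tobacco_free":   [True, True, True,  False, True],
    })

    components = ["bp_controlled", "ldl_controlled", "a1c_controlled",
                  "on_aspirin", "tobacco_free"]

    # A patient counts toward the composite only if all five components are met.
    patients["d5_met"] = patients[components].all(axis=1)

    # Per-physician rate: share of attributed patients meeting all five components.
    print(patients.groupby("physician_id")["d5_met"].mean())

Reporting the component rates alongside the composite makes such a measure more actionable, since a clinician can see which element is driving a low score.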


i The D5 refers to a set of five clinical measures of effective diabetes management (control of blood pressure, cholesterol, and blood sugar levels; aspirin use; and no use of tobacco).
ii The C4 refers to a set of four clinical measures of effective cardiovascular care (control of blood pressure and cholesterol; aspirin use; and no use of tobacco).



4. Include Benchmarks for Comparison to Peers and Normative Standards

Peer comparisons are important in private feedback reports if the goal is to change physician behavior, since physicians are motivated by comparative information.4 In addition, peer comparisons provide a nearly irresistible incentive to engage with a report.

Performance measures used in feedback reports typically compare individual clinician performance with one or more of the following:

  • Practice site, group, or system average.
  • Other individual clinicians in the practice site or group.
  • Community or State average.
  • Peer comparisons by type of practice, such as:
    • Safety net.
    • Rural/urban.
    • Multispecialty.
  • Normative standard or benchmark, such as:
    • 90th percentile.
    • "Best in class" (top performer).
    • Achievable Benchmark of Care™ (ABC).

Effective benchmarks set goals that are achievable, so that they cannot easily be dismissed as unreasonable or unattainable and therefore ignored.5 The ABC, developed at the University of Alabama at Birmingham, provides an objective method for identifying benchmark performance levels that are already achieved by "best-in-class" clinics within a specified region. For detailed information, go to http://main.uab.edu/show.asp?durki=14503.
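
For readers who want a feel for how such a benchmark is derived, the sketch below follows the general logic of the published "pared-mean" ABC approach: rank providers by an adjusted performance fraction, then pool the top-ranked providers until they account for at least 10 percent of all eligible patients. It is a simplified illustration under those assumptions, not the official ABC calculation.

    # Simplified sketch of an ABC-style ("pared-mean") benchmark; an
    # illustration of the general logic, not the official ABC method.
    def abc_benchmark(providers, coverage=0.10):
        """providers: list of (numerator, denominator) pairs, one per provider."""
        total_patients = sum(d for _, d in providers)
        # Rank by the adjusted fraction (x + 1) / (n + 2), which keeps very
        # small denominators from dominating the top ranks.
        ranked = sorted(providers, key=lambda p: (p[0] + 1) / (p[1] + 2),
                        reverse=True)
        num = den = 0
        for x, n in ranked:
            num += x
            den += n
            if den >= coverage * total_patients:  # top performers now cover >= 10%
                break
        return num / den

    # Example: (patients meeting the measure, eligible patients) per provider.
    print(abc_benchmark([(48, 50), (19, 20), (180, 200), (70, 100), (30, 60)]))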

As with the performance measures themselves, methods that collaboratives use to compare clinician scores to benchmarks or peers vary widely, as shown in Table 2.

 

Table 2. Variation in Benchmarks Used Across Five Community Collaboratives

Benchmark Type        Cincinnati    Oregon    Western NY    Indiana    Wisconsin
Practice average   
Group average   
Individual doctors   
Community 
Normative    

Source: Environmental scan conducted between June 2011 and January 2012.

 

Two examples of performance feedback reports using benchmarks are shown (go to Examples 1 and 2).


5. Use Displays To Highlight Most Important Patterns

Although private feedback reporting for providers is separate and distinct from public reporting for consumers, many of the design principles are the same. In both cases, the goal is to design a report with graphic displays and text that tell a clear story about performance that is understood and seen as useful by the target audience.

Clinicians are highly educated and accustomed to viewing and interpreting data displays, so they are in many respects a more forgiving audience than consumers. On the other hand, clinicians have many constraints on their time, so they may be unwilling to invest much effort in decoding an opaque or unnecessarily complicated data display in a report. Therefore, as is true for a consumer audience, it is best to design report formats that will make it as easy as possible for clinicians and practice managers to get the gist of the message in the data without having to work hard to understand the presentation.

The elements of a data display should make it as easy as possible to see the most important patterns in the data. If the main purpose is to convey relative performance of clinicians or groups, displays that order the entities from highest to lowest performance work well. If performance relative to a benchmark is important, a graphic display of the benchmark can be placed near the graphic display of individual performance to make visual comparison easy (as illustrated in Example 1).
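
As a simple illustration of these two display principles, the sketch below orders clinicians from highest to lowest performance and draws the benchmark as a dashed reference line; the names and scores are invented for the example.

    # Illustrative display: clinicians ordered by performance, with the
    # benchmark drawn as a reference line. Names and scores are invented.
    import matplotlib.pyplot as plt

    scores = {"Clinician A": 0.91, "Clinician B": 0.84,
              "Clinician C": 0.78, "Clinician D": 0.69}
    benchmark = 0.88  # e.g., a 90th-percentile or ABC benchmark

    ordered = sorted(scores.items(), key=lambda kv: kv[1])  # highest ends up on top
    names = [name for name, _ in ordered]
    rates = [rate for _, rate in ordered]

    fig, ax = plt.subplots()
    ax.barh(names, rates)
    ax.axvline(benchmark, linestyle="--", label="Benchmark")
    ax.set_xlabel("Proportion of patients meeting the measure")
    ax.set_xlim(0, 1)
    ax.legend()
    plt.tight_layout()
    plt.savefig("clinician_comparison.png")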

Certain design features of data displays have been found to help consumers process and interpret quality information, and most of these should also be helpful for physician audiences. These helpful features include using highlighting, boldface, and text boxes or sidebars6 and placing specific boundary lines to separate different types of information.7 Reports with more white space and improved formatting have been found to help consumers make better decisions than they make when the same information is presented in reports lacking these features.8

Use of meaningful icons can help to highlight important information, but it is important to use icons consistently across all information provided to facilitate comparison and minimize confusion.9 Text labels can be even more effective than graphic symbols in helping consumers correctly evaluate information,10-12 since people differ in their ability to interpret graphics.13

Explanations of measures and scoring are often of greater interest to clinicians than to consumer audiences. Such explanations need to be clearly written, easy to find, and placed on a display in a way that does not create clutter. On a Web display, use of a hover function can make information readily available without being intrusive.

Developing an effective design requires working with members of the intended user audience, presenting mockups of possible displays and testing how understandable and usable they are. Useful guidance on creating well-designed data displays may be found at https://www.talkingquality.ahrq.gov/.


6. Provide Access to Patient-Level Data

 

A key feature of some online private feedback reports for physicians is the ability to view the underlying patient-level data that go into the reported performance measures. This feature of private reporting enables clinicians to identify specific patients who are not in compliance with specified management goals or who are overdue for specific services or followup and to view additional data related to the patient, such as name, age, and date of last visit.

If patient-level information is shared across business entities, a Business Associate Agreement (BAA) is required in order to comply with the Health Insurance Portability and Accountability Act Privacy Rule, which protects the privacy of individually identifiable health information. The BAA provides written safeguards that such protected information will be used only for authorized purposes, including quality assessment and improvement activities, and will not be disclosed in any way that would violate the Privacy Rule.14

Access to patient-level data is a key feature that differentiates private reports that are aimed primarily at assessment from those that also provide tools for improving care management. For providers to take the steps needed for improvement, they need to know where to direct their efforts. Reports that include patient-level data are more actionable than those that do not, since the clinician can drill down to patient-level data to identify where improvements can be made.

Some of our interview respondents said they present information (such as a list of specific patients due for care) in an appendix, companion report, or registry to provide the physician with guidance on specific ways to improve his or her performance (e.g., contacting patients on the list who are due for a mammogram). It is important that data be downloadable to Excel spreadsheets. It is also helpful to have the capacity to download images for use in slides. See Example 3.
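
A minimal sketch of this kind of drill-down and export is shown below; the column names, the two-year screening interval, and the output file name are assumptions made for the example (writing the Excel file relies on pandas with the openpyxl package installed).

    # Illustrative drill-down: list patients overdue for a service and export
    # the list for outreach. Column names and the interval are assumptions.
    import pandas as pd

    patients = pd.DataFrame({
        "patient_name":   ["Jones", "Lee", "Garcia"],
        "age":            [58, 64, 71],
        "physician_id":   ["A", "A", "B"],
        "last_mammogram": pd.to_datetime(["2010-03-02", "2012-01-15", "2009-11-30"]),
    })

    as_of = pd.Timestamp("2012-06-01")
    overdue = patients[(as_of - patients["last_mammogram"]) > pd.Timedelta(days=730)]

    # Spreadsheet-ready list the practice can download and work from.
    overdue.to_excel("overdue_mammograms.xlsx", index=False)
    print(overdue[["patient_name", "age", "last_mammogram"]])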


7. Enable Physicians To Correct Patient-Level Data

 

Access to detailed information at the patient level is a key to physicians' ability to trust the data. Online private reports that include access to patient-level data may include a function that allows clinicians to note where data appear to be in error and need to be corrected. This feature provides a feedback loop for improving data quality. The visible presence of a correction feature may also provide a basis for physicians to have greater trust in the data.

Our case study interviews with quality improvement managers in Cincinnati revealed additional benefits of providing physicians with the means to correct data. Many of the managers are investing substantial amounts of time and resources transitioning to new electronic medical record (EMR) systems and working with practice site managers and physicians to adjust to new data coding and entry protocols. Although the EMR transition process is challenging, getting physicians to focus on data accuracy in charting and engaging physicians in the process of correcting data can help engage them in using the information to provide better care:

"Getting physicians to verify and clean up the data is a very useful step to take because now you have physician buy-in."

—Cincinnati Quality Manager

As shown in Table 3, most of the collaborative feedback reports we reviewed include both access to patient-level data and the ability for physicians to make corrections. See Example 4.

 

Table 3. Variation in Patient-Level Data Access Across Five Community Collaboratives

Feature                            Collaboratives Including the Feature
Access to patient data             Most of the five
Ability to correct patient data    Most of the five

Source: Environmental scan conducted between June 2011 and January 2012.


8. Use Sound Methods and Make Them Transparent

Physicians' perceptions of the accuracy and completeness of the data are critical if the data are to have sufficient credibility for physicians to rely on them as indicators of their need for quality improvement. Factors contributing to the credibility of the data include2:

  • A sample size that is adequate to produce reliable performance estimates (see the sketch after this list).
  • Reasonable procedures for attribution of clinical responsibility, clearly explained.
  • Transparent measurement and scoring processes.
  • Case-mix adjustment procedures that remove the effects of patient factors that are not under the clinician's control.
  • Evidence of a competent data collection process by a trustworthy entity.
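
As a rough illustration of the sample-size point above, the sketch below computes a 95 percent confidence interval for a clinician's measure rate; a very wide interval signals that the panel is too small to support a reliable estimate. The Wilson interval is one common choice, not a method prescribed by any of the sources cited here.

    # Rough illustration of sample-size adequacy: the 95% Wilson interval
    # for a clinician's rate. A very wide interval means the estimate is
    # too unreliable to support conclusions about performance.
    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    print(wilson_interval(8, 10))    # 10 patients: roughly 0.49 to 0.94
    print(wilson_interval(80, 100))  # 100 patients: roughly 0.71 to 0.87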

In 2006, the AQA developed a set of principles to guide the reporting of performance information to clinicians and hospitals. A major focus of these principles, included in Appendix A, is on the issue of transparency. Guidelines published in 2012 by the American Medical Association emphasize the need for greater industrywide standardization of reporting formats, the importance of transparency regarding the processes used to create the report, and the need for physicians to have access to patient-level data.15


9. Update Data At Least Quarterly

A key feature influencing the value of private feedback reports is the timeliness of the data presented, which is largely determined by the frequency with which the data in the report are updated. Most physicians and quality managers we interviewed as part of the Cincinnati Health Collaborative case study project stated that monthly reports are ideal both for tracking performance and for monitoring patient care. All agreed that data should be updated no less frequently than each quarter.iii

One of the biggest drawbacks noted regarding the current Collaborative physician dashboard is the lack of timely and current data:

"It's just annual; it's old data."
"They [physicians] can't use it for process improvement because they're not gonna wait that long to see if a process is going to improve an outcome."
"A one-time snapshot each year is helpful for community comparisons but not for patient management."

—Cincinnati Physicians and Quality Managers

As noted above, timeliness of data is especially important for management of patient care. For performance assessment, longer measurement intervals are often needed to gather enough data for reliable assessment, but timeliness is still important.


iii The importance of timeliness of feedback to physicians' perceptions of the meaningfulness of the data has also been shown in the context of quality improvement in hospital settings. See Bradley EH, Holmboe ES, Mattera JA, et al. Data feedback efforts in quality improvement: lessons learned from US hospitals. Qual Saf Health Care 2004;13:26-31.



10. Build in Capacity To View Performance Trends

Another feature that enhances the value of private feedback reports for performance monitoring related to quality improvement goals is the ability to view measures over time. Monthly run charts are often used as a quality improvement tool for tracking performance against the median score or some other performance goal.
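
A run chart of this kind takes only a few lines of plotting code; the sketch below tracks a monthly measure rate against its median, with values invented for the example.

    # Illustrative monthly run chart: a measure rate tracked against its
    # median. The monthly values are invented for the example.
    import statistics
    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    rates = [0.62, 0.65, 0.64, 0.70, 0.73, 0.75]  # e.g., share of patients meeting the D5
    median = statistics.median(rates)

    fig, ax = plt.subplots()
    ax.plot(months, rates, marker="o", label="Practice rate")
    ax.axhline(median, linestyle="--", label="Median")
    ax.set_ylabel("Proportion of patients meeting the measure")
    ax.set_ylim(0, 1)
    ax.legend()
    plt.tight_layout()
    plt.savefig("run_chart.png")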

 

One aspect of the Cincinnati Health Collaborative's private data portal that was widely cited as extremely valuable by quality managers (as well as by the few physicians who were aware of it) is the component of the site supporting the diabetes quality improvement project described earlier. The features of this dashboard component that make it especially useful include its monthly updating of measures and the ability of practice sites in one system to view their performance relative to sites in other systems, a capability that no single organization has on its own. See Example 5.

"The run charts are great. I want to see trends, and drill down to specific D5 components."

—Cincinnati Quality Manager


11. Distribute Reports Through Multiple Channels

Media usage habits of physicians (as well as other professionals and the broader public) have undergone rapid change in recent years, but not everyone has adopted electronic media in the same way or at the same pace. A study of physician-level private reports conducted in 2006 found that dissemination strategies involved the use of various media: printed hard copies; electronic static copies; and flexible, interactive Web-based versions.2

Producers of Web reports noted that interactive Web formats were especially helpful in allowing users to tailor information according to the amount of detail they preferred. Web-based formats are also more easily updated. Web-based formats were used by all five community quality collaboratives in our environmental scan.

Given that the best ways to reach individual physicians are likely to vary within most practice communities for some time, a distribution strategy that combines multiple approaches may be more likely to be successful in many communities. Examples of various methods that can be used to distribute private feedback reports include:

  • Email messages to clinicians and practice managers with relevant reports attached as Excel or PDF files.
  • Email messages to clinicians with a URL that directs them to online reports.
  • Faxed reports to clinicians who do not respond to email.
  • Mailed hard copy summary reports to medical groups, practice sites, and clinicians (that may supplement access to online reports).
  • Posting of comparative reports onsite (to support transparency related to clinician performance).
  • Presentation and discussion of private reports in the context of Quality Committee meetings, quality improvement collaborative meetings, staff meetings, and individual performance appraisal reviews.

As we noted earlier, meta-analytic studies of the most effective interventions for changing practitioners' behavior indicate that interactive approaches work best. Therefore, we recommend that community quality collaboratives not rely exclusively on dissemination strategies that simply get the information into the hands of individual physicians (email, fax, mail, etc.). That is a necessary beginning, but individual distribution is unlikely to achieve the kind of interactive engagement that the literature shows to be important.

To achieve that, collaboratives may find it more effective to encourage interactive engagement with the substantive findings in a private report by teams of physicians in the context of setting or reviewing quality improvement goals. Engagement in a collaborative task that makes use of data from private reports to advance quality improvement goals is likely to strengthen shared professional values, beliefs, and norms related to quality improvement.16


12. Embed Feedback Reporting as an Integral Part of Quality Improvement

The single most important lesson to be gleaned from the research literature on the effectiveness of interventions to change clinical practice is that if private reports are used as a standalone intervention aimed at changing practice primarily through a feedback mechanism, they can be expected to have only modest effects on practice. Substantial literature exists on the effects of audit and feedback on clinical practice, and the picture that emerges from scores of studies generally indicates only small to moderate change.17

However, substantial change can occur under certain conditions, such as the following:

  • The professionals receiving feedback are not doing very well to begin with.
  • The audit and feedback is performed by a supervisor or colleague.
  • The feedback is provided more than once, both orally and in written form, and is accompanied by clear targets and an action plan.18

There are at least two reasons that change is usually quite modest. First, much of what doctors do in their practice is routinized and customary behavior.19 It is not thought out anew on each occasion in the thorough way that is likely to have occurred when practice habits were first being formed. Routinized behaviors are hard to change unless something happens to call the usual response into question. Second, information is not in and of itself very motivating, especially for professionals who are inundated with information on a daily basis. Without a trigger prompting them to do so, doctors are unlikely to review the potential practice implications of most new information they receive.

However, private feedback reports can likely play a significant role in promoting practice change if they are used strategically within a quality improvement program. Programs to change practitioner behavior and patient outcomes are more successful when they use interactive techniques (e.g., interactive education, academic detailing, improvement collaboratives) than when they use less interactive measures such as didactic lectures and distributing print materials alone.20 If private reports are designed and disseminated in ways that encourage physician engagement and interactive use, they have the potential to occupy a "sweet spot" on the interactivity continuum that influences intervention effectiveness.

Our case study in Cincinnati reinforces the value of using physician feedback reports as a tool for quality improvement. Practices participating in the diabetes collaborative sponsored by the Health Collaborative submit monthly uploads of data to the physician dashboard portal for a sample of patients, and the private dashboard displays run charts as well as scatter plot charts for each measure at the practice site level. Physicians participating in the diabetes collaborative expressed enthusiasm for the process of identifying and testing process improvement methods:

"It's not just the numbers; it's the interaction. It [the learning collaborative] brings us together in a different spirit."
"Internal benchmarking is certainly useful but if we are only looking at potential for improvement in isolation, we're not challenged by what another practice might be demonstrating or trending or making a major improvement. So we benefit from that competition on one level and then certainly, more altruistically, what we can learn from each other."

—Cincinnati Physicians

Because the goals of private reporting involve identifying specific ways in which practices need to be changed and supporting those changes by providing useful information, private reporting should always be designed and refined with those goals in mind. It is helpful to consider the role that private feedback reporting can play in a six-step Plan-Do-Study-Act (PDSA) cycle (Institute for Healthcare Improvement).

 

As shown in Figure 2,21 the six-step PDSA cycle applied to the use of private reports would proceed as follows:

  1. Use private reports to flag areas that need improvement, confirming with other data. [Plan]
  2. Select performance measures. Develop new ones if needed. [Plan]
  3. Set goals for improvement and write an action plan. [Plan]
  4. Implement the action plan. [Do]
  5. Assess progress and refine the intervention. [Study]
  6. Monitor improvements to make sure they stick. [Act]

Performance measurement plays a key role in most of these steps, and private reporting serves both to inform and to motivate the actors who need to design, implement, and monitor the action plan.


13. Evaluate Private Feedback Reports Against Reporting Goals

To ensure continued effectiveness over time, private feedback reports on physician performance, like all reporting tools, should be periodically assessed for their utility to users and their impact in helping to achieve their intended goals. Patient management and quality improvement goals have different foci (the individual patient vs. patients with specific characteristics or a practice as a whole), and the role played by feedback reports for these goals is likely to differ as well. For example, aggregating data across patients, ensuring adequate sample sizes, and comparing performance across physicians become important for assessing quality at the practice level but not for managing individual patients.

Feedback can be obtained from focus groups and key informant interviews, both of which we used in assessing the Cincinnati Health Collaborative's physician dashboard (go to Appendix B). Other methods for monitoring and assessing reporting effectiveness include surveys of users and, for online reports, simple tracking of user registrations and log-in activity, or Web-based tracking programs such as Google Analytics.

Evaluating the effectiveness of private performance reports against the goals of reporting requires going beyond measuring mere use of private reports; it means assessing whether private reports have provided information that has helped physicians to manage patients better or has enabled systems and practices to improve care. Obtaining feedback from users is a critical part of such an evaluation, but so is an ongoing assessment of how reports can be used in new ways to identify opportunities for improved care.

Page last reviewed November 2012
Internet Citation: Introduction: Private Performance Feedback Reporting for Physicians: Guidance for Community Quality Collaboratives. November 2012. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/professionals/clinicians-providers/resources/privfeedbackgdrpt/privfeedbackgdrptintro.html