Part One presents basic information about physician feedback reports in a question and answer format:
- Section 1-1: What are confidential physician feedback reports and what is their purpose?
- Section 1-2: Do confidential physician feedback reports work?
- Section 1-3: What types of organizations develop physician feedback reports?
Confidential physician feedback reports refer to data that are shared with physicians on their clinical performance over a specified period, as captured by various quality and resource use indicators. In contrast to public performance reports aimed at supporting the information needs of consumers and purchasers, confidential feedback reports are designed to support the improvement goals of physicians, other clinicians, and health care organizations. Feedback reports also are distinct from electronic “reminders” designed to provide clinical decision support to physicians at the time of a medical encounter with a patient.
While this guide uses the term “confidential feedback reporting,” other terms (e.g., audit and feedback, performance feedback, data/feedback and benchmarking, relative social ranking, practice profiling) represent the same or similar types of reports. Feedback reports are not typically made available to the public, although the extent to which the performance reports remain “confidential” within an organization varies. See discussion of “unblinded” reports in Appendix 1.
Physician feedback reports can be print, Web-based, or embedded in electronic medical records. Although the content, design, and delivery of feedback reports vary widely, most include some way to compare the performance of an individual or group of physicians to that of a comparator group. See discussion of comparators in Section 2-3.
Figure 1 is a hypothetical feedback report comparing six physicians in the same primary care practice site with each other and with a target goal for each of five specific clinical indicators relevant to diabetes care. Scores that are shaded indicate performance below the target goal. For example, the percentage of Physician C’s patients achieving low-density lipoprotein (LDL) cholesterol <100 (28.6%) is below the target goal of 30 percent.
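The below-target shading logic in a report like Figure 1 can be sketched in a few lines of code. This is a minimal, illustrative sketch; the physician labels, indicator names, and target values below are hypothetical examples, not data from the figure.

```python
# Hypothetical target goals for two clinical indicators (illustrative values).
targets = {"LDL < 100": 30.0, "HbA1c < 8": 60.0}

# Hypothetical per-physician scores (percentage of patients meeting each indicator).
scores = {
    "Physician C": {"LDL < 100": 28.6, "HbA1c < 8": 65.0},
    "Physician D": {"LDL < 100": 41.2, "HbA1c < 8": 55.1},
}

def below_target(scores, targets):
    """Return, for each physician, the indicators scoring below the target goal
    (i.e., the cells a report like Figure 1 would shade)."""
    return {
        doc: [ind for ind, val in inds.items() if val < targets[ind]]
        for doc, inds in scores.items()
    }

flags = below_target(scores, targets)
print(flags["Physician C"])  # Physician C's 28.6% falls below the 30% LDL target
```

The same comparison can be run against a peer-group average instead of a fixed target by swapping the `targets` dictionary for computed group means.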
Physician feedback reports are designed to facilitate assessments of care that will lead to improvements in clinical care quality, patient experience, appropriate resource use or cost-reduction, and timely uptake of new clinical advances.
Objectives typically include:
- Enabling physicians, other clinicians, and health care organization leaders to assess their performance, which is a prerequisite to improvement.
- Facilitating dialogue among team members to identify and prioritize areas of health care delivery needing improvement, and in some cases to shift clinical or organizational attention to areas of relative deficit.
- Motivating efforts to improve, specifically by shifting attention to areas of relative need.
- Evaluating efforts to improve, perhaps in response to an earlier feedback report.
Other objectives that may be supported by feedback reports, depending on how they are designed, include:
- Identifying clinical teams or interventions associated with high performance, which can inform improvement plans.
- Supporting patient care management, i.e., when patient-level data are included, by providing access to data that enable clinicians to track whether individual patients are meeting their specified management goals or may be overdue for specific services or followup care.
- Providing linkages and facilitating access to additional improvement-related tools and resources.
Data sources used in creating feedback reports include medical records, registries, administrative or claims databases, observations, and patient surveys. Feedback reports can be delivered periodically or can be designed for ongoing, real-time access if they are built into electronic health information systems. See also related discussion in Section 2-2.
Ultimately the success of physician feedback reporting depends on actions taken on the basis of the feedback. The critical outcomes that measure success are not whether a report has been read and understood, but whether it has contributed to better care (Shaller and Kanouse, 2014). See also related discussion in Sections 3-2 and 3-3.
The potential of feedback reports to support physician behavior change and performance improvement is well documented in the research literature. However, the size of the effect varies considerably across implementation contexts. In contrast, some of the more widely used medical education techniques—didactic educational programs (i.e., lectures) and provision of printed text—have little to no effect on physician behavior and performance (Bloom, 2005).
A recent systematic review of randomized clinical trials examining the effect of quality improvement interventions that included feedback reporting, compared with usual care, found an overall positive effect on measured outcomes. Particularly promising, 27 of the 98 study comparisons in the review showed an improvement of at least 10 percent (Ivers, et al., 2012). Go to the illustrative case example in Text Box 2.
An important conclusion emerging from the systematic review of the evidence is that, while feedback reporting can work, the design and manner in which feedback reports are implemented appear to have a major impact on the extent of their effectiveness.
Part Two of this guide identifies evidence-based design practices, with the aim of increasing the effectiveness of future feedback reports, nudging them toward outcome improvements of 10 percent or more, as achieved in the 27 studied interventions cited above.
|The power of feedback reporting to influence professional behavior can be illustrated by the results of a randomized, controlled trial evaluating the effect of electronic feedback reporting on the behavior of clinicians treating patients with Type 2 diabetes. Clinicians in a Danish county caring for similar types of diabetic patients were randomized to receive or not to receive electronic feedback on their quality of care. Clinicians receiving feedback showed significant improvements in care quality as measured by their adherence to evidence-based, guideline-recommended processes of care. For example, physicians in the intervention practices succeeded in motivating their patients to fill their prescriptions for Type 2 diabetes medications at nearly three times the rate of patients in the control group (Guldberg, et al., 2011).|
The number and variety of organizations that develop feedback reports have greatly expanded beyond health care providers, such as hospitals and medical groups. They also include organizations such as health plans, the Medicare and Medicaid programs, professional societies and boards, and regional health improvement collaboratives. Feedback reports are also used by educational campaigns focused on expediting the adoption of a particular new clinical advance (van der Weijden, et al., 2005; Grol, et al., 2005; Flottorp, et al., 2010).
It is important to note that the “type” of developer has potential implications for the impact a feedback report might have on physician behavior. For example, the developer may be perceived as biased toward a particular goal (e.g., reducing costs) that is not aligned with the goals of the recipient physicians (e.g., improving quality of care). In this case, the uptake of the reports may be affected negatively (Ivers, Sales, et al., 2014). See also related discussion in Section 2-4.
Feedback reports developed by health care providers
Hospitals and large medical practices have developed data collection systems to support performance improvement. Feedback reports are a natural extension of such activities. Go to Text Box 3 below.
|HealthPartners Medical Group (HPMG), based in Bloomington, Minnesota, consists of a network of 55 clinics providing both primary and specialty care. It structures its feedback reports to provide actionable information on clinical quality, patient experience, and total cost of care. Performance on more than 80 metrics is reported each month for each of the following levels: medical group, division, clinic location, individual physician, and individual patient. The ability to review these reports monthly enables team members to identify both successes and missed opportunities quickly so that needed changes can be made and then monitored. It also provides a sense of ownership so that team members can work together to reach their goals (contribution by Nancy Salazar, Director of Care Innovation and Measurement, HPMG, to Shaller and Kanouse, 2014).|
Figure 2 below displays an excerpt of a feedback report developed by HPMG, which set an improvement goal of having 62 percent of a clinic’s patients achieve the optimal vascular care (OVC) measure. The results for the top 10 clinics are listed on the left; for example, Clinic C exceeded the goal, with 70 percent of its eligible 100 patients meeting the OVC goal. The results for the top 25 physicians are listed on the right; for example, Dr. G exceeded the goal, with 88.9 percent of his 36 eligible patients meeting the goal.
Source: Health Partners Medical Group, 2014.
Some reports developed by providers are driven exclusively by their own internal performance goals. Others, at least in part, adopt and include metrics explicitly selected to mirror measures used in accountability programs external to their organization. For example, measures may be based on accreditation and certification requirements, public reports for consumers, pay for performance targets, and national and regional campaigns to promote uptake of new clinical discoveries from the field of clinical effectiveness research.
When providers develop their own reports, depending on the metrics used, they may not have access to data external to their organization that they need to produce performance comparisons with others outside their organization. Such data could be an important feature in engaging and motivating physicians. See also related discussion in section 2-3.
Feedback reports developed by organizations external to providers
A wide variety of organizations external to providers develop and disseminate confidential physician feedback reports to support improvement aims (Grol, et al., 2005). These should not be confused with quality reports available in the public domain and designed for use by consumers. This section discusses confidential reports designed for use by physicians in assessing and improving their performance but developed by organizations external to the provider, i.e., using externally sourced data.
- Private/public health plans and purchasers, including accountable care organizations. Commercial insurers have access to large administrative databases containing utilization and financial information on affiliated physicians that can be and often are used to develop performance reports. United Healthcare, for example, has developed a feedback report for its affiliated physicians on HEDISi measures. In the public sector, the Centers for Medicare & Medicaid Services (CMS) produces feedback reports for physicians and physician groups through its Quality and Resource Use Reports program. CMS also is producing feedback reports featuring key cost and quality metrics for its affiliated accountable care organizations.
At the State level, a growing number of Medicaid agencies are producing performance feedback reports; an informal poll of a subset of Medicaid Medical Directors identified 10 State Medicaid agencies that develop some type of feedback report for affiliated physicians. Other State Medicaid agencies rely on the managed care organizations with whom they contract to develop and implement feedback reports for physicians in their networks (Shaller and Kanouse, 2014).
Figure 3 is an excerpt of a feedback report for medical groups developed by a health plan, BlueCross BlueShield of Massachusetts. The excerpt presents one of several dozen measures that are collectively linked to financial performance incentives developed by the plan. It compares performance on the rate of patients screened for breast cancer with a set of performance targets designed to reward both performance and performance improvement, which for this measure fall between 77.1 and 90 percent screened. For example, “Your Group” achieved a 79 percent screening rate in 2011 and a 78.7 percent rate in 2012, exceeding the minimum threshold of 77.1 percent each year.
i HEDIS = Healthcare Effectiveness Data and Information Set.
Source: BlueCross BlueShield of Massachusetts, 2016. Report layout modified with permission.
- Regional multi-stakeholder health care improvement collaboratives. There are more than 30 such collaboratives in the United States (NRHI Web site, 2015), including those formerly sponsored by AHRQ’s Chartered Value Exchange (CVE) program and the Robert Wood Johnson Foundation’s Aligning Forces for Quality program. A growing number have either developed or plan to develop some kind of provider feedback report. A 2012 informal poll of 24 CVE project directors revealed that at least 15 developed some kind of feedback report, most focused on physicians or medical practices. Many of these collaboratives have created an infrastructure to support the communitywide collection of claims and, in some cases, medical record data; others have developed approaches to communitywide collection of patient experience survey measures.
Physician feedback reports developed by regional health improvement collaboratives have four advantages over those developed by a single plan: (1) the report represents a larger pool of patients and thus more completely reflects physicians’ care; (2) the larger the data pool is, the greater the ability to validly and reliably measure performance; (3) regional collaboratives have a unique ability to provide regionwide benchmarks, which no single care system or health plan can create on its own; and (4) consumers are one of the key stakeholders at the table, so feedback reports are more likely to include consumer-valued metrics as a focus for improvement (Shaller and Kanouse, 2014).
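The reliability advantage of a larger data pool, point (2) above, can be illustrated numerically: the margin of error around an observed performance rate shrinks as the number of eligible patients grows. A minimal sketch using the normal approximation to the binomial; the 70 percent rate and patient counts are illustrative, not drawn from the figures in this guide.

```python
import math

def margin_of_error(rate, n, z=1.96):
    """Approximate 95% margin of error for an observed performance rate,
    using the normal approximation to the binomial distribution."""
    return z * math.sqrt(rate * (1 - rate) / n)

# The same observed 70% screening rate, measured on pools of different sizes
# (patient counts are illustrative):
for n in (30, 120, 1000):
    moe = margin_of_error(0.70, n)
    print(f"n={n:5d}: 70% +/- {moe:.1%}")
```

With 30 eligible patients the uncertainty spans well over 15 percentage points; pooling claims across a region narrows it to a few points, which is why regionwide data make physician-level comparisons more defensible.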
Figure 4 is an excerpt of a feedback report developed by a multi-stakeholder regional collaborative, Oregon Health Care Quality Corporation, which compares screening rates across clinics. For example, Clinic 1 achieved a Cervical Cancer Screening score of 83.3 percent, compared with Clinic 3 at 78.7 percent and Clinic 4 at 63.2 percent.
Source: Oregon Health Care Quality Corporation, 2016. Report layout modified with permission.
- Clinical professional societies and boards. A growing number of clinical specialty societies and boards have developed confidential feedback reports for their members. The American College of Cardiology, for example, develops reports for physicians who voluntarily participate in its registries.
Figure 5 is an excerpt of a feedback report developed by a professional society, The Society of Thoracic Surgeons. For example, the percentage of patients with “any complication” during isolated coronary artery bypass graft (CABG) procedures is shown for a database participant (i.e., typically a hospital cardiac surgery program, a practice group of cardiothoracic surgeons, or uncommonly, an individual surgeon), Surgeon Group Q, for years 2012, 2013, and 2014. The report also shows Surgeon Group Q compared with the percentage of patients with “any complication” for “like groups” performing the same procedures in 2014.
Professional societies and boards also play a more general role in encouraging physicians to support measurement and reporting activities. The 24 member boards of the American Board of Medical Specialties have implemented a program of Maintenance of Certification that requires certified physicians to participate in a number of educational and improvement activities. Many of these activities are supported by a form of feedback reporting. The American Board of Internal Medicine, for example, offers its members Performance Improvement Modules that typically include some type of comparative data collection and measurement to support specific improvement aims (Granatir contribution to Shaller and Kanouse, 2014).
1 Includes reoperations for bleeding/tamponade, valvular dysfunction, graft occlusion, and other cardiac problems.
2 Includes surgical and PCI/transcatheter interventions.
3 Excludes patients with zero vein grafts.
Source: The Society of Thoracic Surgeons, 2016. Report layout modified with permission.
- Campaigns and programs to support and expedite physician uptake of new clinical advances. Patient-centered outcomes research programs and other campaigns focused on shortening the time it takes for new clinical evidence to be implemented in clinical practice often develop a feedback report to motivate uptake and track the rollout of new evidence. Such reports can be freestanding, focused narrowly on the new clinical evidence to be implemented, or the tracking metrics can be integrated into a pre-existing physician feedback reporting system whose developer has agreed to partner with the campaign.
The latter approach has the advantage of using an established reporting infrastructure, which likely has a degree of familiarity among affiliated physicians. If the pre-existing report already tracks performance for a large number of metrics, however, any added measures to support uptake of new clinical evidence may get lost. To the extent the geographic focus of a campaign is large, it is well situated to produce performance benchmarks tailored to a range of implementation contexts.
Practice-Based Research Networks (PBRNs), which consist of 176 networks of primary care clinicians and practices, represent one type of program working to translate research findings into practice (AHRQ PBRN Web site, 2015). Some incorporate feedback reporting into their work. One PBRN in particular, PPRNet, links practices across the United States that use electronic health records to support feedback reporting on 62 quality measures at the practice, physician, and patient level. PPRNet feedback reports also include network and national comparators for practices to use in assessing their progress (PPRNet Web site, 2015).
Figure 6 is an excerpt of a feedback report developed as part of a PBRN campaign to accelerate implementation and diffusion of chronic kidney disease (CKD) guidelines in primary care practice. The first table shows lab test results for patients of “Practice A” before implementation of the CKD guidelines. The second table shows lab test results for the same patients in “Practice A” after implementation of the CKD guidelines. For example, Practice A’s patients had a mean Urine Micro/Creat score of 255.3 before the intervention, which improved after the intervention to a mean Urine Micro/Creat score of 18.7.
Source: Oklahoma Physicians Resource/Research Network project, “Leveraging PBRNs to Accelerate Implementation and Diffusion of Chronic Kidney Disease Guidelines in Primary Care Practice,” 2016. Report layout modified with permission.
With the growth in the number and variety of organizations that develop physician feedback reports comes a significant challenge to the broader enterprise of feedback reporting. An individual physician may receive multiple reports from different sources, such as his or her medical group, the different health plans with which he or she contracts, a regional health care improvement collaborative, and his or her professional society (Teleki, et al., 2006). There is no guarantee that reports produced by different developers are aligned in focus or measure specification. The phenomenon of dueling feedback reports may diminish the visibility—and importance—of any single report, and in the case of conflicting scores, may create confusion and undermine the credibility of provider feedback reporting.