
The Outcome of Outcomes Research at AHCPR: Lessons from Outcomes and Effectiveness Research

Lessons from Outcomes and Effectiveness Research

In addition to the broad accomplishments attributable to OER, the past decade of work has led to some important new understandings about the production and use of OER. Advances that support recommendations made in the final section are discussed here.

Insights About Methods

The effectiveness initiative was anchored in an awareness that information already in hand (routinely collected administrative data and pre-existing research studies) might be used to reduce variation in evaluation and management and to reduce inappropriate care. In its simplest form, the framers of the effectiveness initiative realized that existing data and studies might represent an inexpensive source of knowledge about effective care. Beyond expense, the key advantage seen in the use of large administrative datasets for effectiveness studies was that they avoided a major limitation of trials: limited generalizability. Administrative data reflect patterns of care and outcomes as they occurred in the "real world." On the negative side, efforts to use these databases confronted problems of controlling for systematic bias, such as adequately adjusting for baseline differences in the patient populations receiving different care. The accuracy and completeness of data collected for purposes other than research also arose as a common challenge. Finally, a central premise of evidence-based medicine, that study design is a critical determinant of the validity of findings, raised an important dilemma for investigators examining questions of effectiveness with administrative, observational data.

In fact, a fairly heated debate took place in the early 1990s over the relative merits of observational versus randomized studies (Sheldon, 1994; Reinhardt, 1990). While many subtle arguments were put forward, much of the underlying substance involved the difficulty of controlling for bias in observational studies and the limited generalizability of results from RCTs. The debate over the pros and cons of RCTs and observational studies partially obscures a basic observation that is less controversial: different research designs carry different susceptibilities to systematic bias. The well-described hierarchy of evidence (large RCTs at the top, followed by various types of nonexperimental studies, through expert opinion at the bottom) orders study designs according to the likelihood that bias, rather than a causal relationship, explains a reported association. The two critical questions to ask when considering the adequacy of a particular study design are then: "How likely is it that bias is affecting the results?" and "How certain of the results is it necessary to be in order to change policy or practice?"

In other words, the appropriateness of a particular analytic method cannot be determined without an assessment of the clinical problem being studied. For example, only a large, well-designed randomized study will be adequate to determine the risks and benefits of thrombolytic therapy for acute myocardial infarction (MI). Clinicians and policymakers would not make decisions about this treatment based on observational studies, in part because the negative consequences of error in this context are substantial. On the other hand, an observational study of the outcomes of beta-blocker use in elderly patients may be adequate to affect treatment decisions, mainly because the benefit of this treatment has already been demonstrated for other demographic groups through large RCTs. Furthermore, studies of the impact of system interventions on health outcomes may not need to be randomized, since decisionmakers may be willing to base these decisions on a weaker study design. The factors to be considered include the consequences of basing a decision on a false conclusion versus the time and expense required to obtain information of greater reliability. This is a matter of judgment applied to methods within a particular context, not simply a function of research methods.

It will be valuable to explore more systematically how the features of a particular clinical problem can be matched with the most appropriate tools and methods for studying it, given that the goal is to promote decisions that will improve outcomes of care. It should be possible to determine in advance whether the methods proposed to investigate a specified hypothesis are adequately (or overly) rigorous to produce usable answers (Hornberger and Wrone, 1997).

The original vision behind the effectiveness initiative suggested that observational studies might often serve as a substitute for clinical trials. The emphasis on analysis of large databases was derived from anticipated efficiencies in the use of research resources rather than from a scientific assessment of the evidence needed to stimulate practice change. Observational studies can be a useful and important analytic complement to clinical trials, and experience with observational databases over the past decade has shown that they provide the sort of valuable information described earlier. However, limitations of observational designs prevent these studies from providing definitive answers to many questions of comparative clinical effectiveness, which was one of the original primary goals for OER.

As Richard Deyo summarized his team's 5-year effort leading the low back pain PORT:

"Thus, this PORT (low back pain) has summarized knowledge for both patients and providers, identified important uncertainties in patient management, and helped to establish priorities for further investigation. It has generated new hypotheses about treatment effectiveness and safety and has developed new outcome measures that could be incorporated into future research. Unfortunately, we did not establish "what works" in medicine and would argue for a greater role of randomized clinical trials as part of the outcomes research portfolio" (Deyo, 1995).

Or, as William Blake put it even more succinctly:

"The true method of knowledge is experiment" (William Blake, All Religions Are One, 1788).

To further pursue the goal of determining "what works in health care," it will be necessary to increase efforts to develop creative and efficient strategies for identifying and answering these questions. In particular, this exploration should focus on the critical limitation of many prospective studies: external generalizability. Strategies for gathering high-quality, prospective "real world" data will be developed if investigators are supported in this effort. Further discussion of this strategic approach is presented in the options section of this report.


Recognition of Complexity of Changing Practice

Another important development over the past two decades is the increased recognition that producing new information about outcomes does not inevitably lead to changes in practice. Or, as Yogi Berra said, "In theory there is no difference between theory and practice. In practice there is." Early OER was pursued with the expectation that clear descriptions of practice patterns would stimulate profound and sustained changes in practice. This view was consistent with the basic tenets of Continuing Medical Education (CME)—that all paths to improved quality directly traverse the doctor's cerebral cortex. Dissemination was a requisite part of the first generation of PORTs, reflecting a limited appreciation of the true complexity and resource requirements for effectively creating change.

Research and experience have demonstrated that development and dissemination of even high-quality, highly credible information is often insufficient to alter practices. Many factors and forces in addition to credible information will affect clinical decisionmaking (Lomas, 1993). Where evidence has successfully altered behavior, it is usually in the context of a supportive practice environment, incentives for change, and a focused implementation program (Davis et al., 1995). This recent research has led to a better appreciation for the need to consider dissemination and implementation as activities that require as much time and resources as the OER itself. AHCPR has responded with requests for proposals that address implementation directly. In addition, hospitals and managed care organizations have incorporated a more sophisticated understanding of behavior change into disease management and quality improvement programs.

While it may be important to focus on developing methods to change behavior, measure compliance, and assess changes in patient outcomes, it is less clear what part, if any, of this activity COER or AHCPR should take on. It may be most appropriate for COER to continue to support high-quality outcomes studies and to put additional thought into translating this information into user-friendly formats and materials. At a minimum, strategies to assure that important findings are used by key change agents are essential. Studies intended to impact practice directly will require partnerships with organizations likely to benefit from the desired changes, as well as new approaches to facilitating the requisite partnerships.


Realization that Savings Are Difficult to Achieve

Appropriateness studies published in the late 1980s reported that up to 40 percent of health care services were unnecessary. Regional differences in resource utilization were interpreted to mean that hundreds of billions of dollars could be saved through greater knowledge about effectiveness. Experience over the past decade shows that large savings are much easier to identify in theory than they are to achieve in reality.

There are several reasons why savings have not been as easily realized as expected. First, there is always an up-front expense associated with making changes to any system, and in some cases, those expenses offset some or all of the potential savings. In other cases, they may pose a barrier to initiating the necessary changes. Second, cost-reducing strategies for care will sometimes induce other changes in the amount or type of care delivered, which may offset anticipated savings. Third, the general argument that better quality costs less ignores such obvious cost-increasing quality enhancements as private hospital rooms for all patients and higher staff-to-patient ratios. Many consumers prefer privacy, faster staff response time, and other service improvements that are inherently more expensive. Finally, OER studies have not generally focused on the quality or cost implications associated with underutilization and impaired access. Addressing these problems will also lead to increased spending.

These caveats underscore the importance of decisionmakers' use of cost-effectiveness analysis (CEA) as a framework for making difficult tradeoffs between cost and quality. Evidence to date suggests that current use of CEA is limited at best. Strategies for developing analyses that are timely and offer a conceptual framework rather than 'answers' are an important component of future OER.
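As a worked illustration of the tradeoff that CEA makes explicit, the formula below is the standard incremental cost-effectiveness ratio (ICER); the dollar amounts and QALY values are purely hypothetical and are included only to show how the calculation works:

$$
\mathrm{ICER} \;=\; \frac{C_{\text{new}} - C_{\text{usual}}}{E_{\text{new}} - E_{\text{usual}}}
\;=\; \frac{\$12{,}000 - \$10{,}000}{1.25\ \text{QALYs} - 1.15\ \text{QALYs}}
\;=\; \$20{,}000 \text{ per QALY gained}
$$

A decisionmaker can then compare this ratio with whatever threshold is judged an acceptable price for an additional unit of health outcome. CEA does not remove the tension between cost and quality, but it states the terms of the tradeoff explicitly, which is the conceptual framework referred to above.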


Importance of the Current Quality of Evidence

A substantial challenge for developing the field of OER has been the fact that some clinical areas (e.g., coronary artery disease) have been studied much more extensively than others (e.g., many surgical procedures) (Powe et al., 1994). For example, several of the first generation of PORTs had research teams that were wholly or partly in place prior to the initiation of the PORT program (e.g., back pain, BPH, ischemic heart disease). Others came together in response to the PORT request for proposals (e.g., biliary tract disease, hip fracture, total knee replacement). For the former group, the infrastructure, methodologies, and background work were already in place, and these groups may have been more productive as a result of these advantages.

Measurable impacts on practice or outcomes are likely to take significantly longer when OER is undertaken in fields where the state of the science is less developed. In these areas, it is necessary to develop new methodologies and the requisite infrastructure before effectiveness studies can be completed. In addition, the progression to higher levels of impact requires changes in professional norms of knowledge and practice. To the extent that medical professional organizations facilitate changes in norms and practice, fields that have been more extensively studied are more likely to have available 'receptor sites' within the relevant professional organizations.

It is important, however, to focus research on the types of studies appropriate to the level of evidence available, and to clarify expectations so that contributions in these areas are recognized. Furthermore, effective dissemination of these early studies (i.e., ensuring level 2 impacts) is as important as ensuring that the more developed research is acted upon. For example, medical professional organizations and other intermediaries of changing norms and policies need to be engaged in and exposed to these studies.

