Translating Evidence into Practice 1998 (continued, 5)

Conference Summary


Session 5. Clinical Practice Guidelines: The National Guideline Clearinghouse™ and Structured Reviews

Moderator: Jean Slutsky, P.A., M.S.P.H., AHCPR

The National Guideline Clearinghouse™: Overview of ECRI's Technical Approach—Vivian Coates, M.B.A., Vice President, Information Services and Technology Assessment, ECRI

Ms. Coates stated that ECRI is under contract with AHCPR to develop and implement the technical aspects of the National Guideline Clearinghouse™ (NGC), an effort that includes the development of viable options to transform the NGC into a self-supporting entity. Key components of the NGC include structured abstracts about the guideline and its development; a utility for comparing attributes of two or more guidelines side-by-side; syntheses of guidelines covering similar topics, highlighting areas of similarity and difference; links to full-text guidelines, where available, and/or ordering information for print copies; an electronic forum for exchanging information on clinical practice guidelines, their development, implementation, and use; and annotated bibliographies on guideline development methodology, implementation, and use. ECRI uses the Institute of Medicine's evidence-based definition of a clinical practice guideline.

The guideline summary sheet is the primary tool used to create the standard NGC view formats—the Brief Guideline Summary, the Complete Guideline Summary, the Guideline Comparison, and the Guideline Synthesis. The guideline summary sheet lists more than 50 guideline attributes, each of which corresponds to a data field in the NGC database. In general, these attributes describe three categories of information—bibliographic, methodologic, and clinical information. Thus, NGC allows a user to perform a search of the guideline summary sheet, retrieve guidelines, select the guidelines for comparison, and receive a tabular bibliographic, methodologic, or guideline synthesis comparison.
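The attribute-to-field mapping behind the comparison utility can be sketched in a few lines of Python. This is an illustrative model only: the field names, sample values, and comparison function below are hypothetical, and the actual NGC schema covers more than 50 attributes across the three categories.

```python
from dataclasses import dataclass

@dataclass
class GuidelineRecord:
    """One guideline summary; fields are grouped into the three
    attribute categories described above (illustrative subset only)."""
    # Bibliographic attributes
    title: str
    developer: str
    release_year: int
    # Methodologic attributes
    evidence_rating_scheme: str
    # Clinical attributes
    target_population: str

def compare(guidelines, attributes):
    """Build a side-by-side comparison: one row per requested
    attribute, one column per guideline."""
    return [[attr] + [getattr(g, attr) for g in guidelines]
            for attr in attributes]

# Two hypothetical records compared on selected attributes
g1 = GuidelineRecord("Hypertension management", "Org A", 1997,
                     "A-C grading", "Adults with essential hypertension")
g2 = GuidelineRecord("Hypertension in primary care", "Org B", 1998,
                     "I-III levels", "Adults seen in primary care")
table = compare([g1, g2], ["developer", "release_year", "evidence_rating_scheme"])
```

Each row of `table` pairs an attribute name with its value in every selected guideline, which is essentially the tabular comparison view the summary describes.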

Development of a Structured Appraisal Framework and Its Application in the UK—Françoise Cluzeau, M.Sc., Lecturer, St. George's Hospital Medical School, University of London

Ms. Cluzeau said that guideline development in the United Kingdom has been largely uncoordinated. A joint working program among the Health Care Evaluation Unit at St. George's Hospital Medical School in London, the Health Services Research Unit from Aberdeen University, and the Royal College of Physicians was established. The first objectives of the program were to produce an instrument for appraising the methodologic quality of clinical guidelines and to formulate a set of national recommendations for the systematic development of clinical guidelines.

A new appraisal instrument was designed containing 37 questions structured along three conceptual dimensions. The first dimension assesses the rigor of development, the second examines the context and content of the guidelines, and the third assesses the application of the guidelines. The instrument is used as a national template for the rigorous development of guidelines with the flexibility to be adapted locally. The National Health Service (NHS) Executive has commissioned the Health Care Evaluation Unit to formally appraise national guidelines using the research methodology. Appraisal reports are used by the National Guidelines Committee as a basis for recommending guidelines to the NHS. The UK is also considering a national guideline clearinghouse.


Session 6. Linking Evidence Reports to Practice Improvement

Moderator: Francis Chesley, M.D., AHCPR

Making Continuous Quality Improvement a Routine Part of Primary Care Practice—Harold I. Goldberg, M.D., Professor of Medicine, University of Washington

Dr. Goldberg summarized the results obtained from the first multicenter, randomized, controlled trial to study the effectiveness of using continuous quality improvement (CQI) teams in chronic disease care. The study was designed to test whether the continuous involvement of CQI teams could improve compliance with national clinical practice guidelines for the care of hypertension and depression.

Initially, the CQI teams were asked to focus on five areas of improvement: for hypertension, the goals were to change prescribing and improve control rates; for depression, the goals were to better recognize depression, change prescribing, and decrease symptomatology among patients. The teams' failure to produce positive results led to the conclusion that the "plan, do, check, act (PDCA)" approach was too difficult for most local settings to implement successfully.

Another clinic tested the implementation of a computerized reminder system in which the clinic was asked to perform the "do" and "act" functions of the PDCA, while Dr. Goldberg and his group performed the "plan" and "check" functions. The study proved to be low cost and unobtrusive. Trials such as these could be helpful in encouraging the rigorous identification of best practices within primary care clinics.

The Stroke Prevention Policy Model: Linking Evidence and Clinical Decisions—David B. Matchar, M.D., Director, Health Services and Policy Research, Duke University Medical Center

Dr. Matchar and his colleagues sought to bridge the gap between the evidence-gathering and implementation phases via their Seven-Step Model for Practice Improvement Research. The basic elements of this approach are the functional definition (the site-independent characteristics of optimal care) and the tool box (the means by which the functional definition may be implemented at the local level). Also essential is the assessment of implementation, which is how the sites determine whether they are accomplishing the intended intervention.

The seven steps of the model are: (1) identify the potential target of opportunity; (2) synthesize information about optimal practice; (3) synthesize information about current practice; (4) identify reasons for discrepancies between current and optimal practice; (5) develop a strategy for practice improvement; (6) assess effectiveness and cost-effectiveness of the practice improvement strategy; and (7) determine whether or not the practice improvement strategy should be implemented. Managed care organizations (MCOs) may provide excellent opportunities for testing new strategies for linking evidence into practice.


Session 7. Cost-Effectiveness Analysis and Decisionmaking

Moderator: David Atkins, M.D., AHCPR

Is the Societal Perspective in Cost-Effectiveness Analysis Useful for Decisionmakers?—Louise B. Russell, Ph.D., Professor, Institute for Health, Health Care Policy, and Aging Research, Rutgers University

Dr. Russell stated that the U.S. Public Health Service Panel on Cost-Effectiveness in Health and Medicine had made recommendations that provide a framework for consistent practice in cost-effectiveness analysis (CEA). The main purpose of CEA is to provide decisionmakers with information about the costs and the health outcomes of interventions so that they can better evaluate various interventions and the trade-offs involved in choosing among them. The panel recommended that a reference case analysis be included in studies relating to the broad allocation of health resources across conditions and interventions. The panel also recommended a societal perspective for the reference case.

CEAs performed for various purposes should be compared with one another and accrued to a database that can be used to inform public decisionmaking about social priorities. The major groups affected include patients, clinicians, families and friends, and others, including third-party payers. The societal perspective seeks to ensure that no party who may be affected by a particular intervention, and none of its potential effects, is ignored. The CEA should include all the elements that went into developing the cost-effectiveness ratios. Benefits, harms, costs, and decisions about how much information to provide, and to whom, regarding potential tensions among the parties involved are all vital to an effective CEA.

The Once and Future Application of Cost-Effectiveness Analysis—Marc L. Berger, M.D., Executive Director, Merck & Company

Dr. Berger stated that CEAs are not being applied currently in the marketplace, which often misinterprets cost-effectiveness to mean cost-savings. Potential applications of CEAs at the population level include payers using CEAs to inform coverage decisions; providers using CEAs to inform formulary decisions, treatment-guideline adoption, and disease-management targeting; large purchasers and employers using CEAs to inform decisions regarding choice of quality measures; and policymakers using CEAs to inform decisions regarding resource allocation and core coverage, strategic planning, and shared valuation of new technologies. The fact that decisionmakers are not currently using CEAs to inform such decisions has led to major inconsistencies in the cost-effectiveness of what is being provided and what is not.

Fundamental questions are implicit in every CEA:

  • From a social standpoint, are all lives equal in importance?
  • From a political standpoint, do we live in an egalitarian or a utilitarian society?
  • From an economic standpoint, what are our responsibilities for the welfare of others?

We want a way to measure value at the personal and societal levels, and CEA by itself is insufficient to establish value. A social process is needed that legitimizes choices for a population that includes individuals and groups with conflicting needs and values. This requires shared deliberations, an appeals process, and making the values explicit in contracting, so people know what they are getting in their health care plans. Assigning value to health care will optimally be defined by a process that facilitates and legitimizes reconciliation of the different value systems of various patient groups, a process informed by CEAs but not defined by them.

Cost-Effectiveness Analysis and Decisionmaking in Health Plans—Gwen Wagstrom Halaas, M.D., M.B.A., Associate Medical Director and Director, Medical Policy, HealthPartners

Dr. Halaas discussed why it may appear that health plans do not take advantage of such sophisticated tools as CEAs. Some barriers to use of CEAs in decisionmaking are: for providers, there is no incentive to eliminate waste or consider broader concerns when faced with an individual patient; for health plans, there is limited knowledge about the costs and benefits of alternative interventions and a high degree of variability based on patient preferences and needs; and for departments of health and other regulatory bodies, there is a lack of consensus in our society as to what is valued in terms of quality of life issues, and there are various regulatory constraints.

Within health plans, CEAs could be used to increase the efficiency of systems, to guide difficult resource-allocation decisions, and to aid in utilizing best practice guidelines. Further, CEAs can be used to better inform health plan decisions, to assist in the development of guidelines, to guide outcomes research, and to better inform public agencies. Sophisticated CEAs can perhaps only be done in limited institutions, and health care providers and plans can then use the information developed to make good decisions.


Current as of January 1997
Internet Citation: Translating Evidence into Practice 1998 (continued, 5): Conference Summary. January 1997. Agency for Healthcare Research and Quality, Rockville, MD.