Translating Evidence into Practice 1998 (continued, 3)

Conference Summary

TRIP

Session 1. Pooling Research Results: Benefits and Limitations of Meta-Analysis

Moderator: David Atkins, M.D., M.P.H., AHCPR

The Contributions of Meta-Analysis to Effectiveness Research—Neil Powe, M.D., M.P.H., M.B.A., Johns Hopkins University School of Hygiene and Public Health

Dr. Powe reviewed the types of evidence—randomized or nonrandomized controlled trials, testimony or theory, meta-analysis, case reports and anecdotes, observational studies, narrative review articles, and case series—used in decisionmaking for clinical practice and policy. Meta-analytic tools have been used in effectiveness research for guidelines development, patient outcomes research (published reports, methods development, decision and cost-effectiveness analysis, areas for research, quality improvement of literature), and coverage decisions by health care plans.

Evidence-based reports and meta-analyses rely on experts in methodology. Meta-analysis fails when results are combined simply to create a large sample size, against better judgment, leading to misleading conclusions. Heterogeneity in meta-analysis arises from several sources (the content, intensity, timing, duration, and setting of the interventions; the patients; the providers). Therefore, differences among studies should be assessed both quantitatively and qualitatively.

The best evidence needs to guide clinical practice decisions. Evidence-based medicine examines all study results and, when pooling is appropriate, provides greater statistical power. Meta-analysis also has a large role in designing future clinical trials: selecting appropriate primary and secondary endpoints and measurements, estimating sample size in the overall population and in subgroups, and identifying important factors for blocking.
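To make the gain in statistical power concrete, the following is a minimal sketch of fixed-effect (inverse-variance) pooling; the effect sizes and standard errors are illustrative placeholders, not results from any study discussed at the conference.

```python
import math

# Hypothetical log odds ratios and standard errors from five small trials
# (illustrative placeholders only).
effects = [-0.42, -0.10, -0.55, -0.28, -0.35]   # per-trial log odds ratios
std_errs = [0.30, 0.45, 0.38, 0.25, 0.40]

# Fixed-effect (inverse-variance) pooling: weight each trial by 1/variance.
weights = [1.0 / se ** 2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# The pooled standard error is smaller than any single trial's, which is the
# gain in statistical power that appropriate pooling provides.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log OR = {pooled:.3f}, 95% CI ({low:.3f}, {high:.3f})")
print(f"pooled OR     = {math.exp(pooled):.2f}")
```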

Advantages and Limitations of Meta-Analysis—Joseph Lau, M.D., Associate Professor of Medicine, New England Medical Center

Dr. Lau said that cumulative meta-analysis has demonstrated that routine updating of meta-analyses provides timely information for therapeutic guidance. Discrepancies between results obtained from meta-analyses and mega-trials have been reported, igniting debate about the reliability of meta-analysis. When results disagree, the issue is whether the meta-analyses or the large trials are unreliable, or whether a common thread can explain the differences. Meta-analyses of smaller studies are generally comparable to large studies, and differences can often be attributed to insufficient sample size, differences in control rates, and publication bias.
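A minimal sketch of the cumulative approach follows: the pooled estimate is recomputed each time a new trial is added, so the summary evidence stays current. The trials, years, and numbers are hypothetical.

```python
import math

# Hypothetical trials in chronological order: (year, log odds ratio, standard error).
trials = [
    (1988, -0.60, 0.50),
    (1990, -0.25, 0.40),
    (1992, -0.45, 0.35),
    (1994, -0.30, 0.30),
    (1996, -0.38, 0.22),
]

# Cumulative meta-analysis: re-pool the accumulated evidence each time a new
# trial is published, so the summary estimate is always up to date.
weights, effects = [], []
for year, effect, se in trials:
    weights.append(1.0 / se ** 2)
    effects.append(effect)
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"{year}: pooled log OR {pooled:+.2f}, 95% CI ({low:+.2f}, {high:+.2f})")
```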

Heterogeneity in meta-analysis can be estimated; it can then be ignored (the fixed-effect model), incorporated (the random-effects model), or explained (subgroup analysis or meta-regression). Increasingly, response-surface analysis based on individual patient data is being used to provide additional precision and insight.
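The sketch below shows one common, generic way these ideas are operationalized (not necessarily the method the speakers used): Cochran's Q and I-squared to quantify heterogeneity, and the DerSimonian-Laird estimate of between-study variance to fit a random-effects pool. Inputs are illustrative placeholders.

```python
import math

def random_effects_pool(effects, std_errs):
    """DerSimonian-Laird random-effects pooling with a Cochran Q heterogeneity check.
    Inputs are per-study effect estimates (e.g., log odds ratios) and standard errors."""
    w = [1.0 / se ** 2 for se in std_errs]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

    # Cochran's Q quantifies heterogeneity; I^2 expresses it as a percentage.
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Between-study variance (tau^2), DerSimonian-Laird moment estimator.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0

    # Random-effects weights incorporate tau^2, widening the confidence
    # interval rather than ignoring the heterogeneity.
    w_re = [1.0 / (se ** 2 + tau2) for se in std_errs]
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    pooled_se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled_se, q, i2, tau2

# Illustrative numbers only.
pooled, se, q, i2, tau2 = random_effects_pool(
    [-0.42, -0.10, -0.55, -0.28, -0.35], [0.30, 0.45, 0.38, 0.25, 0.40])
print(f"random-effects log OR {pooled:.3f} (SE {se:.3f}); "
      f"Q = {q:.2f}, I^2 = {i2:.0f}%, tau^2 = {tau2:.3f}")
```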

Pooling Research Results: Benefits and Limitations of Meta-Analysis—Donna Stroup, Ph.D., M.Sc., Centers for Disease Control and Prevention

Community-based (epidemiologic) research differs from clinical medicine in focus (does it work in populations versus how does it work?); justification (does it work [impact] versus can it work?); funding (project-specific versus investigator-specific); and ethics (humans versus animals, cells, and extracts). Dr. Stroup noted that in community-based research, rules of evidence are determined by disciplines rather than by projects, there is no consensus on a hierarchy of evidence, and interventions are often complex and compound. Meta-analyses offer expanded generalizability of results, the potential to explore sources of heterogeneity, and utility when randomized controlled trials are not feasible.

Meta-analyses are being done and used for policy decisions. The challenge is to learn how to use meta-analysis as a tool and how to evaluate its contribution to those decisions.


Session 2. Shifting Standards of Evidence? From Coverage to Case Review to Court

Moderator: Clifford Goodman, Ph.D., The Lewin Group

Dr. Goodman suggested that a set of rules of evidence has the following parameters: prospective studies over retrospective studies, randomization as the preferred method of assigning patients to an intervention, large sample sizes over small, contemporaneous controls over historical controls, and blinding of patients and providers over unblinded studies. These rules have been assembled into hierarchies of stronger to weaker evidence, with large randomized controlled trials at the top, then observational epidemiological studies, case series further down, and case studies and expert opinion at the bottom. Smart policymakers link the strength of evidence to the strength and direction of their recommendations, with a paradigm of using evidence from stronger to weaker.

Evidence and Coverage of Medical Technology by Health Care Plans—Claudia A. Steiner, M.D., M.P.H., Physician Researcher, AHCPR

A national cross-sectional survey was conducted on a population that included HMO plans, indemnity insurers, and, separately, the four largest insurers. Medical directors were targeted because they are key people in making individual coverage determinations.

Dr. Steiner said that, for reviews of individual medical coverage, 27 percent of respondents reported that medical directors alone made the decisions, but only 19 percent thought medical directors should be making decisions alone. Clinical input was quite high, with 90 percent of these plans having physicians involved; nevertheless, medical directors do not have full autonomy in making decisions.

In the respondents' consensus ranking of the rigor of evidence, randomized controlled trials came first (cited by 90 percent), followed by meta-analyses and reviews and then nonrandomized controlled trials; testimony, theory, and anecdote fell very far down the list. Barriers included administrative, external, and regulatory problems, with the lack of timely effectiveness, cost-effectiveness, and safety data as other considerations.

Independent Medical Case Review: Is There Common Ground Between Decisions for Populations and for Individual Patients?—Jeffrey C. Lerner, Ph.D., Vice President for Strategic Planning, ECRI

Technology assessments (as provided by ECRI) are used by providers or payers to underpin coverage decisions on a population basis. Dr. Lerner said ECRI wanted to combine health services research with clinical judgment.

The medical review program starts with the statement that an independent review is not just a second opinion; people want guarantees with their reviews. Objective technology assessments must be provided to the clinician-reviewer. The reviews must be evidence-based, and they must be timely. The reviewers must be screened for conflict of interest: the independent review organization selects the reviewer panel; reviewers must not treat the patient; and their credentials must be verified. The independent review organization must not be tied to a professional or patient constituency. It should not hold fiduciary responsibility, and it must be able to demonstrate integrity of processes through quality control and quality audits. Patients should be able to choose among independent review programs. Informed consent, where patients buy into the process of carrying out a medical review before it is initiated, is essential.

Shifting the Standards of Evidence: The Courts and Scientific Evidence—Richard D. Carter, J.D., Carter & Coleman

Mr. Carter explained that for much of the 20th century, the legal standard was the Frye test, which asked whether the evidence was generally accepted in the medical community. The standard changed with Daubert v. Merrell Dow Pharmaceuticals, a case brought on claims that birth defects were occurring in the offspring of mothers who took Bendectin.

In most cases, scientific evidence is used to answer a question before the court. Testing is important because it is at the heart of scientific methodology. The court also will consider whether a theory has been published or subjected to peer review. Theories published in respected scientific journals have their error rates published and their impact explained, which favors admissibility. General acceptance by the scientific community remains a potent factor supporting admissibility. However, scientific evidence based on experience or training may not be subject to the Daubert factors.


Session 3. Computer Support in Evidence-based Medicine

Moderator: J. Michael Fitzmaurice, Ph.D., AHCPR

Computer Support in Evidence-based Medicine: Bringing Guideline-Based Knowledge to the Point of Use—David F. Lobach, M.D., Ph.D., Assistant Professor, Department of Community and Family Medicine, Duke University Medical Center

Dr. Lobach stated that to improve health care outcomes, scientific evidence needs to be brought into the clinical setting; however, deliberate, systematic pathways are needed to change clinical practice. Clinical practice guidelines are a potentially effective vehicle for transporting evidence into practice. The guideline life cycle includes creation, dissemination, implementation, utilization, and evaluation and modification. Solutions to the problems of using guidelines to influence health care outcomes were identified.

The SIEGFRIED (System for Interactive Electronic Guidelines with Feedback and Resources for Instructional and Educational Development) project demonstrates the use of a computer system to bring guideline-based knowledge into the care process. The purpose of the project is to make clinical practice guidelines easily accessible at the appropriate time and location. Specific project goals are versatility, flexibility, and efficiency. SIEGFRIED is now in use at an orthopedic clinic and shortly will be used throughout an entire practice in the department of family medicine.

Computer Support in Evidence-based Medicine—Clement J. McDonald, M.D., Regenstrief Institute for Health Care

To build evidence-based guidelines, Dr. McDonald suggested focusing on well-illuminated areas, such as preventive care, where there are solid studies on a relatively small number of points. A guideline must be tested against real patient cases before implementation. For guideline development, especially for focused guidelines, the exact data needed to answer the specific questions should be collected.

A computer-stored medical record system, the Regenstrief Medical Record, is in use at Indiana University Medical Center and at over 30 sites in central Indiana via a Web browser over the Internet. The system includes thousands of reminders, and several studies have documented its success in improving medical care.

Computer-Supported Preventive Care for Children—Stephen M. Downs, M.D., M.S., Assistant Professor of Pediatrics, University of North Carolina

Dr. Downs emphasized that opportunities to provide preventive services are often missed. A computer-based tracking and reminder system, CHIP (Child Health Improvement Program), was developed at the University of North Carolina to help overcome these missed opportunities. CHIP prompts physicians to provide services at the point of care, facilitates rapid documentation of data collected and services provided, and automates electronic clinical data entry. CHIP contains a guideline database that encodes the complete childhood preventive services program in the form of physician prompts. The guidelines are based on the recommendations of the American Academy of Pediatrics (AAP), the Centers for Disease Control and Prevention (CDC), and the U.S. Preventive Services Task Force and are prioritized to present the most effective preventive services first, based on a measure of each service's expected benefit.

At each visit, the CHIP system compares the patient database to the guidelines database to identify all eligible services and generates a preventive service worksheet with the 10 highest priority prompts for the physician. Each prompt has a stem that provides an explanation or educational message intended for the physician or the parent. The stem is followed by one to six responses associated with check boxes that indicate data collected, clinical assessment, services provided, or referrals.
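The following is a minimal sketch of the comparison-and-prioritization step described above, assuming a hypothetical guideline schema (service name, eligible age range, expected-benefit score, prompt stem). It is illustrative only, not the actual CHIP database or rule set.

```python
from dataclasses import dataclass

@dataclass
class GuidelineRule:
    service: str
    min_age_months: int
    max_age_months: int
    expected_benefit: float   # hypothetical priority score for the service
    prompt_stem: str          # explanation or educational message for physician or parent

# Hypothetical guideline database entries (placeholders, not CHIP content).
GUIDELINES = [
    GuidelineRule("MMR vaccine, dose 1", 12, 15, 9.5, "MMR dose 1 is due between 12 and 15 months."),
    GuidelineRule("Lead screening", 9, 12, 7.0, "Screen for lead exposure by 12 months."),
    GuidelineRule("Car-seat counseling", 0, 48, 4.2, "Review car-seat use with the parent."),
]

def worksheet(age_months: int, already_done: set, max_prompts: int = 10):
    """Compare the patient record to the guideline database, keep eligible
    services not yet documented, and return the highest-priority prompts."""
    eligible = [r for r in GUIDELINES
                if r.min_age_months <= age_months <= r.max_age_months
                and r.service not in already_done]
    eligible.sort(key=lambda r: r.expected_benefit, reverse=True)
    return eligible[:max_prompts]

# Example: a 12-month-old whose lead screening is already documented.
for rule in worksheet(age_months=12, already_done={"Lead screening"}):
    print(f"[{rule.expected_benefit:>4}] {rule.service}: {rule.prompt_stem}")
```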

Several small studies with CHIP have demonstrated that a computer-based tracking and reminder system is acceptable to clinicians, increases the rate of preventive services delivery, improves the quality of preventive counseling, improves the effectiveness and efficiency of counseling, and captures useful clinical data for policy development.


Current as of January 1997
Internet Citation: Translating Evidence into Practice 1998 (continued, 3): Conference Summary. January 1997. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/news/events/other/translating-evidence-1998/trip98b.html