Making All Research Results Publically Available: The Cry of Systematic Reviewers

Slide Presentation from the AHRQ 2011 Annual Conference

On September 20, 2011, David Moher made this presentation at the 2011 Annual Conference.


Slide 1

Making All Research Results Publically Available: The Cry of Systematic Reviewers

Ottawa Hospital Research Institute/Institut de recherche de l'Hôpital d'Ottawa (OHRI/IRHO)

Slide 2

Outline of Talk

  • Definitions.
  • History.
  • Who contributes to publication bias.
  • Impact of publication bias.
  • Funnel plots and interpreting them.
  • Outcome reporting bias.
  • Critical appraisal of systematic reviews.

Slide 3

Memory Jog

  • Grey literature:
    • "That which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers".
  • Publication bias:
    • "Investigators, reviewers, and editors submit or accept manuscripts for publication based on the direction or strength of the study findings".

Slide 4

Publication Bias

  • 1959:
    • 294 reports from 4 leading psychology journals.
    • 97.3% reported statistically significant positive results.
  • 1986-1987:
    • 456 reports from 3 leading psychology journals and 3 healthcare journals (NEJM, Am J Epi, Am J Pub Health).
    • 97% of the psychology journal reports were positive.
    • 85% of the medical journal reports were positive.

Slide 5

Publication Bias: Contributors (1)

  • Researchers:
    • Are likely the major source.
  • Peer reviewers:
    • Experimental evidence has shown that reviewers are highly influenced by the direction and strength of results in a submitted manuscript.
  • Editors.

Slide 6

Publication Bias: Contributors (2)

  • Editors!

Letter from the editor for a major environmental/toxicological journal to the author of a submitted manuscript:

Unfortunately, we are not able to publish this manuscript. The manuscript is very well written and the study was well documented. Unfortunately, the negative results translates into a minimal contribution to the field. We encourage you to continue your work in this area and we will be glad to consider additional manuscripts that you may prepare in the future.

Slide 7

Consequences of Non-publication Bias (1)

  • Management (survival) of ovarian cancer:
    • Results of 13 published trials.
    • Results of 16 trials (including 3 registered only).
  • Pooling the results of published trials only:
    • A statistically significant 16% advantage favouring combination chemotherapy over an alkylating agent.
  • Pooling all 16 trials:
    • Non-significant advantage of 5%.
  • Provides clinicians and patients alike with differing estimates as to the purported effectiveness of a cancer intervention.
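
The reversal above can be sketched with a toy inverse-variance (fixed-effect) pooling. The numbers below are invented for illustration, not the actual ovarian cancer trial data: leaving out unfavourable unpublished trials pulls the pooled estimate toward apparent benefit.

```python
def pool_fixed_effect(log_effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate of log effect sizes:
    each study is weighted by 1/variance, so precise studies count more."""
    weights = [1.0 / v for v in variances]
    weighted_sum = sum(w * y for w, y in zip(weights, log_effects))
    return weighted_sum / sum(weights)

# Hypothetical (log hazard ratio, variance) pairs; a negative value
# favours combination chemotherapy. These are illustrative only.
published   = [(-0.25, 0.04), (-0.15, 0.05), (-0.20, 0.03)]
unpublished = [( 0.05, 0.06), ( 0.10, 0.05), ( 0.00, 0.04)]

pub_only = pool_fixed_effect(*zip(*published))
all_pool = pool_fixed_effect(*zip(*(published + unpublished)))
# pub_only shows a clearly stronger apparent benefit than all_pool
```

With these made-up inputs, pooling only the published trials yields a markedly more favourable estimate than pooling everything, mirroring the 16% versus 5% contrast on this slide.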

Slide 8

Consequences of Non-publication Bias (2)

  • Methods:
    • Compared the results of 365 published trials with 102 'grey' trials included in 33 systematic reviews.
  • Excluding the results of grey literature exaggerated the treatment effectiveness by 15%, on average.
  • Grey literature accounts for approximately 25% of studies included in systematic reviews.
  • The 102 grey literature randomized trials included more than 23,000 participants.

Slide 9

Publication Bias: Impact

  • Consequences for systematic reviews and meta-analyses:
    • Biased summary estimates that falsely appear positive, precise, and accurate.
  • Consequences for guideline and health policy development:
    • Practice may be influenced (and even mandated) by false conclusions.

Slide 10

Publication Bias: Detection

  • The Funnel Plot:
    • A measure of the treatment effect size plotted against a measure of the study's sample size or precision.
    • The precision of the estimation of the true effect increases with larger sample sizes.
    • Funnel plots investigate whether studies with little precision (small studies) give different results from studies with greater precision (larger studies).
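
A minimal simulation of the idea behind the funnel plot, using invented trial sizes and an invented true effect: each trial contributes an (effect estimate, standard error) point, and because the standard error shrinks as sample size grows, imprecise small trials scatter widely while precise large trials cluster, producing the funnel shape.

```python
import math
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.3  # hypothetical true treatment effect

def simulate_trial(n):
    """One hypothetical trial: the standard error of the effect
    estimate shrinks as 1/sqrt(n), so small trials scatter widely."""
    se = 1.0 / math.sqrt(n)
    estimate = random.gauss(TRUE_EFFECT, se)
    return estimate, se

# Funnel-plot coordinates: (effect estimate, standard error) per trial
trials = [simulate_trial(n) for n in [20] * 30 + [500] * 30]

small = [est for est, se in trials if se > 0.1]   # imprecise trials
large = [est for est, se in trials if se <= 0.1]  # precise trials

# Small trials spread far more around the true effect than large ones
spread_small = statistics.stdev(small)
spread_large = statistics.stdev(large)
```

Plotting `estimate` against `se` (inverted axis) for these points would show the widening scatter toward the bottom of the funnel that the slide describes.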

Slide 11

Funnel Plots

Image: A funnel plot graph is shown.

Slide 12

Interpreting Funnel Plots

  • In one study, researchers correctly identified bias from funnel plots only 53% of the time.

Slide 13

Funnel Plot Asymmetry

  • Causes:
    • Selection bias:
      • Publication bias (one of many reasons!).
      • Language bias.
      • Citation bias.
      • Multiple publication bias.
    • True heterogeneity:
      • Intensity of intervention.
      • Characteristics of the patient population.
    • Methodological quality.
    • Outcome measure and analysis.
    • Chance.

Slide 14

Publication Bias: Time Lag Bias

  • Statistically significant positive studies tend to be published before null studies.
  • A systematic review is a cross-section cut in time:
    • Trials with positive results can therefore dominate the literature and introduce bias for several years.

Slide 15

Recommendations for Examining and Interpreting Funnel Plot Asymmetry in Meta-analyses of Randomised Trials

  • As a rule of thumb, tests for funnel plot asymmetry should not be used when there are fewer than 10 studies:
    • Test power is usually too low to distinguish chance from real asymmetry.
  • When there is evidence of funnel plot asymmetry, publication bias is only one possible explanation.
  • As far as possible, a testing strategy should be specified in advance. Applying and reporting many tests is discouraged: if more than one test is used, all test results should be reported.
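
One commonly pre-specified asymmetry test (among several) is Egger's regression: regress each study's standardized effect on its precision, and an intercept far from zero suggests small-study effects. A bare-bones sketch with invented data, not from any real meta-analysis, where small studies (large standard errors) show inflated effects:

```python
def egger_intercept(effects, ses):
    """Egger's regression test: fit standardized effect (effect/SE)
    against precision (1/SE) by ordinary least squares and return the
    intercept. An intercept far from zero suggests funnel-plot
    asymmetry; per the slide, avoid the test with < 10 studies."""
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1.0 / s for s in ses]                 # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

# Hypothetical asymmetric meta-analysis of 10 studies: the imprecise
# studies (large SEs) report the biggest effects.
effects = [0.90, 0.80, 0.70, 0.60, 0.45, 0.40, 0.35, 0.32, 0.31, 0.30]
ses     = [0.50, 0.45, 0.40, 0.35, 0.25, 0.20, 0.15, 0.12, 0.10, 0.08]
intercept = egger_intercept(effects, ses)  # clearly positive here
```

In line with the recommendations above, such a result is only a prompt for visual inspection and judgement, not proof of publication bias.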

Slide 16

Recommendations for Examining and Interpreting Funnel Plot Asymmetry in Meta-analyses of Randomised Trials (continued)

  • Test results should be interpreted in the context of visual inspection of funnel plots:
    • For example, are there studies with markedly different intervention effect estimates or studies that are highly influential in the asymmetry test? Even if an asymmetry test is statistically significant, publication bias can probably be excluded if small studies tend to lead to lower estimates of benefit than larger studies or if there are no studies with significant results.

Slide 17

(Intra-study) Publication Bias

  • Selective reporting bias.
  • Outcome reporting bias:
    • Typically statistically positive.
    • Selected by investigators (post hoc).
  • Data analyses reporting bias.

Slide 18

Some Salient Results

  • Nearly two-thirds of trials had a change in at least one primary outcome between the protocol and the publication.
  • Statistically significant outcomes were more likely to be reported than non-significant ones.

Slide 19

Do Researchers Have a Social Obligation to Study Participants?

  • Other realms of life:
    • Airline industry.
    • Hotels.

Slide 20

The SSRI Story

  • Selective serotonin reuptake inhibitors (SSRIs) are a commonly used class of antidepressants for treating major depression in children.
  • There are concerns that use of these drugs increases the risk of suicide.
  • Results of a systematic review:
    • Published data support use of Paxil.
    • Adding unpublished data tips the harm/benefit balance and does not support use of the drug.

Slide 21

PRISMA Statement

  • Guideline for reporting systematic reviews and meta-analyses.
  • Item 15:
    • "Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting within studies)..."

Slide 22

Rationale for Reporting Assessment of Bias Across Studies

  • Reviewers should explore the possibility that the available data are biased.
  • They may examine results from the available studies for clues that suggest there may be:
    • Missing studies (publication bias).
    • Missing data from the included studies (selective reporting bias).
Page last reviewed March 2012
Internet Citation: Making All Research Results Publically Available: The Cry of Systematic Reviewers. March 2012. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/news/events/conference/2011/moher/index.html