Chapter 15. Discussion

Assessing the Evidence for Context-Sensitive Effectiveness and Safety

In this ambitious 1-year project we assembled a Technical Expert Panel (TEP) of patient safety experts, methods experts, and other stakeholders; met with the TEP three times; performed numerous literature reviews; conducted five Internet surveys; and achieved consensus on the following items:

  1. The following five patient safety practices (PSPs) represent a diversity of important domains, including setting, regulation, the target of the PSP (individual clinician versus organizational change), and whether the targeted patient safety event is relatively common or rare, among other dimensions:
    a. Checklist to prevent catheter-related bloodstream infection.
    b. The Universal Protocol to prevent wrong procedure, wrong site, wrong person surgery.
    c. Computerized order entry/decision support system.
    d. Medication reconciliation.
    e. Interventions to prevent in-facility falls.

    Interpretation and significance: Subsequent efforts examining PSPs, by AHRQ and others, may wish to use this diverse and representative list of PSPs to help focus their work.

  2. Important evaluation questions for these PSPs are:
    a. What is the effectiveness of the PSP?
    b. What is the implementation experience of the PSP at individual institutions?
    c. What is the success of widespread adoption, spread, and sustainability of the PSP?

    Interpretation and significance: Evaluations of PSPs should explicitly consider these three questions. Journals should consider asking researchers to report on them separately. Also, implementers will want to assess their experience across all three questions.

  3. High-priority contexts for assessing context-sensitive effectiveness at individual institutions are:
    a. Structural organizational characteristics (such as size, location, financial status, and existing quality and safety infrastructure).
    b. External factors (such as regulatory requirements, the presence in the external environment of payments or penalties such as pay-for-performance or public reporting, national patient safety campaigns or collaboratives, or local sentinel patient safety events).
    c. Patient safety culture (not to be confused with the larger organizational culture), teamwork, and leadership at the level of the unit.
    d. Availability of implementation and management tools (such as staff education and training, presence of dedicated time for training, use of internal audit-and-feedback, presence of internal or external individuals responsible for the implementation, or degree of local tailoring of any intervention).

    Interpretation and significance: Context is considered important in determining the outcomes of PSPs. The study investigators and the TEP judged these four domains to be the most salient areas of context. This recommendation has broad implications for a variety of audiences. Researchers should be encouraged to measure and report on these contexts when describing a study of a PSP. Consumers of research will want to look for such reports, which will influence their interpretation of the study results and affect the applicability of the PSP to their setting. Accreditors and regulators should be reluctant to mandate adoption of a given PSP if it appears to be highly dependent on context; if they do mandate such a PSP, they should also provide guidance on how it might need to be modified to fit local contexts.

  4. There is insufficient evidence and expert opinion to recommend particular measures for patient safety culture, teamwork, or leadership. Given the plethora of existing measurement tools we identified and reviewed, our recommendation is to use whichever method seems most appropriate for the particular PSP being evaluated.

    a. For patient safety culture, the measurement methods with the most support were the AHRQ Patient Safety Culture surveys, the Safety Climate Scale, and the related Safety Climate Survey.
    b. For teamwork, the most support was given to the ICU Nurse-Physician Questionnaire; no other measure received more than half the votes of respondents.
    c. For leadership, the measures receiving the most support were the ICU Nurse-Physician Questionnaire, the Leadership Practice Inventory, and the Practice Environment Scale.

    Interpretation and significance: Because the four areas of context described under Point 3, above, are judged highest priority, it will be crucial to develop and use valid measures of them in PSP studies. Researchers' use of common validated instruments would better enable readers to evaluate whether published results are applicable to their own settings. The state of the science here is immature, and funders and researchers are encouraged to continue to develop standard measures of the key domains of context.

  5. The PSP field would advance by moving past the simple dichotomy of "controlled trials" versus "observational studies" when considering studies of effectiveness. Although controlled trials offer greater control of sources of systematic error, they often are not feasible in terms of time or resources. Controlled trials are also often impossible for PSPs that require large-scale organizational change or that target very rare events. Hence, strong evidence about the effectiveness and comparative effectiveness of PSPs can be developed using designs other than randomized controlled trials. However, PSP evaluators should be discouraged from drawing cause-and-effect conclusions from studies with a single pre- and post-intervention measure of outcome. More sophisticated designs, such as a time series or stepped-wedge design, are available and should be used when possible (an illustrative sketch of a stepped-wedge analysis follows this list).

    Interpretation and significance: Given the major push to improve patient safety and the focus on evidence-based practices (which are rapidly embedded in national standards such as those issued by the National Quality Forum, the Joint Commission, the Institute for Healthcare Improvement, and others), it will be crucial to develop standards for appropriate evaluations to answer key safety-oriented questions. The results above will help journal editors, funders, researchers, and implementers adopt robust study methods for PSPs, methods that most efficiently answer the key questions without undue bias.

  6. Regardless of the study design chosen, criteria for reporting on the following items in a PSP evaluation are necessary, both for understanding how the PSP worked at the study site and for judging whether it might work at other sites:
    a. An explicit description of the theory for the chosen intervention components and/or an explicit logic model for "why this PSP should work."
    b. A description of the PSP in sufficient detail that it can be replicated, including the expected change in staff roles.
    c. Measurement of contexts in the four domains described in Point 3, above.
    d. Details of the implementation process, what the actual effects were on staff roles, and how the implementation or the intervention changed over time.
    e. Assessment of the impact of the PSP on outcomes and possible unexpected effects. Including data on costs, when available, is desirable.
    f. For studies with multiple intervention sites, an assessment of the influence of context on intervention and implementation effectiveness (processes and clinical outcomes).

    Interpretation and significance: These criteria (items a-f) are deemed necessary for an understanding of PSP implementation and effectiveness and the degree to which these elements are sensitive to context. Future AHRQ-supported evaluations of PSP implementation should adhere to the criteria developed by this project. Only through repeated assessments and measurements will it be possible to determine the context-sensitivity of PSPs and to build the evidence base for which contexts are most important and how they should be measured and reported.
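
As a concrete illustration of Point 5, consider a hypothetical stepped-wedge evaluation of a falls-prevention PSP: hospital units cross over from usual care to the PSP at staggered times, so every unit contributes both control and intervention periods, and the intervention effect can be estimated while adjusting for secular trends rather than from a single pre-post comparison. The brief sketch below is illustrative only; the number of units, the simulated effect size, and the simple fixed-effects regression are assumptions chosen for clarity, not recommendations from this project.

```python
# Illustrative sketch only: a hypothetical stepped-wedge rollout of a
# falls-prevention PSP across 4 hospital units and 5 periods. Unit counts,
# effect sizes, and the simple fixed-effects analysis are assumptions made
# for illustration, not findings or methods from this report.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 4, 5

# Staggered crossover: unit u starts the PSP in period u + 1, so every unit
# contributes both control (0) and intervention (1) observations.
schedule = np.array([[1 if t > u else 0 for t in range(n_periods)]
                     for u in range(n_units)])

# Simulated fall rates (falls per 1,000 patient-days): unit-specific
# baselines, a mild secular trend, and an assumed true PSP effect of -1.5.
unit_baseline = rng.normal(7.0, 1.0, size=n_units)
period_trend = -0.1 * np.arange(n_periods)
true_effect = -1.5
rates = (unit_baseline[:, None] + period_trend[None, :]
         + true_effect * schedule + rng.normal(0.0, 0.5, (n_units, n_periods)))

# Fixed-effects regression: intercept, unit dummies, period dummies, and the
# intervention indicator, so the estimate adjusts for unit and time effects.
rows, y = [], []
for u in range(n_units):
    for t in range(n_periods):
        unit_dummies = [1 if u == k else 0 for k in range(1, n_units)]
        period_dummies = [1 if t == k else 0 for k in range(1, n_periods)]
        rows.append([1] + unit_dummies + period_dummies + [schedule[u, t]])
        y.append(rates[u, t])
X, y = np.asarray(rows, dtype=float), np.asarray(y)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated PSP effect: {coef[-1]:+.2f} falls per 1,000 patient-days")
```

The staggered rollout supplies concurrent control observations in every period, which is precisely what a single pre- and post-intervention measurement lacks.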

Limitations

The strengths of our work to arrive at these criteria include the broad-based expertise and viewpoints within the project team and the TEP, the grounding of our work in theory and in a practical assessment of the literature, and the careful, painstaking consensus-building through formal and informal group judgment processes.

Limitations of our work are mainly the limitations of the state of the science: there is no agreed-upon definition of "context"; the boundaries between context and the intervention are often arbitrary; the intervention and its implementation may often be considered a single construct; and there are insufficient data or expert opinion to specify in greater operational detail several of our important criteria, such as "description of the intervention in sufficient detail that it can be replicated" or even what constitutes an adequate description of the use of theory. Furthermore, as already discussed, there is insufficient evidence and opinion to recommend specific measures for patient safety culture, teamwork, and leadership, even though these are three important contexts believed to influence intervention effectiveness. Lastly, our discussions were anchored by consideration of five specific PSPs. Although they were chosen specifically to be diverse and representative, contextual factors may differ for other PSPs. Our results could also benefit from critical examination by an even wider-ranging group of patient safety stakeholders.
