Chapter 7. The Importance of Theory

Assessing the Evidence for Context-Sensitive Effectiveness and Safety

"There is nothing so practical as a good theory"
Kurt Lewin (1952)

Handwashing by hospital staff is a patient safety practice widely advocated to reduce hospital infections. But how does handwashing work to reduce infections? On one level, it is because bacteria cause disease, and handwashing kills bacteria. On another level, a handwashing policy works because, and only to the extent that, staff regularly wash their hands between patients. Those in charge of implementing handwashing policies might come up with a range of ideas for achieving regular handwashing, such as installing motion-activated alcohol-based antibacterial dispensers at every room entrance in the hospital or instituting an educational campaign that emphasizes doorways as the reminder to wash your hands (e.g. "every time you pass through a door, wash your hands").

The above paragraph contains two types of "theories" regarding the effectiveness of handwashing. The first is a theory of how handwashing reduces hospital-acquired infections. The second is a theory of how to establish handwashing as business-as-usual in a particular organization. More generally, theories about patient safety practices (PSPs) may fall into two types: theories about how a given PSP results in better patient outcomes (sometimes called the "PSP action theory") and theories about how to establish and implement PSPs (the "PSP implementation theory"). Both types of theory are important. It would be difficult to promote a particular PSP without a rationale for why it might reduce harm, and knowing the PSP implementation theory enables decisionmakers and those responsible for implementation to understand the mechanisms of action at the study site and thus to devise ways to carry out similar actions and changes in their own situation. In practice, it may be difficult to reliably distinguish between the two types of theory (how a given PSP works and how to implement it), because PSPs are often multifaceted or embedded within more complex programs.

Changing provider and organizational behavior so that effective PSPs are applied in routine clinical practice is challenging, and the implementation of PSPs has only recently become a subject of research. Implementation success is known to vary. This variation may reflect differences in implementation methods, in implementation fidelity, and in the context in which implementation takes place. However, such studies of change rarely describe the implementation or the context, so they do not allow "generalization through theory," which is often a more efficient and appropriate method of generalization than replicating the study in many possible settings. Nor do they provide theories that might explain variations in outcomes. Without these descriptions or explanations, decisionmakers lack the information they need to judge what is required for successful implementation of a PSP in their service and how to implement it effectively.

One way forward is to carry out multiple studies of PSP implementation in many settings so that decisionmakers can learn from studies in settings similar to theirs. For example, audit and feedback is variably effective in changing provider behavior and clinical outcomes.1 The effects of this intervention may vary according to elements such as the content of feedback (e.g. comparative or not, anonymous or not), its intensity (e.g. monthly or annual), its method of delivery (by a peer or a non-peer), its duration (6, 12, or 24 months), and the context (intensive care or nursing home). Varying only five such elements produces 288 combinations, before accounting for the need for replication or the addition of co-interventions.2 A more realistic and efficient alternative is to use theories relevant to PSP implementation within evaluations, providing information that allows decisionmakers to better assess whether implementation is feasible and how best to implement the PSP. For example, an evaluation of the implementation of an electronic medical record at a hospital in Sweden was based on theories of implementation. The authors' finding that implementation success was associated with factors in Rogers' Diffusion of Innovations theory (plus additional factors postulated by previous research) strengthens our confidence in the usefulness of that theory and those factors for predicting successful implementation in other settings.3
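To make the combinatorial point concrete, the brief sketch below (in Python) enumerates variants of an audit-and-feedback intervention from a set of illustrative element levels. The levels are drawn from the examples above and are assumptions for illustration only; the resulting count depends entirely on the levels assumed and is not the basis of the 288 figure cited by the ICEBeRG group.

    from itertools import product

    # Illustrative (assumed) levels for design elements of an audit-and-feedback
    # intervention; the real design space is larger, and the count depends on
    # which levels are assumed.
    elements = {
        "comparison": ["comparative", "non-comparative"],
        "anonymity": ["anonymous", "named"],
        "intensity": ["monthly", "annual"],
        "delivery": ["peer", "non-peer"],
        "duration_months": [6, 12, 24],
        "context": ["intensive care", "nursing home"],
    }

    variants = list(product(*elements.values()))
    print(f"{len(variants)} candidate intervention designs")  # 2*2*2*2*3*2 = 96 with the levels above

    # Each variant would, in principle, need its own evaluation (before any
    # replication or co-interventions) if we relied on enumeration rather than
    # theory to learn which design features matter.
    for variant in variants[:3]:
        print(dict(zip(elements, variant)))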

What Is "the PSP Implementation Theory"?

The concept of the PSP implementation theory builds on related ideas such as the "logic model,"4 "treatment theory,"5 "program theory,"6 or "theory of change."7-10 A longer overview of theory in quality improvement has been published by Grol and colleagues.11

A logic model describes how an intervention is understood or intended to produce particular results.4 The logic model proposes a chain of events over time in cause-effect patterns, in which the dependent variable (event) at an earlier stage becomes the independent variable (causal event) for the next stage.12 "Treatment theory" describes "the process through which an intervention is expected to have its effects on a specified target population," in this case providers or organizations.5 This "small theory" is not a protocol that requires very specific prescribed actions. Instead, it is a set of principles that together are hypothesized to bring about change in the particular situation. These principles might be enacted in several different ways, but all of them would achieve the same "functions"13 and intermediate objectives in a chain of events that ultimately leads to improved patient outcomes.
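As a concrete, deliberately simplified illustration, the sketch below (in Python) writes the handwashing example as a logic-model chain in which the change produced at one stage becomes the cause at the next. The stage wording is assumed for illustration and is not taken from any published model.

    # An illustrative logic-model chain for the handwashing example.
    # Each pair is (cause, hypothesized effect); the effect at one stage
    # becomes the cause at the next. Stage wording is assumed.
    logic_model = [
        ("Install alcohol-gel dispensers at every room entrance",
         "Handwashing between patients is quick and convenient"),
        ("Handwashing between patients is quick and convenient",
         "Staff regularly wash their hands between patients"),
        ("Staff regularly wash their hands between patients",
         "Fewer bacteria are carried from patient to patient"),
        ("Fewer bacteria are carried from patient to patient",
         "Hospital-acquired infection rates fall"),
    ]

    # Check that the hypothesized chain is connected: the dependent variable
    # of an earlier stage is the independent variable of the next stage.
    for (_, effect), (next_cause, _) in zip(logic_model, logic_model[1:]):
        assert effect == next_cause, "broken link in the hypothesized chain"

    for cause, effect in logic_model:
        print(f"{cause}\n  -> {effect}")

A protocol would prescribe the exact actions at each stage; a treatment theory of this kind only fixes the functions each stage must achieve.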

In the field of program evaluation, program theory is defined as the "conceptual basis" of the program: "Comprehensive evaluations address the theory by carefully defining the components of the program and their relationships and then examining the implementation of these components and how they mediate outcomes."14 Experimental designs use "theory" in the sense that the evaluation is designed as a prospective test of a hypothesis. In contrast, in theory-informed program evaluation, the program theory is either a prospective model of how the components lead to the intended results, or a retrospective explanation of how or why the program progressed as it did.6,15,16

A "theory of change" is usually used to describe how those responsible for implementation understand an intervention to work.7-9 It may be explicit, or it may exist as a theory in the sense of being unspoken assumptions or beliefs. Dixon-Woods et al10 describe a theory of change as identifying "plans for change and how and why those plans are likely to work, and indicates the assumptions and principles that allow outcomes to be attributed to particular activities." This is different from an explanation derived from empirical research on possible influences on outcomes.

These types of theories focus on the intervention and conceptualize it as a chain of events, often in a linear sequence, that leads through intermediate changes (including changes in provider and organizational behavior) to final results (clinical or cost outcomes). More sophisticated variants, often relevant to combined or "bundled" safety interventions, view the implementation as a number of interacting components with synergistic, system-level effects.

A wider conceptualization of the "PSP implementation theory" goes beyond the intervention and its causal chain to include an understanding of contextual influences and how they help or hinder implementation of the PSP. A contemporary example of this conceptualization is the realist evaluation idea of context-mechanism-outcome configurations: the theory that an intervention "triggers" action only in a particular context that is "primed" to be responsive to the intervention and in which the intervention can "take hold."17 The details of how to design and carry out studies to build these more complex, context-sensitive intervention theories are still being developed. An important difference from experimental designs is that factors other than the program are assessed for their influence on the program outcomes; that is, the program is treated as only one of a number of independent variables that may affect the dependent variables.
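To show what a context-mechanism-outcome configuration adds over a simple causal chain, the sketch below (in Python) encodes one hypothetical configuration for the hand-hygiene example as a structured record. The field names and example values are assumptions for illustration, not Pawson's notation.

    from dataclasses import dataclass

    @dataclass
    class CMOConfiguration:
        """A realist-evaluation style hypothesis: in context C, the intervention
        triggers mechanism M, producing outcome O. Values are illustrative."""
        context: str    # setting features that let the intervention "take hold"
        mechanism: str  # how the intervention is hypothesized to change behavior
        outcome: str    # the resulting change in behavior or patient outcomes

    hand_hygiene_cmo = CMOConfiguration(
        context="Ward with dispensers at each doorway and visible leadership support",
        mechanism="The doorway acts as a cue that prompts recall of the handwashing norm",
        outcome="Higher hand-hygiene compliance between patients",
    )

    # An evaluation built on such configurations asks not only "did compliance
    # rise?" but "in which contexts, and through which mechanism?"
    print(hand_hygiene_cmo)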

In summary, the "PSP implementation theory" builds on related concepts such as logic models, treatment theory, and program theory. In practice, use of any of these concepts would improve our current understanding of PSP implementations.

Why Do We Need To Know the "PSP Implementation Theory"?

Systematic reviews of interventions to improve the quality and safety of care consistently indicate that most interventions, across different categories, are effective some of the time but not all of the time, and that intervention effects range from none to large.18 However, very few such reviews are explicit about the underlying PSP action or implementation theories, let alone use them to explore causes of variation in effectiveness. Many studies of interventions to promote safety categorize features of interventions, targeted practices, and contexts on a superficial basis, e.g. computerized decision support systems (CDSS), prescribing, and urban hospitals, respectively.2 Such classification systems are descriptive typologies rather than theoretically meaningful groupings. They may be as unhelpful or misleading as classifying drugs according to whether they are taken orally or intravenously, or by the color and size of the pill.19,20 It is not surprising that systematic reviews based on such categories or typologies raise more questions than they answer and struggle to extract generalizable lessons about how interventions achieve their effects.21 For example, a CDSS can work in a number of ways, such as by increasing knowledge of safe practice, reinforcing motivation, or prompting recall, and its effects may vary according to which types of clinical behavior are targeted, whether it is used with co-interventions, and so forth. The mechanisms by which more complex interventions work, such as those to reduce falls or rapid response teams, may be both more variable and more sensitive to contextual features.

Therefore, evaluations of PSP implementation need to address the processes by which interventions interact with contextual features to produce outcomes. For example, RCTs ideally should be accompanied by parallel process evaluations that assess the changes in processes, both intended and unintended, that may have contributed to changes in outcomes.22

Improving Safety Research with "PSP Implementation Theory"

Theory has not commonly been used in the field of quality and safety research.23 In a review of 235 implementation studies, only 53 used theory in any way, and only 14 were explicitly theory-based.24 Similarly, most reports of PSP evaluations do not provide the theory or logic model underpinning the intervention. Even for the five representative PSPs chosen for this project, which are among the most commonly studied, our review of published evaluations found only two articles that even partially reported a theory for why the PSP should work.

Theory can guide or be applied to patient safety research in a range of ways, including the following.

Explaining clinical and organizational behavior. Just as in clinical practice, it is important to diagnose the causes of adherence or non-adherence to recommended practice before intervening. For example, theories of human error suggest several causes of discrepancies between the intended plan and the actual action, such as slips and lapses that lead to incorrect execution of an action sequence.25 Recognition of such human limitations has led to better equipment design (e.g. alarms within anesthesia machines).26

Selection or tailoring of patient safety interventions for a given problem and context. Previous research or practitioner reports can be used to create hypotheses or a provisional model of which actions may lead to which intermediate changes and which context factors may be important for implementation. Researchers can draw on this provisional implementation theory to decide which data to gather, and how to operationalize variables, so as to describe implementation actions and intermediate changes as well as which aspects of context were or were not helpful to those actions. For example, McAteer et al.27 used psychological theory to develop an intervention to increase providers' hand-hygiene behavior for evaluation in a cluster randomized trial. This involved a review of effective behavior change techniques to inform the theoretical approach, development of intervention components with clinicians, and focus groups with the targeted provider groups. It may be that customization of interventions is more necessary than we appreciate.

Evaluating implementation and mechanisms of action. Theory can be used to predict or evaluate the process of implementation, to distinguish between action theory failure and implementation failure, to identify mechanisms of action, to shed light on whether the PSP worked (or not) as hypothesized or by an alternative means, and to identify unanticipated outcomes or unintended consequences. For example, Byng et al.28 conducted a qualitative interview study alongside an RCT of a multifaceted intervention to improve the care of people with long-term mental illness. They used a realist evaluation approach to delineate which aspects of the intervention had the greatest impact.

It should be borne in mind that theory is not enough by itself to justify the implementation of a PSP. For example, a program theory may strongly suggest that an intervention works as predicted, but 'triangulation' via experimental or quasi-experimental data may fail to support this.29

Conclusions

The "PSP implementation theory" is a representation of how actions lead to changes in provider and organizational behavior as a result of the PSP and, ultimately, affect patient outcomes. Yet, theoretical perspectives have, hitherto, seldom been incorporated into PSP evaluations. This lack of description and explanation of the assumptions or logic behind the PSP makes it more difficult for others to reproduce or adapt the PSP. Future evaluations should be theory-driven, in order to enhance generalizability and help build a cumulative understanding of the nature of change.

References for Chapter 7

  1. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: Effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews 2006; Issue 2, Art. No: CD000259. DOI: 10.1002/14651858.CD000259.pub2.
  2. The Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implement Sci 2006; 1:4.
  3. Øvretveit J, Scott T, Rundall TG, et al. Improving quality through effective implementation of information technology in healthcare. Int J Qual Health Care 2007; 19(5):259-66.
  4. Rogers PJ. Logic models. In: Mathison S, ed. Encyclopedia of evaluation. Beverly Hills, CA: Sage Publications; 2005, p. 232.
  5. Lipsey MW. Theory as method: Small theories of treatments. In: Sechrest LB, Scott AG, eds. Understanding causes and generalizing about them. New Directions for Program Evaluation. San Francisco, CA: Jossey-Bass; 1993, pp. 5-38.
  6. Bickman L, ed. Using program theory in program evaluation. New Directions for Program Evaluation, Vol. 33. San Francisco, CA: Jossey-Bass; 1987.
  7. Sullivan H, Stewart M. Who owns the theory of change? Evaluation 2006; 12:179-99.
  8. Mason P, Barnes M. Constructing theories of change: Methods and sources. Evaluation 2007; 13:151-70.
  9. Connell JP, Klem AC. You can get there from here: Using a theory of change approach to plan urban education reform. J Educ Psychol Consult 2000; 11:93-120.
  10. Dixon-Woods M, Tarrant C, Willars J, Suokas A. How will it work? A qualitative study of strategic stakeholders' accounts of a patient safety initiative. Qual Saf Health Care 2010; 19:74-8.
  11. Grol RPTM, Bosch MC, Hulscher MEJL, et al. Planning and studying improvement in patient care: The use of theoretical perspectives. Milbank Q 2007; 85(1):93-138.
  12. Wholey J. Evaluation and effective public management. Boston, MA: Little, Brown; 1983.
  13. Hawe P, Shiell A, Riley T. Complex interventions: How out of control can a randomised controlled trial be? Br Med J 2004; 328:1561-3.
  14. Bickman L. The science of quality improvement (letter to the editor). JAMA 2008; 300(4):391.
  15. Weiss C. How can theory-based evaluation make greater headway? Eval Rev 1997; 21:501-8.
  16. Weiss C. Theory-based evaluation: Past, present, and future. In: Progress and future directions in evaluation: Perspectives on theory, practice, and methods. New Directions for Evaluation, No. 76. San Francisco, CA: Jossey-Bass; 1997, pp. 41-55.
  17. Pawson R. Evidence-based policy: A realist perspective. London: Sage; 2006.
  18. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess 2004; 8.
  19. Shojania KG, Grimshaw JM. Evidence-based quality improvement: The state of the science. Health Aff 2005; 24:138-50.
  20. Foy R, Francis J, Johnston M, et al. The development of a theory-based intervention to promote appropriate disclosure of a diagnosis of dementia. BMC Health Serv Res 2007; 7:207.
  21. Foy R, Eccles M, Jamtvedt G, et al. What do we know about how to do audit and feedback? Pitfalls in applying evidence from a systematic review. BMC Health Serv Res 2005; 5:50.
  22. Oakley A, Strange V, Bonell C, et al. Process evaluation in randomised controlled trials of complex interventions. Br Med J 2006; 332:413-6.
  23. Walshe K. Understanding what works—and why—in quality improvement: The need for theory-driven evaluation. Int J Qual Health Care 2007; 19(2):57-9.
  24. Davies P, Walker AE, Grimshaw JM. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implement Sci 2010; 5:14.
  25. Parker D, Lawton R. Psychological contribution to the understanding of adverse events in health care. Qual Saf Health Care 2003; 12:453-7.
  26. Thompson PW. Safer design of anaesthetic machines. Br J Anaesth 1987; 59:913.
  27. McAteer J, Stone S, Roberts J, et al. Use of performance feedback to increase healthcare worker hand-hygiene behaviour. J Hosp Infect 2007; 66(3):291-2.
  28. Byng R, Norman I, Redfern S, Jones R. Exposing the key functions of a complex intervention for shared care in mental health: Case study of a process evaluation. BMC Health Serv Res 2008; 8:274.
  29. Brown C, Hofer T, Johal A, et al. An epistemology of patient safety research: A framework for study design and interpretation. One size does not fit all. Qual Saf Health Care 2008; 17:178-81.