Chapter 10. Description of
Ideal Evaluation Methods: Measuring and Describing the Implementation Context
Special contribution from Gery Ryan, Ph.D., RAND, Santa Monica, CA
Every attempt to replicate a patient safety practice (PSP) intervention will vary
from context to context. If we wish to compare PSPs across contexts, then
ideally we would like to be able to describe systematically each context to
determine how it is similar to and different from each other context. We also
need to determine to what degree (if any) these similarities and differences
might have an effect on the effectiveness of the PSP intervention being
studied. Measuring both the implementation process and how context influences
the process is part of an ideal and rigorous evaluation.
The context in which an intervention is being implemented can logically be divided into two main categories: (a) the intervention and how it was operationalized, and (b) the physical and organizational context in which the intervention was embedded.
The Intervention Context
Interventions can be described as someone doing something to someone else for a particular purpose. So, at a minimum, we need to clearly understand the following:
- Who are the interveners?
- How were they selected?
- What role do they play in the organization?
- What is their relationship with the intended intervenees (i.e., the targets of the interventions)?
- Who are the intended intervenees?
- How were they selected (if selected)?
- What role do they play in the organization?
- What is their role vis-à-vis patients?
- What specifically are the interveners doing to the intervenees?
- How consistent is the interveners' behavior across intervenees (fidelity)?
- What (if any) new technology or changes to physical plant, organizational structures, or policies and procedures were introduced?
- To what degree do intended intervenees vary in their exposure to these changes?
- How is the intervention expected to influence the behavior of the intervenee(s)?
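For evaluators who track such descriptions electronically, the questions above can be captured as a simple structured record so that every case reports the same fields. A minimal sketch in Python; the class and field names are illustrative assumptions, not part of any standard instrument:

```python
from dataclasses import dataclass

@dataclass
class InterventionDescription:
    """One record per intervention (or per intervention component).

    Field names simply mirror the descriptive questions in the text;
    they are hypothetical, not drawn from a validated instrument.
    """
    interveners: list[str]            # who is doing the intervening
    intervener_selection: str         # how the interveners were selected
    intervener_roles: str             # roles the interveners play in the organization
    intervenees: list[str]            # intended targets of the intervention
    intervenee_selection: str         # how (if at all) the targets were selected
    activities: str                   # what the interveners are doing to the intervenees
    fidelity: str                     # consistency of behavior across intervenees
    new_technology_or_policies: str   # changes to plant, structures, or policies
    exposure_variation: str           # how exposure to changes varies across intervenees
    expected_mechanism: str           # how the intervention should influence behavior

# Hypothetical record for a hand-hygiene training intervention
example = InterventionDescription(
    interveners=["infection-control nurse educators"],
    intervener_selection="volunteered from the infection control unit",
    intervener_roles="staff educators",
    intervenees=["front-line clinical staff"],
    intervenee_selection="all staff with direct patient contact",
    activities="classroom training on hand washing before and after patient contact",
    fidelity="same curriculum and trainer across all sessions",
    new_technology_or_policies="none",
    exposure_variation="night-shift staff received a shortened session",
    expected_mechanism="knowledge and reminders increase hand-washing rates",
)
```

Recording every case in the same structure keeps open-ended answers comparable across contexts, which is the point of the systematic description called for here.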
Take the example of an education-based intervention to train front-line staff to wash
their hands before and after contact with a patient. It would be ideal to know
who was conducting the training, who they trained, what kind of training was
provided, in what format, for how long, and how the training was expected to
affect the specific behaviors of those trained. In some cases, the intervener
may not seem obvious; for instance, administrators may change policies or may introduce new technologies or facilities. It is important to know whether it was the quality control department that introduced new sinks outside of exam rooms on its own, or the facilities department that made the decision because of new State regulations.
Often interventions are made up of multiple components. Each
component should be treated as a separate intervention and described in the
manner above, although the actions of the components may not be independent of one another. Knowing how each intervention component was operationalized is also important. At a minimum, we need to understand clearly the following:
- To what degree and how were expectations of the intervention made explicit to the intended interveners and intervenees?
- What (if any) kinds of positive and negative incentives (e.g., monetary, prestige, in-kind incentives, reprimands, or other disincentives) were used to motivate interveners or intervenees?
- How (if at all) was the performance of the interveners or intervenees monitored?
- What kinds of feedback or consequences (if any) did interveners or intervenees experience for meeting or not meeting what was expected of them?
These latter four questions essentially describe the
degree to which the expectations, incentives, monitoring procedures, and
resulting consequences of an intervention (or an intervention component) have been
made explicit to all the players involved. We find that evaluators often
overlook these four topics in describing PSP interventions, but they do play a
significant role in the intensity and success of a program.
The Physical and Organizational Environmental Context
To describe the physical and organizational context in which a
PSP is embedded, implementers and evaluators first need to describe clearly the
patient safety behavior that they are trying to improve and the range of people
who are involved. Four types of players or organizational units are important:
(1) the people directly responsible for ensuring that the patient safety
behaviors are carried out; (2) the people who are responsible for initiating
and carrying out the patient safety interventions; (3) the unit(s) within the
organization where the patient safety behavior of interest is located; and (4) policymakers (e.g., at the State level or the Joint Commission).
- What is the patient safety behavior of interest? [Note: patient safety behavior can refer to changes in individual, organizational, or system behavior.]
- Who are all the players responsible for this behavior?
- Who is responsible in the organization for establishing the standards and clear expectations regarding this particular patient safety behavior? For example, is this PSP something that is driven primarily from upper levels of the administration, or is it something that is championed primarily at the clinic level?
- What role does each player have in ensuring that the patient is not harmed? For
example, in PSPs that involve information flow: Who is responsible for
generating information that may or may not harm the patient? Who acts as a conduit
for passing along such information? Who is responsible for ensuring that such
information is accurate and remains accurate throughout the process? Note that
some players may have important roles for each of these activities. The roles
are somewhat different (and therefore have to be measured differently) for PSPs
that are more behaviorally focused, such as hand-washing. Here we want to know:
Who is engaged in the behavior of interest, and who is responsible for
monitoring (through direct or indirect means) that such appropriate behaviors
are indeed being carried out?
- How are these players affected (if at all) by the intervention? This should include
the players who are directly involved as part of the intervention; others that
the players, in turn, are expected to influence; and the players who may be
affected inadvertently. Take, for example, an intervention that provides
guidance to nurses on how they can help monitor doctors' hand-washing behavior.
The intervention directly affects nurses, and intentionally, but indirectly,
affects doctors. At the same time, the intervention may inadvertently affect
nurses' aides (in a positive manner) or may inadvertently affect the workload
of administrators should tensions between nurses and doctors increase.
- What consequences are there for the players if they do not adequately perform their
expected role vis-à-vis the patient safety behavior?
- Where is each of these players located in terms of the organizational structure?
- How does their performance on this patient safety issue affect others in their unit, division, or organization?
- Who are the players responsible for initiating and carrying out the intervention?
In any description of an intervention it is important to note explicitly who is
doing what to whom. Some PSP interventions are commissioned by administrators
and implemented by outsiders. Other interventions are championed, initiated,
and carried out by insiders, and there are many combinations in between.
- Where are the initiators and implementers located in the organization?
- What role do they have in the organization?
- What motivates them to participate?
- To what degree do they motivate others?
- In what unit within the organizational structure is the intervention located?
- How important is this patient safety issue to the leadership of this unit? There
are three fundamental ways to measure how important a PSP is to a unit. First,
we can ask units to compare this PSP directly to other issues they may be
addressing. The second and third approaches are a bit more indirect but much
more empirically grounded. Here we can describe the incentives and
disincentives of high and low performance for the unit itself, as well as the
incentives and disincentives the unit imposes on its members. For example, we
could ask directly whether the unit has an explicit list of priorities it wants to address and, if so, where this PSP falls on that list. More indirectly, we
can first ask how (if at all) the unit is incentivized to report and improve its performance on this PSP. For instance, is the unit required to monitor its performance and report the results to upper-level administrators? Is the
unit's performance compared with other units? Is the unit rewarded in any way
for improvement? We can also ask how (if at all) the unit tries to monitor or
incentivize its members to improve or achieve high performance. For instance,
how regularly are unit members monitored? Does PSP performance affect a unit member's career, salary, status, or prestige in any way?
- What (if any) consequences are there for the unit as a result of the success or
failure of the intervention? Here, we want to know more about the stakes that
surround the intervention itself. For example, is this an intervention being
watched by trustees and hospital administrators? Is this an intervention that
uses scarce resources that was chosen over other important priorities such that
failure might breed ill will? Or is this one of multiple interventions being
tried to improve PSP within the unit or hospital?
- What kinds of resources (e.g., financial or labor), if any, has the unit
contributed? For example, for interventions focused on training and education,
it would be useful to know how much time was spent by the trainers in preparing
and presenting the materials, and how much staff time was required of the trainees. Further, was this staff time part of the regular work cycle, or was
it considered extra work? If the latter, was this time compensated in any way?
For interventions that require the acquisition of any new equipment or
materials (e.g., sinks for washing hands, video cameras for monitoring
performance, carts for wheeling around equipment and supplies), it would be
useful to know the initial costs for purchasing and installing such equipment,
as well as the cost of maintaining the equipment over time.
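The indirect measures of unit commitment described above (incentives and disincentives for the unit itself, and those the unit imposes on its members) lend themselves to a simple tally of affirmative answers. A sketch in Python; the indicator names and the count-based scoring rule are illustrative assumptions, not a validated measure:

```python
# Hypothetical yes/no indicators drawn from the questions in the text.
unit_incentives = {
    "required_to_monitor_performance": True,
    "reports_results_to_administrators": True,
    "performance_compared_across_units": False,
    "rewarded_for_improvement": False,
}
member_incentives = {
    "members_regularly_monitored": True,
    "performance_affects_career_or_salary": False,
    "performance_affects_status_or_prestige": True,
}

def tally(indicators: dict[str, bool]) -> int:
    """Count affirmative indicators as a crude intensity score."""
    return sum(indicators.values())

print(tally(unit_incentives))    # 2
print(tally(member_incentives))  # 2
```

Even a crude tally like this makes the same indirect evidence comparable from one unit to the next, which the direct "how important is this to you?" question alone cannot guarantee.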
After outlining the key components of an intervention's context, the next
step is to ask how each component should be reported or measured, and to what
degree these reports or measures should be standardized.
Although it is clear that having standardized, close-ended instruments
would facilitate comparisons across cases and therefore make it easier to
conduct meta-analyses, currently there are few (if any) validated instruments
for measuring specific context components. Also, extreme caution is warranted
to ensure that whatever standardized instruments are ultimately selected can be appropriately applied across the full range of PSP contexts. Taking
instruments that have been developed for other purposes and simply applying
them to intervention contexts is a risky venture. Picking inappropriate
instruments (e.g., ones that are overly simplistic or complicated, lack face
validity, are unreliable, or fail to capture the full range of issues) will
make it more difficult (not easier) for researchers and decisionmakers alike to
fully understand the context in which an intervention occurred.
The most practical way to standardize context is to use a staged approach that moves from exploratory, open-ended reporting of context components to more systematized, close-ended reporting of the context.
The first stage would standardize which context components were to be reported, but not the specific instruments for how a particular context component was to be measured. In this stage,
investigators would be presented with a list of key context components (such as
the questions above) and asked to provide a description that is as detailed and
honest as possible for each. Here, researchers would describe each context
component in their own words, drawing on specific examples as appropriate. For
example, consider the question "How important is this patient safety issue to
the leadership of this unit (or units)?" We could imagine one case reporting
that the intervention was one of the key PSP projects championed by the unit's leadership—one
they held up as a quality improvement example and one in which the director of
the unit was personally involved and engaged. We could imagine another case
reporting that the intervention was recognized by unit leadership as one of many
quality improvement practices being implemented, and that the unit leadership
took notice and provided additional support once it became apparent that the
intervention was generating noticeable results.
By examining such descriptions across a range of different
settings and different kinds of PSP interventions, we would begin to understand
the range of ways in which "importance to leadership" could be potentially
measured as a context effect. By combining these empirical results with what is found in the literature on leadership effects, we could begin to develop pilot, close-ended instruments for measuring different aspects of importance to leadership.
The second stage
would then add such pilot instruments to the open-ended questions used in the
first stage. Combining the open- and close-ended reporting styles would allow us to see to what degree the close-ended instruments capture the important nuances of the context, as well as to test their reliability across contexts. Only after the new close-ended instruments have been shown to be valid and reliable should they be used exclusively as the standardized instrument for a particular context component.
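Testing the reliability of a piloted close-ended item across contexts usually means having independent raters score the same cases and computing a chance-corrected agreement statistic such as Cohen's kappa. A self-contained sketch; the ratings shown are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Two evaluators rate "importance to unit leadership" for ten cases
a = ["high", "high", "low", "medium", "high", "low", "low", "medium", "high", "low"]
b = ["high", "medium", "low", "medium", "high", "low", "medium", "medium", "high", "low"]
print(round(cohens_kappa(a, b), 2))  # 0.71
```

Kappa near 1 indicates the pilot instrument is being applied consistently; values near 0 suggest raters agree no more often than chance and the item needs revision.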