Chapter 4. Track Performance With Metrics
The team must employ metrics to fully appreciate the scope of hospital-acquired venous thromboembolism (VTE) and to determine how well its approach to reducing VTE is working. An aim statement can serve as a benchmark for the intervention's success, and run charts provide a visual representation of progress.
Key Metric 1: Prevalence of Appropriate Venous Thromboembolism Prophylaxis
Though Figure 3 was used earlier to understand care delivery, it can now be used to measure care delivery, as shown in Figure 4. Specifically, this diagram will assist in selecting metrics—meaningful and measurable steps the team can use to track performance over time. In most instances the most telling metric is the prevalence of appropriate prophylaxis. Not only does it have the most important causal relationship to the main clinical endpoint, hospital-acquired VTE, but it is also a sensitive indicator of how well the various care delivery steps come together.
Using the prevalence of appropriate VTE prophylaxis as one of the team's two key metrics also offers something that can be measured regularly and reliably. Set up daily, weekly, or monthly data collection for this metric (see Key Metric 2, below). This data flow offers a reliable way to track performance of the changed care delivery system. What makes the clinical endpoint of hospital-acquired VTE unsuitable as a lone metric for performance tracking is that events are too infrequent and are often subclinical or too delayed in onset to provide timely, useful feedback.
It should now be clear how the VTE protocol serves as the main ingredient not just of the improvement intervention but also of the measurement system that tracks performance.
Key Metric 2: Incidence of Hospital-Acquired Venous Thromboembolism
The team cares most about how well the steps of care delivery come together to prevent hospital-acquired VTE, the main clinical endpoint or outcome. Clearly, the incidence of hospital-acquired VTE must be one of the team's key metrics. A common definition for "hospital-acquired deep vein thrombosis or pulmonary embolism" would be a clot first discovered during the course of hospitalization or discovered within 30 days of a prior hospitalization. Table 3 shows various methods for trying to capture this metric in a useful way. Each has its own advantages in terms of accuracy and efficiency.
Table 3. Methods for Defining Hospital-Acquired Venous Thromboembolism
| Method | Description |
| --- | --- |
| Method 1 (Minimum) | Track the total number of deep vein thrombosis (DVT) and pulmonary embolism (PE) diagnosis codes in the medical center. (Table 1 in Chapter 1 provides codes for DVT and PE.) Divide that number by 2 to estimate the number that are hospital acquired; the literature suggests that approximately half of all cases of DVT and PE diagnosed in the hospital are hospital acquired. Alternatively, use all venous thromboembolism (VTE) codes listed as a secondary diagnosis as a surrogate for hospital-acquired VTE. |
| Method 2 (Better) | Perform Method 1, then pull charts post-discharge and retrospectively determine whether each DVT or PE was hospital or community acquired. |
| Method 3 (Better Yet) | Perform Method 2, then retrospectively determine whether patients with hospital-acquired VTE were on appropriate prophylaxis when the VTE developed. |
| Method 4 (Best) | Prospectively capture new cases of DVT or PE as they occur by setting up a reporting system with the radiology or vascular departments. |
Method 1 is very simple and can be done with minimal effort. Method 3 introduces the concept that the team can actually get more from a chart review than just a classification of hospital-acquired versus community-acquired VTE. The VTE can now also be classified as "hospital-acquired while on appropriate prophylaxis" versus "hospital-acquired while not on appropriate prophylaxis."
By using Method 3, the team can plot the incidence of preventable hospital-acquired VTE. This subset of all hospital-acquired VTE events communicates the most about the entire VTE prevention effort. Method 3 also allows surveillance for other factors that lead to the formation of a hospital-acquired clot. For example, was the patient sedated or restrained? Did the patient have a central-line-associated clot, and if so, was the line really needed at the time the clot formed? Given the time and resources, the team could do a mini-root cause analysis to generate other potential strategies to prevent hospital-acquired VTE.
Method 4 offers all the benefits of the other methods, with the added advantage that chart review is much easier while the patient is still in the hospital. The review can also be more efficient if the team can query a digital imaging system to screen all pertinent imaging studies regularly.
In the 350-bed facility at University of California, San Diego Medical Center (UCSD), a nurse or nurse practitioner screens all pertinent studies from the prior day, identifies all new hospital-acquired clots, and completes a thorough chart review on all new hospital-acquired VTE. The process takes less than an hour each weekday. It can be done efficiently by using automated search criteria if the radiology department uses a suitable digital imaging system. The team should try to create a flow of data that pulls up all pertinent diagnostic studies, complete with their reports, at the click of a button. Depending on the limitations of the radiology information system, the team may come up with another method that is more useful and expedient.
Once the team has defined "hospital-acquired VTE" and figured out how to find the cases, it has another decision to make. Should it simply track the raw number of hospital-acquired VTE, or should it control for the number of patients or patient-days? Controlling for patient-days at risk for VTE adds a little more work, but it reduces some of the noise in the data by controlling for the probability that more hospital-acquired VTE events occur with higher hospital occupancy. At UCSD, for example, each month the team calculates the total number of patient-days for adult inpatients in the hospital for more than 48 hours and uses that as the denominator. The team uses the total number of hospital-acquired VTE events as the numerator. This helped UCSD generate a specific aim, a concept discussed later in this chapter.
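For teams that want to automate the monthly calculation, the rate described above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical numbers, not actual UCSD data; the function name is an invention for this example.

```python
def vte_rate_per_1000_patient_days(vte_events, patient_days):
    """Hospital-acquired VTE events per 1,000 qualifying patient-days.

    Numerator: hospital-acquired VTE events for the month.
    Denominator: patient-days for adult inpatients hospitalized > 48 hours.
    """
    if patient_days <= 0:
        raise ValueError("patient_days must be positive")
    return 1000.0 * vte_events / patient_days

# Example: 6 events over 5,000 qualifying patient-days
rate = vte_rate_per_1000_patient_days(6, 5000)
print(round(rate, 2))  # 1.2
```

Normalizing by patient-days this way is what lets months with very different census levels be compared on the same chart.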
Another option to consider, if the team has the capacity to look at all newly diagnosed events of DVT and PE in the hospital, is to track the number of days between hospital-acquired VTE events (or between potentially preventable hospital-acquired VTE events). This allows the team to chart days between events: each event becomes a point on the x-axis, and the number of days since the previous event appears on the y-axis.
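The y-values for a days-between-events chart are simple to derive from a list of diagnosis dates. The sketch below uses made-up dates purely for illustration.

```python
from datetime import date

def days_between_events(event_dates):
    """Given diagnosis dates of hospital-acquired VTE events, return the
    days elapsed between consecutive events (the y-values of a
    days-between-events chart)."""
    dates = sorted(event_dates)
    return [(later - earlier).days for earlier, later in zip(dates, dates[1:])]

# Hypothetical events; a growing gap between events suggests improvement.
events = [date(2009, 1, 5), date(2009, 1, 20), date(2009, 3, 1)]
print(days_between_events(events))  # [15, 40]
```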
While data collection can be costly in terms of time and money, the focus should remain on improvement rather than measurement. To track performance regularly and to advance plan-do-study-act (PDSA) cycles, the team needs just enough data to know whether changes are leading to improvement. A sampling strategy that uses 20 randomly selected patient charts per month can be statistically appropriate as well as relatively quick and easy. To make the time commitment more manageable, five charts can be audited each week with the results rolled up into monthly reports. The team should designate an individual or two to collect, collate, plot, and manage the data. Many improvement projects falter or die simply because data collection is inadequate.
The team should also choose between sampling active inpatients or recent discharges. The former approach may offer several real-time advantages. Providers can be alerted to prophylaxis oversights, which might create moments to improve care as well as educate staff. In addition, by sampling active inpatients, insights into process barriers and valid reasons to amend the new process may emerge more readily. Self-coding and scannable forms can lessen the burden of data entry.
Available data collection resources in any given hospital may dictate methods and definitions. Whatever method is chosen, consistency and usefulness are critical. It is usually helpful to pilot the metric definitions and steps in data collection to learn about and solve stumbling blocks. In much the same way as the team performs cycles of PDSA for care delivery improvements, it should go through several cycles of PDSA to perfect the performance tracking system. For example, to refine the VTE protocol and develop it as a valid audit tool, the team can apply the VTE protocol to audit 10 to 20 patients, using three independent reviewers. Questions that should be answered include:
- Did the reviewers arrive at the same risk level?
- Did they agree on absence or presence of contraindications to pharmacologic prophylaxis?
- Did they share the same conclusion about whether the patient was receiving adequate prophylaxis?
There are several questions that sequential pilots of the audit tool should help answer.
- How much time is acceptable in peri-operative or trauma settings for a patient not to be on pharmacologic prophylaxis? (The readings in Appendix C can suggest some parameters.)
- What are the acceptable versus preferred VTE prophylaxis options for each level of VTE risk? Realize that when auditing, there will be VTE prophylaxis options that make sense to consider as adequate, even though they are not listed as recommended in the VTE protocol. For example, the auditor may accept 7,500 units of unfractionated heparin subcutaneously every 12 hours as acceptable prophylaxis for the patient who is at moderate risk for VTE, even if it is not listed as an option on the VTE protocol because of the lack of prepackaged syringes or the absence of clinical trials supporting that regimen.
- What patients will be included in the sampling? Depending on the scope of the initiative, it may make sense to exclude:
– Patients receiving obstetric care.
– Patients being seen on the psychiatric or behavioral health unit.
– Patients hospitalized less than 24 or 48 hours.
– Pediatric patients.
- Which data collection strategy should the team use for performance tracking? The team could look at a representative sample of patients at baseline and then repeat with a representative sample after introducing the VTE protocol. This before-after approach is simple, but the data can be misleading. Day-to-day variation in prevalence of VTE prophylaxis can be as wide as 35 percent. This variation means multiple sampling events are necessary to support accurate conclusions. Rather than relying on a handful of data points, collect at least 20 data points before the intervention and as many as needed after it to determine the new steady-state prevalence of prophylaxis. Results can be tracked and trended in run charts.
Several common sampling strategies follow.
Convenience sampling. Reviewers select patients because they are available on the ward, but otherwise there is no particular selection process. Convenience samples categorized by ward or service are a common model.
Random sampling. All patients in a representative population are subject to selection. UCSD Medical Center uses this model. All patients over 18 who have been in house for more than 24 hours are assigned a number, and a random number generator (a free plug-in application for Microsoft® Excel®) produces a list of 10 patients to review that day. The data collector begins the audit with the first patient on the randomly generated list. This approach has the advantage of giving an accurate picture of the demographics and VTE risk in the institution. The main disadvantage is the possibility that some small but important patient group will receive only a few audits.
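The source describes this step with an Excel plug-in, but the same draw can be scripted. The sketch below is a hypothetical illustration using Python's standard library; the identifiers and census are invented for the example.

```python
import random

def draw_daily_audit_sample(eligible_patient_ids, n=10, seed=None):
    """Randomly select up to n patients from the eligible census
    (e.g., adults in house for more than 24 hours) without replacement."""
    rng = random.Random(seed)  # fixed seed only for reproducible demos
    pool = list(eligible_patient_ids)
    return rng.sample(pool, k=min(n, len(pool)))

# Hypothetical census of 200 eligible inpatients
census = [f"MRN{i:04d}" for i in range(1, 201)]
audit_list = draw_daily_audit_sample(census, n=10, seed=42)
print(len(audit_list))  # 10
```

Sampling without replacement mirrors the audit process: no patient should appear twice on the same day's list.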
Stratified random sampling. Patients from several important patient groups are randomly sampled (e.g., medical versus surgical versus orthopedic, or critical care versus noncritical care). The advantage of this method is the ability to target patient groups at higher risk for VTE or with other criteria important to the VTE prevention effort.
Before piloting and finalizing an audit tool, it will be important to pilot and finalize the VTE protocol. Feedback from the VTE protocol pilot test may change the audit form.
Data Reporting Using Run Charts
At every meeting, the team should review specific aims and present its progress towards the aims. The best way to do this is with a graph. When presenting performance within the institution's reporting structure, graphical formats, such as run charts or statistical process control (SPC) charts, will be more effective than denser tabular formats.
Run charts are easy to make and are usually adequate for graphing improvement data in order to follow performance over time. Compared to tables of data, run charts offer a quicker picture of how an intervention is working relative to a baseline. The table and run chart in Figure 5 represent data from UCSD. The run chart makes it easy to appreciate the dramatic trends in performance over time.
Run charts should be annotated along the x-axis where new interventions or events occur. This addition can make it easier to see the effects of different stages of an intervention or to subtract the effect of known secular trends. For run charts, ubiquitous software (Excel® or any of several free run chart applications) is available, and no statistical expertise is needed.
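A run chart is conventionally centered on the median of the baseline data, and post-intervention points are then compared against that center line. The sketch below shows this bookkeeping with hypothetical monthly prevalence figures; the numbers are illustrative only.

```python
from statistics import median

# Hypothetical monthly prevalence of appropriate prophylaxis (percent)
baseline = [52, 58, 49, 61, 55, 57, 50, 54, 60, 53, 56, 59]  # pre-intervention
post = [70, 74, 78, 81, 85, 88]                               # post-intervention

# Run charts are typically centered on the baseline median
center_line = median(baseline)
points_above = sum(1 for p in post if p > center_line)
print(center_line, points_above)  # 55.5 6
```

A sustained run of post-intervention points on one side of the baseline median is the kind of signal a run chart makes easy to spot; formal run-chart rules refine this judgment.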
For quality improvement projects, monthly plots are usually adequate, although when testing new or revised improvement strategies via PDSA, weekly plots may be desirable to see effects quickly.
SPC charts are a special kind of run chart that help the team judge whether fluctuations reflect noise and expected variation within an unchanged system, or real change indicating that the underlying process has shifted. A full discussion of SPC charts is beyond the scope of this publication. Improvement teams can learn more about the technique at http://reliability.sandia.gov/Manuf_Statistics/Statistical_Process_Control/statistical_process_control.html.
Transform General Goals Into a Metric-Specific Aim Statement
In Chapter 1, the team set a purposefully ambitious general goal to give a broad sense of the breakthrough success the team wanted to achieve. In the current chapter, the team defined key metrics. With these metrics, the team can commit to accomplishing something specific and formalize that commitment in an aim statement.
Good aim statements articulate a stretch goal that is specific, measurable, time limited, and applicable to a particular population of patients. Figure 4 shows an intermediate outcome (sometimes called a "process measure") and a clinical endpoint. For example:
- Intermediate Outcome: "95 percent of patients admitted to medical units 5G and 6G will be on appropriate VTE prophylaxis as defined by our protocol by October 31, 2009."
- Clinical Endpoint: "Reduce the rate of hospital-acquired VTE from the baseline of 1.2 events per 1,000 patient-days by half to 0.6 per 1,000 patient-days by October 31, 2009."
Referring to the preceding examples, the team should now be able to write an aim statement for its chosen metrics.
Now that the team has an aim statement for its key performance metrics, it is ready to plan changes to the system. But what if the improvement changes lead to unintended consequences for patients or the hospital? How will the team know? The team should consider monitoring potential areas of concern to detect any detrimental effects of improvement changes. These additional metrics are called "balancing measures." For example, the team may decide to track the incidence of heparin-induced thrombocytopenia, bleeding episodes, or the cost of using more pharmacologic prophylaxis as balancing measures.