
Chapter 3. How to Mistake-Proof the Design

The Design Change Imperative

Donald Berwick of the Institute for Healthcare Improvement (IHI) argues that improving patient safety requires changes in the design of health care systems:

...We are human and humans err. Despite outrage, despite grief, despite experience, despite our best efforts, despite our deepest wishes, we are born fallible and will remain so. Being careful helps, but it brings us nowhere near perfection... The remedy is in changing systems of work. The remedy is in design. The goal should be extreme safety. I believe we should be as safe in our hospitals as we are in our homes. But we cannot reach that goal through exhortation, censure, outrage, and shame. We can reach it only by commitment to change, so that normal, human errors can be made irrelevant to outcome, continually found, and skillfully mitigated.1

Berwick is not the only proponent of employing design as the chief approach to improving patient safety. The British Department of Health and The Design Council issued a joint report in the early stages of their patient safety program, calling for "A system-wide design-led approach to tackling patient safety in the National Health Service."2,a


a. The British Department of Health and The Design Council's book, Designing for Patient Safety, is available at http://www.designcouncil.org.uk/resources/assets/assets/pdf/Publications/Design_for_Patient_Safety.pdf.


FMEA Implicitly Requires Changes in Design

In Chapter 2, we reviewed the basic steps involved in performing failure modes and effects analysis (FMEA). With the exception of JCAHO,3 the various versions of FMEA do not explicitly state that design changes are required. JCAHO's FMEA Step 6 is "redesign the process." In versions of FMEA that do not explicitly require them, design changes generally, and mistake-proofing particularly, are implicit requirements of FMEA.

Figure 3.1 illustrates the FMEA form used by the auto industry.4 Failure modes and effects are prioritized according to three 1- to 10-point scales: severity, likelihood, and detectability (a term used in the automotive version of FMEA that serves the purpose of "ensuring that the hazard is obvious"). The results are multiplied to create an overall assessment called the Risk Priority Number, or RPN. In Figure 3.1, two columns are labeled RPN. The first is an initial priority. The second is a recalculation after taking action. The idea is that any worthwhile action should improve the second RPN.
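The RPN arithmetic can be sketched in a few lines. The failure mode ratings and the improvement shown are invented for illustration; only the 1- to 10-point scales and the multiplication are taken from the FMEA form itself:

```python
def rpn(severity: int, likelihood: int, detectability: int) -> int:
    """Risk Priority Number: the product of three 1- to 10-point ratings."""
    for score in (severity, likelihood, detectability):
        assert 1 <= score <= 10, "each rating must be on the 1-10 scale"
    return severity * likelihood * detectability

# Initial assessment of a (hypothetical) failure mode.
initial = rpn(severity=8, likelihood=6, detectability=7)   # 336

# Recalculation after taking action: likelihood and detectability improve;
# severity usually stays the same unless the design itself changes.
revised = rpn(severity=8, likelihood=2, detectability=3)   # 48

# Any worthwhile action should improve (lower) the second RPN.
assert revised < initial
print(initial, revised)
```

Note that severity is typically unchanged by added checks; only a design change that alters the consequences of failure lowers it, which is one reason FMEA pushes toward redesign.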

In the health care version of FMEA (HFMEA™), proposed by the Department of Veterans Affairs (VA) National Center for Patient Safety,5 the assessments of severity, likelihood, and detectability are accomplished by employing a decision flowchart instead of merely rating them on a 1- to 10-point scale.

The decision flowchart is shown in Figure 3.2.

The flowchart determines whether action is required (proceed) or whether existing systems are adequate and no action is required (stop). HFMEA™ does not explicitly state the follow-up step that is explicit in the automotive version of FMEA: after the recommended actions have been taken, the improvements should eliminate the need for action the next time the HFMEA™ analysis is revisited; otherwise, further action is called for. The decision after taking action should be "stop (no action required)."

This implies that the actions must accomplish at least one of the following three objectives in order to be considered effective and to avoid the implied iterations of the FMEA process:

  1. Remove a single-point weakness.
  2. Create one or more effective control measures.
  3. Make the hazard so obvious that control measures are not needed.
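As a rough sketch, the three objectives amount to a stop/proceed check. The actual HFMEA™ flowchart in Figure 3.2 applies additional criteria (criticality, for example); the function below is only an illustration of how meeting any one objective ends the iteration:

```python
def hfmea_decision(single_point_weakness_removed: bool,
                   effective_control_in_place: bool,
                   hazard_obvious: bool) -> str:
    """Return 'stop' (no further action required) if the actions taken
    meet at least one of the three objectives, else 'proceed' (iterate)."""
    if (single_point_weakness_removed
            or effective_control_in_place
            or hazard_obvious):
        return "stop"
    return "proceed"

# An effective control measure alone is sufficient to stop iterating.
print(hfmea_decision(False, True, False))
```
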

Removing Single-Point Weaknesses

A single-point weakness is a point in the process where a single failure, by itself, can lead to harm. Removing a single-point weakness means creating redundancy in the system; fault trees explicitly recommend redundancy as an approach to improving system reliability. Redundancy is created by increasing the number of systems or individuals involved in the process, and everyday life offers many examples; one is shown in Figure 3.3. Having more back-up systems or more checks (double, triple, or more) increases the probability that the process will proceed correctly. Too often in health care, this redundancy is created by having a second trained health care professional double-check the first to ensure that the process has been performed correctly. However, at the time of this writing, many health care organizations are experiencing a nursing shortage; using several nurses to repeat a task for the sake of redundancy is too costly, if not impossible, to implement.

An alternative that would not require more staff would be to involve patients themselves, where possible, or a concerned family member or friend. In order for this alternative to work, medical processes must be rendered transparent to untrained individuals; errors must be made obvious. Creating transparency in health care's jargon-rich, complex processes can be very challenging. Implementing visual systems (5Ss, go to Chapter 1, Figure 1.7) and providing the clear process cues suggested by Norman6 could increase transparency significantly. Another option would be to employ mistake-proofing devices or design features in the error-detection process.

Effective Control Measures

In their article explaining HFMEA™, DeRosier et al.5 cite the pin indexing system for medical gases as an example of an effective control measure. They state:

If your hospital does not use universal adaptors (for regulators), and all the connectors in the building have the correct pin index, the pin indexing would be an effective control measure; it would prevent the incorrect gas from being connected to the regulator.

An example of pin indexing is shown in Figure 3.4. The implication is that an effective control measure stops the process when an error occurs. This is mistake-proofing. Effective control measures are design changes that prevent or stop processes from continuing when an error has occurred by introducing a process failure (Figure 3.5).

Make Hazards Obvious

If hazards are made obvious, HFMEA™ does not require further action. Color-coding is a common approach to making mistakes more obvious. The face of the gauge in Figure 3.6A uses color changes to indicate the range of correct settings.

Other everyday examples of making aspects of a system more obvious are shown in Figure 3.6B-F. Figure 3.7A provides an example of a design to make errors more obvious and Figure 3.7B a secondary safeguard—the warning label.

Design changes are required actions in response to FMEA. The answer to the question of what the design changes should look like is an odd one. To create patient safety through mistake-proofing, the thing to design into processes is failure. The design changes should be carefully designed process failures.

Experts Agree on Designing Failures

Findings from engineering, cognitive psychology, quality management, and medicine all agree that to avoid human error or its impact, it is necessary to create a process that ensures failure.

Henry Petroski,7 a noted engineering author, states:

We rely on failure of all kinds being designed into many of the products we use every day, and we have come to depend upon things failing at the right time to protect our health and safety... Failure is a relative concept, and we encounter it daily in more frequent and broad ranging ways than is generally realized. And that is a good thing, for certain types of desirable failures, those designed to happen, are ones that engineers want to succeed at effecting. We often thus encourage one mode of failure to obviate a less desirable mode.

This approach is supported by recommendations from psychology as well. Norman6 recommends the installation of "forcing functions." Forcing functions create "situations in which the actions are constrained so that failure at one stage prevents the next step from happening." Forcing functions are attractive because they rely "upon properties of the physical world for their operation; no special training is necessary."6
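A software analogue of a forcing function can make the idea concrete. The infusion-pump scenario below is entirely hypothetical and invented for illustration; what it shows is Norman's pattern, where an incomplete earlier stage constrains the next step so that the process fails to proceed rather than proceeding wrongly:

```python
class InfusionPumpProgramming:
    """Hypothetical forcing function: the pump cannot be started until
    both the drug and the patient weight have been entered, so a skipped
    step stops the process instead of delivering a wrong dose."""

    def __init__(self):
        self.drug = None
        self.weight_kg = None

    def enter_drug(self, name: str):
        self.drug = name

    def enter_weight(self, kg: float):
        self.weight_kg = kg

    def start(self) -> str:
        if self.drug is None or self.weight_kg is None:
            # The designed, benign failure: the process halts here.
            raise RuntimeError("cannot start: programming incomplete")
        return f"infusing {self.drug} for {self.weight_kg} kg patient"

pump = InfusionPumpProgramming()
pump.enter_drug("heparin")
try:
    pump.start()            # weight never entered, so the process stops
except RuntimeError as e:
    print(e)
```
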

From a quality management perspective, Shigeo Shingo recommends that "when abnormalities occur, shut down the machines or lock clamps to halt operations, thereby preventing the occurrence of serial defects." Source inspections "are based on the idea of discovering errors in conditions that give rise to defects and performing feedback and action at the error stage so as to keep those errors from turning into defects."8

This approach is not unheard of in medicine. Croteau and Schyve9 discuss this approach:

A process that is designed to detect failure and to interrupt the process flow is preferable to a process that continues on in spite of the failure...We should favor a process that can, by design, respond automatically to a failure by reverting to a predetermined (usually safe) default mode.

Petroski,7 Norman,6 Shingo,8 and Croteau and Schyve9 each approach the problem from a different perspective and discipline, yet their prescriptions are identical. To reduce human error, make design changes that prevent or stop processes from continuing when an error has occurred. When a process stops, it is a process failure. Under other circumstances, having the process stop would be undesirable; here, however, a process stoppage can be a far more benign failure than allowing a medical error to progress. Stopping the process will not always be the appropriate action: when an error is itself sufficiently benign, stopping the process may not be necessary.

The term "benign" is used here in a relative sense to mean favorable or propitious.10 It is possible that what is perceived to be a benign failure in some circumstances might actually be perceived to be very undesirable in others. An example will illustrate this point.

In the decade prior to the U.S. Civil War (1853), Elisha Otis demonstrated his safety elevator at the New York World's Fair.11 The novel feature of the safety elevator was the Otis Elevator Brake (Chapter 1, Figure 1.4). This device would prevent the elevator from falling when the cable broke. What was the failure? The brake was designed to get the elevator stuck, usually between floors. Being stuck in an elevator between floors is a very undesirable failure. However, when compared with falling to one's death at the bottom of an elevator shaft, the option of being stuck between floors becomes a much more palatable alternative.


Multiple Fault Trees

Multiple fault trees can be seen in the flowchart of mistake-proofing tools (Chapter 2, Figure 2.7). Multiple fault trees are used to help design benign failures. The traditional use of fault trees is to carefully define the current situation, determine causes of undesirable failure, and identify the resources required to generate that undesirable failure. The new, second use of multiple fault trees is to determine ways to cause or create benign failures and use them as preventive measures. The use of multiple fault trees provides insights into the causes of desired failures and identifies the "resources" required to generate them. The design of the process must be changed so that the failure associated with the undesirable event causes the more benign event (desired failure) to occur instead. The objective is to move failures from the harmful event fault tree to the benign failure fault tree.

Figures 3.8 and 3.9 show both fault trees before and after the failures are moved.

In the "before" picture (Figure 3.8), the harmful event has three minimal cut sets. The first set, containing failures 1 and 2 in a redundant system (AND), has an overall probability of 0.01. The other two sets, containing single failures 3 and 4, respectively (OR), have no redundancy and have a probability of occurrence of 0.05 each. Suppose that Cause 4 is selected as the first target for improvement. Assume that a benign failure that would adequately safeguard patient safety, because it would prevent the harmful event from occurring, has been identified. The fault tree for the benign failure is shown in its initial state on the right side of Figure 3.8. The fault tree shows substantial redundancy. Three failures must occur simultaneously in order for a failure to occur, a situation that has a 0.001 chance of happening.

Figure 3.9 shows the fault trees after the mistake-proofing has been accomplished. Cause 4 now appears in the fault tree of the benign failure. The probability of the harmful event occurring has been reduced by approximately 45 percent. However, as the diagram shows, more can be done. Mistake-proofing Cause 3 by moving it to another fault tree should also be considered. If it can be moved successfully, the probability of the harmful event would be reduced to 0.01. Perhaps 0.01 is still unacceptably high. It is, however, a substantial improvement. To further reduce the probability of the harmful event would require that either Cause 1 or Cause 2 be moved to a more benign fault tree.
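The arithmetic behind Figures 3.8 and 3.9 can be reproduced directly from the cut-set probabilities given above, assuming independent causes and the rare-event approximation (an OR gate's probability is approximately the sum of its minimal cut-set probabilities; an AND gate's is the product). The cause names are placeholders:

```python
# Minimal cut sets of the harmful event, with their probabilities.
before = {
    ("cause1", "cause2"): 0.01,  # redundant pair behind an AND gate
    ("cause3",): 0.05,           # single-point causes behind an OR gate
    ("cause4",): 0.05,
}

p_before = sum(before.values())                      # 0.11

# Mistake-proofing moves Cause 4 into the benign fault tree.
after = {cs: p for cs, p in before.items() if cs != ("cause4",)}
p_after = sum(after.values())                        # 0.06

reduction = (p_before - p_after) / p_before
print(f"harmful event: {p_before:.2f} -> {p_after:.2f} "
      f"({reduction:.0%} reduction)")                # approximately 45%

# Moving Cause 3 as well would leave only the redundant pair: 0.01.
```
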

The fault trees in Figures 3.8 and 3.9 illustrate the logic of the changes sought through mistake-proofing. Go to Chapter 4 for a more technically precise version of these fault trees, which models mistake-proofing devices that may be less than perfectly reliable.

Fault trees enable process designers to anticipate how the process will behave after mistake-proofing has been implemented. After mistake-proofing, the benign failure is far more likely to occur than before. When the benign failure occurs, the staff member using the process must troubleshoot it to determine the reason for the failure. The benign failure in Figure 3.9 is nearly ideal for use with one of the causes. If the benign failure shown occurs, the most likely cause is Cause 4. That cause can be confirmed quickly, and the process can be reset and restarted as needed.

It is important to ensure that a single benign event does not become the failure mode for too many mistake-proofing devices. For example, Causes 1 through 4 should not all be moved to the same benign failure. When multiple causes can stop the process in the same way, the process can become difficult to troubleshoot. The result is that team members may become uncertain about how to re-start the process after a failure.

Effective mistake-proofing should involve a diverse set of fault trees that result in a variety of benign failures.

While designing a mistake-proofed cooktop (Figure 3.10), the designer could consider that the presence of a cooking pan could be detected by its mass or weight, so that a burner is deactivated when no pan is sitting on it. Additional features could include a small light near each burner to indicate that the burner is on, and arranging the burner control knobs to correspond to the physical arrangement of the burners, a natural mapping.
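The pan-detection interlock just described can be sketched as a simple condition. The mass threshold is an invented illustrative value, not a specification:

```python
PAN_MASS_THRESHOLD_G = 200  # assumed minimum mass that counts as a pan

def burner_active(knob_on: bool, detected_mass_g: float) -> bool:
    """Sketch of the interlock: the burner heats only when its knob is
    on AND a pan is detected by weight on that burner."""
    return knob_on and detected_mass_g >= PAN_MASS_THRESHOLD_G

def indicator_light(knob_on: bool) -> bool:
    """The per-burner light makes the burner's state obvious."""
    return knob_on

# Turning the knob with no pan present yields a benign failure: no heat,
# but the light still warns that the knob was left on.
assert not burner_active(knob_on=True, detected_mass_g=0)
assert indicator_light(knob_on=True)
assert burner_active(knob_on=True, detected_mass_g=950)
print("interlock ok")
```
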

Although fault trees are central to this discussion, other failure analysis tools can be employed and will yield similar insights (relatively detailed FMEA,4,5 anticipatory failure determination,12 current reality trees,13 and causal trees,14 for example).

Fault trees have two advantages:

  1. The application of the method to a situation is straightforward.
  2. The failure is displayed simply and in substantial detail.

In addition to understanding fault trees, team members need information, resources, and creativity:

  1. Team members must be privy to all information normally generated by the enabling tools.
  2. Team members must have access to detailed knowledge of the medical processes involved.

Only then will the team be able to link the causes of undesirable failures to the outcomes of benign failures in ways that:

  1. Are inexpensive.
  2. Have minimal impact on the existing process.


Designing Mistake-Proofing Devices that Cause Benign Failures

There are eight primary steps involved in designing mistake-proofing devices.

Step 1. Select an undesirable failure mode for further analysis. In order to make an informed decision about which failure mode to analyze, the RPN or the criticality number of the failure mode must have been determined in the course of performing FMEA or FMECA.

Step 2. Review FMEA findings and brainstorm solutions (Figure 3.11). Most existing mistake-proofing has been done without the aid of a formal process. This is also where designers should search for existing solutions in medicine or elsewhere. The examples in Chapter 6 include comparisons of solutions from medical, industrial, and everyday life. Many exploit the same ideas (Go to Chapter 6, Examples 6.7 and 6.13). Common sense, creativity, and adapting existing examples are often enough to solve the problem. If not, continue to Step 3.

Step 3. Create a detailed fault tree of the undesirable failure mode (Figure 3.12). This step involves the traditional use of fault tree analysis. Detailed knowledge of the process and of the cause-and-effect relationships discovered during root cause analysis and FMEA provides a thorough understanding of how and why the failure mode occurs. The result of this step is a list of the minimal cut sets and their contents. Since the severity and detectability of the failure mode could be the same for all of the minimal cut sets, the probability of occurrence will most likely be the deciding factor in determining which causes to focus on initially.
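Enumerating cut sets from a small tree can be done mechanically. The sketch below handles a tree of nested AND/OR gates like the generic one in Figure 3.8; the event names are placeholders, and for arbitrary trees a further minimization pass would be needed (for this simple tree the sets produced are already minimal):

```python
def cut_sets(node):
    """Return the cut sets of a fault tree given as nested tuples:
    ('OR', ...), ('AND', ...), or a basic-event string."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, *children = node
    child_sets = [cut_sets(child) for child in children]
    if gate == "OR":                      # any one child's cut set suffices
        return [cs for sets in child_sets for cs in sets]
    if gate == "AND":                     # need one cut set from each child
        combined = [frozenset()]
        for sets in child_sets:
            combined = [a | b for a in combined for b in sets]
        return combined
    raise ValueError(f"unknown gate: {gate}")

# The generic structure of Figure 3.8's harmful event.
tree = ("OR", ("AND", "cause1", "cause2"), "cause3", "cause4")
print([sorted(cs) for cs in cut_sets(tree)])
# -> [['cause1', 'cause2'], ['cause3'], ['cause4']]
```
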

Step 4. Select a benign failure mode(s) that would be preferred to the undesirable failure. The tools that precede multiple fault trees in Figure 2.7 once again provide information about other failure modes and their severity. Ideally, the benign failure alone should be sufficient to stop the process; the failure, which would normally lead to the undesirable event, causes the benign failure instead.

Step 5. Using a detailed fault tree, identify "resources" available to create the benign failure (Figure 3.13). These resources, basic events at the bottom of the benign fault tree, can be employed deliberately to cause the benign failure to occur.

Step 6. Generate alternative mistake-proofing device designs that will create the benign failure (Figure 3.14). This step requires individual creativity and problem-solving skills. Creativity is not always valued by organizations and may be scarce. If brainstorming alone does not result in solutions, employ creativity training, methodologies, and facilitation tools like TRIZ (described in Chapter 2).

Step 7. Consider alternative approaches to designed failures (Figure 3.15). Some processes have very few resources. If creativity tools do not provide adequate options for causing benign process failures, consider using cues to increase the likelihood of correct process execution. Changing focus is another option to consider when benign failures are not available.

If you cannot solve the problem, change it into one that is solvable. Changing focus means, essentially, exploring changes to the larger system or a smaller subsystem that alter the nature of the problem so that it is more easily solved. For example, change to a computerized physician order entry (CPOE) system instead of trying to mistake-proof handwritten prescriptions. There are very few resources available to stop the processes associated with handwritten paper documents. Software, on the other hand, can thoroughly check inputs and easily stop the process.
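A toy sketch shows how software can check inputs and stop the process in a way paper cannot. The formulary and its dose limits are invented for illustration and bear no relation to any real CPOE product or clinical reference:

```python
# Hypothetical formulary with maximum daily doses (illustrative values only).
FORMULARY_MAX_DAILY_MG = {"metformin": 2550, "lisinopril": 80}

def validate_order(drug: str, daily_dose_mg: float) -> list:
    """Return a list of problems; a nonempty list stops the order."""
    problems = []
    if drug not in FORMULARY_MAX_DAILY_MG:
        problems.append(f"{drug!r} is not in the formulary")
    elif daily_dose_mg > FORMULARY_MAX_DAILY_MG[drug]:
        problems.append(f"{daily_dose_mg} mg/day exceeds the "
                        f"{FORMULARY_MAX_DAILY_MG[drug]} mg/day maximum")
    return problems

def submit_order(drug: str, daily_dose_mg: float) -> str:
    problems = validate_order(drug, daily_dose_mg)
    if problems:
        # The designed process failure: the order cannot proceed.
        raise ValueError("; ".join(problems))
    return "order accepted"

print(submit_order("metformin", 1000))
try:
    submit_order("metformin", 5000)
except ValueError as e:
    print("order stopped:", e)
```
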

Often, responses to events appropriately include multiple actions, thereby creating their own redundancy.

Of course, there are no guarantees that a failure mode will be solved. If little or no progress is made after completing Steps 1 through 7, it may be prudent to give up. Some inventions needed to solve patient safety concerns are not, at present, technically or financially feasible. Although it may be necessary to give up, the worst-case scenario is the continuation of the current process with no change at all.

Step 8. Implement a solution. A full discussion of solution implementation is beyond the scope of this book. A rigorous look at implementation issues is available in the change management literature. Some basic tasks usually required as part of the implementation are listed below:

  • Select a design from among the solution alternatives:
    • Forecast or model the device's effectiveness.
    • Estimate implementation costs.
    • Assess the training needs and possible cultural resistance.
    • Calculate the solution priority number (SPN) as described in Chapter 2.
    • Assess any negative impact on the process.
    • Explore and identify secondary problems (side effects or new concerns raised by the device).
    • Assess device reliability.
  • Create and test the prototype design:
    • Find sources who can fabricate, assemble, and install custom devices, or find manufacturers willing to make design changes (more in Chapter 9).
    • Resolve technical issues of implementation.
    • Undertake clinical trials (because of the stakes involved, medical implementations will need to be much more deliberate than those in other industries).
  • Trial implementation:
    • Resolve nontechnical and organizational issues of implementation.
    • Draft a maintenance plan.
    • Draft process documentation.
  • Broad implementation leads to:
    • Consensus building.
    • Organizational change.

The eight steps to creating mistake-proofing devices can be initiated by a root cause analysis or FMEA team, an organization executive, a quality manager, or a risk manager. An interdisciplinary team of 6 to 10 individuals should execute the process steps. An existing FMEA or root cause analysis team is ideal because its members would already be familiar with the failure mode. Help and support from others with creative, inventive, or technical abilities may be required during the later stages of the process. In the application example that follows, a mistake-proofing device is designed using the eight steps just discussed.


An Application Example

Step 1. Determine the undesirable failure mode. For this example, consider the undesirable event of a patient injured by a fall during a transfer to or from a standard (nonpowered) wheelchair. Berg, Hines, and Allen15 report that 37.9 percent of wheelchair users fell at least once in the past 12 months. Of those who fell, 46.7 percent were injured as a result of their fall. Tideiksaar,16 Calder and Kirby,17 and Ummat and Kirby,18 confirm that transfers to and from wheelchairs are common causes of injuries.

Step 2. FMEA is well-known in health care, and detailed instruction on its implementation is available. It may be that a well-done FMEA would be enough to generate ideas for how to change the process design so that falls are prevented during transfers to or from wheelchairs. In order to demonstrate subsequent steps, though, assume that FMEA and brainstorming did not generate an adequate number of possible solutions, so the process continues to Step 3.

Step 3. This step calls for the creation of a detailed fault tree used to understand and make sense of how failures occur. In this case, the information from the literature on wheelchair injuries (cited in Step 1) and information from previously created FMEAs would inform the creation of the fault tree. The fault tree in Figure 3.16 shows the undesirable event—patient falls during transfer to or from a standard wheelchair. Each level of the tree provides additional detail into why the event occurred. The fault trees in this example have purposely been kept small and simplified. They are intended only to illustrate how using fault trees can assist in making sense of the causes of undesirable failures and how more benign failures can be designed into the process instead.

Given the fault tree for the undesirable event in Figure 3.16, there are several possible alternative approaches to preventing the failure. One or more of the causes (often called basic failures) shown in the bold-lined boxes in Figure 3.16 need to be addressed. To avoid "hand brake not engaged," for example, it is necessary to find ways to ensure that the patient does not forget (Box A) and to provide training (Box B). To avoid "footplate present when it should not be," requires actions to prevent patients from failing to move the footplate and to prevent the footplate from moving back into position for use (caused by Boxes 2 and 3). To prevent "Patient falls during transfer..." (the top event), preventive actions must be taken on both the left and right branches of the fault tree, "failure to land on seat..." and "trip on footplate," respectively. The next several paragraphs show how benign failures might be used to think through design changes that will prevent either branch from resulting in patient falls.

Step 4. Selecting a benign failure mode in Step 4 requires asking, "What failure would be preferable to having a patient fall?" Separating this question into sections, we arrive at:

  1. What failure would be more benign than failing to land on the wheelchair's seat?
  2. What failure would be more benign than tripping on the footplate?

The answer to the first of these questions might be to prevent the wheelchair from rolling (Figure 3.17). Although assuring that the wheelchair does not roll is a failure that completely defeats the purpose of having a wheelchair, this outcome could be better than that of wheelchair users injuring themselves, especially if the failure is temporary. The white box in Figure 3.17 shows one of the causes of the wheelchair rolling away, "armrest used for support, seat vacant," being moved into the fault tree for "wheelchair will not roll." This move generates creative or inventive questions. Can a mechanism be invented in which the brake is always engaged when the seat is vacant? Alternatively, is it possible to develop a brake that is activated when most of a patient's weight is on the armrests instead of on the seat? These creative or inventive questions are the starting places for changing the design of the process. It might be necessary to explore several, perhaps many, possible solutions to find the best one.

Proposing a benign failure converts a problem into a question of creativity: Can we invent a mechanism so that the brake is always engaged when the seat is vacant?

Figure 3.17 also provides the information required by Step 5. This tree identifies the resources necessary for creating the benign failure. In many cases, the basic failure (cause) of the undesirable fault tree can be moved directly onto an existing branch of the benign fault tree, thereby employing an already-existing basic failure as a starting point for creating the benign failure. In this case, the existing basic failure "hand brake engaged" is suggestive of a device that solves the problem: an automatically engaged brake.

It turns out that Steps 6 and 7 are unneeded since an automatic locking device that creates the benign failure suggested in Figure 3.17 is already commercially available. It is a braking system that uses a spring to engage the brake whenever the wheelchair is vacant (Figure 3.18). The brake is disengaged by the weight of the wheelchair occupant, which depresses a lever beneath the seat. The device moves the basic failure, "armrest used for support, seat vacant," from the undesirable fault tree to the benign fault tree. Now, the brakes are automatically engaged when the armrest is used for support (Figure 3.19).

This device comes with a significant secondary problem: it is much more difficult to move empty wheelchairs around because their brakes are always engaged. However, this problem is not difficult to resolve. A hand-activated brake release enables attendants to override the automatic brake system.
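The brake's behavior can be summarized as a small piece of logic. The weight threshold below is an invented illustrative value; the actual spring mechanism, not software, implements this on the real device:

```python
def brake_engaged(occupant_weight_kg: float, release_lever_held: bool,
                  threshold_kg: float = 20.0) -> bool:
    """Sketch of the spring-loaded brake's logic: engaged whenever the
    seat is effectively vacant, unless an attendant holds the hand-
    activated release. The 20 kg threshold is purely illustrative."""
    seat_vacant = occupant_weight_kg < threshold_kg
    return seat_vacant and not release_lever_held

assert brake_engaged(0.0, release_lever_held=False)       # vacant chair cannot roll
assert not brake_engaged(70.0, release_lever_held=False)  # occupant's weight frees the brake
assert not brake_engaged(0.0, release_lever_held=True)    # attendant override
print("brake logic ok")
```

Note how the override resolves the secondary problem without reintroducing the original hazard: an empty chair rolls only while an attendant is deliberately holding the release.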

Is this a "good" solution (part of Step 8)? The locking device is a very effective solution. The probability of falls is reduced dramatically. The device provides a control measure capable of preventing the error of not engaging the hand brake. The cost of the product is moderate: while it is not affordable out of daily operating funds, funds allocated from a unit-level budget would probably be adequate. The implementation would most likely be considered easy, depending on the culture of the organization. The only training required would be to point out the brake release to the staff. If more than minimal resistance is expected, the organization's problems go well beyond those that mistake-proofing is likely to help. Consequently, the SPN equals 18, the second highest possible score.

Calculating the SPN indicates that this device is a promising direction for improvement. It does not provide a definitive answer to the question of whether this device should or should not be implemented. The remaining tasks in Step 8 must be performed, including an assessment of device reliability, device trials, maintenance planning, drafting process documentation, etc.

Will the automatic brake eliminate the possibility of the top event, "patient falls during transfers?" No. The other branch of the tree, "trip on footplate," must also be addressed. Returning to Step 4, the second question is: What failure would be preferable to having a patient trip on a footplate? One possible response would be the failure of having the footplate absent or completely unavailable.

Step 5. What would cause the footplate of a wheelchair (Figure 3.20) to be absent or unavailable? Figure 3.21 indicates a few possibilities. It could have broken off due to an impact or other excessive force. This failure suggests solutions involving parts that break away, like ski bindings, which release the boot from the ski under the correct amount and direction of force to prevent injuries to the skier's legs. Perhaps the footplate should be designed so that it snaps off when the patient's entire weight is put on it; or, if the footplate is bumped from the rear, it could easily detach from the chair frame. The footplate might also be absent or unavailable because it has been disassembled and removed intentionally.

This failure suggests that the footplate should be present to hold the patient's feet while the chair is in use, but the footplate would, ideally, be absent at the time of entry or exit from the wheelchair. A situation in which the footplate is ideally both present and absent is an example of what TRIZ users call "an inherent contradiction," or a "physical contradiction."

Step 6. Step 6 recommends using creative or inventive tools to find directions for further exploration. TRIZ offers ready approaches for resolving contradictions. A contradiction is defined as:

Opposition between things or properties of things. There are two kinds of contradictions:

  1. Tradeoffs—a situation in which if something good happens, something bad also happens; or, if something good gets better, something undesirable gets worse.
  2. Inherent contradictions—a situation in which one thing has two opposite properties.20

Once a contradiction is identified in the language of TRIZ, approaches used by other inventors to solve that contradiction can be looked up on a large matrix. TRIZ approaches for resolving the contradiction of incompatible requirements at different times include the following eight approaches. The number following each approach is the TRIZ "principle number." These numbers are consistent throughout the TRIZ literature, although their descriptive label will vary among authors (as shown in parentheses below). A brief description of each follows. More detailed information is available from other sources.21

  1. Segmentation (fragmentation) (1): "Divide an object into independent parts."21 "Make an object easy to disassemble."20 Example: flexible poles of dome-shaped tents can be folded compactly when not in use (Figure 3.22).
  2. Preliminary counteraction (9): "Preload countertension to an object to compensate excessive and undesirable stress."21
  3. Preliminary action (10): "Perform required changes to an object completely or partially in advance."21 For example (Figure 3.23): Glue a strong cord inside a shipping box and connect a "pull here" tab before use to make it easier to open the box later.
  4. Beforehand compensation (cushion in advance) (11): "Compensate for relatively low reliability of an object with emergency measures prepared in advance." For example: Automotive airbags or guard rails, especially the ends that are designed to attenuate impact (Figure 3.24).
  5. Dynamic parts (dynamicity) (15): "Allow (or design) the characteristics of an object ...to change to be optimal..."20 "Divide an object into elements capable of changing their position relative to each other."21 For example: Flaps on airplane wings (Figure 3.25).
  6. Periodic action (19): "Instead of continuous actions, use periodic or pulsating actions."20 For example: Sprinkler does not damage soil by applying water in droplets instead of a steady stream (Figure 3.26).
  7. Hurrying (rushing through) (21): "Perform harmful and hazardous operations at very high speed" (Figure 3.27). For example: For a given surgical procedure, the more rapidly it can be done, the better.21
  8. Discarding and recovery (rejecting and regenerating parts) (34): "Make portions of an object that have fulfilled their functions go away (discard by dissolving, evaporating, etc.), or modify them directly during the operation. Conversely, restore consumable parts of an object directly in operation."20 For example (Figure 3.28): Dissolvable polylactide screws and pins are used in surgery to mend broken bones, which makes a second operation for their removal unnecessary.20
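As a rough illustration of the lookup step described above, the eight principles could be stored and retrieved as a simple table. This sketch is invented for illustration only: the dictionary below is not the real TRIZ contradiction matrix (which maps 39 engineering parameters against each other), and the key name `SEPARATION_IN_TIME` is an assumed label for the "incompatible requirements at different times" contradiction.

```python
# Illustrative sketch only: a toy lookup table mapping TRIZ principle
# numbers to their labels. The real TRIZ contradiction matrix is far
# larger; these entries are just the eight principles listed in the text.
PRINCIPLES = {
    1: "Segmentation (fragmentation)",
    9: "Preliminary counteraction",
    10: "Preliminary action",
    11: "Beforehand compensation (cushion in advance)",
    15: "Dynamic parts (dynamicity)",
    19: "Periodic action",
    21: "Hurrying (rushing through)",
    34: "Discarding and recovery",
}

# Hypothetical index: the principles that address a requirement that
# conflicts in time ("present while in use, absent during transfer").
SEPARATION_IN_TIME = [1, 9, 10, 11, 15, 19, 21, 34]

def approaches_for(contradiction_key):
    """Return (principle number, label) pairs for a contradiction category."""
    return [(n, PRINCIPLES[n]) for n in contradiction_key]

for number, label in approaches_for(SEPARATION_IN_TIME):
    print(f"Principle {number}: {label}")
```

The point of the sketch is only that TRIZ turns an open-ended brainstorming problem into a structured lookup: classify the contradiction, then retrieve the principles other inventors have used to resolve it.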

Clearly, not all of these approaches seem promising for this particular problem of having the footplate present at some times but not at others. However, preliminary counteraction, preliminary action, and the related approaches of segmentation, dynamic parts, and discarding and recovery seem promising.

The approaches suggested earlier by the multiple fault trees fit nicely into these TRIZ recommendations. Having breakaway parts is an example of discarding and recovery. Approaches suggested by the TRIZ principle of "preliminary counteraction" involve putting a barrier in place that will keep the foot from getting behind the footplate, and creating a spring-loaded footplate assembly that is raised whenever the foot is not resting on it. The barrier could take the form of a heel strap or leg strap (Figures 3.29 and 3.30).

Another approach that TRIZ suggests is dynamic parts. Figure 3.31 shows that dynamic parts are already in use: some wheelchairs have footplates that can be released and swung to the side. What this solution lacks is a system for causing it to happen automatically: a simple means of detecting whether the footplates should swing out of the way. Because patients could conceivably reposition themselves by putting most of their weight on the armrests, footplates, and chair back, having a seat lever mechanism similar to the previous example could be problematic. Perhaps in this case the setting function (or detection system) would need to test for total weight exerted on the wheels instead of on the seat itself. What should the final solution be? That remains an open question, but the engineering task does not appear so daunting that it would be difficult to develop if someone were so inclined.
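To make that detection idea concrete, here is a minimal sketch of such a setting function. Everything specific in it is hypothetical: the function name, the empty-chair weight, and the occupancy margin are invented calibration values, not measurements from any real wheelchair.

```python
# Hypothetical sketch of the setting (detection) function described
# above. EMPTY_CHAIR_WEIGHT_KG and OCCUPANCY_MARGIN_KG are assumed
# calibration values for illustration only.
EMPTY_CHAIR_WEIGHT_KG = 15.0
OCCUPANCY_MARGIN_KG = 10.0

def footplates_should_retract(weight_on_wheels_kg):
    """Swing the footplates aside when the total weight on the wheels
    is close to the empty-chair value, suggesting the patient is
    entering or exiting rather than seated (even if leaning on the
    armrests, footplates, or chair back)."""
    occupied = weight_on_wheels_kg > EMPTY_CHAIR_WEIGHT_KG + OCCUPANCY_MARGIN_KG
    return not occupied

print(footplates_should_retract(16.0))  # chair nearly empty -> True
print(footplates_should_retract(90.0))  # patient seated -> False
```

Whether weight on the wheels is in fact a reliable occupancy signal is exactly the open engineering question the text raises; the sketch only shows the shape of the decision the detection system would make.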


Conclusion

Changing the design of processes is a critical task in reducing the human errors that plague health care. Of course, the term design can mean different things in different contexts. In the contexts of mistake-proofing and FMEA, it means physical changes in the design of processes. Only these design changes are adequate to escape the repetitive revisiting, follow-up, and preventive actions that FMEA otherwise requires.


Adequate design changes will remove a single point of weakness, create one or more effective control measures, or make the hazard so obvious that control measures are not needed. All of these required actions suggest forms of mistake-proofing. Moreover, experts from the disciplines of engineering, cognitive psychology, quality management, and medicine all agree that these process design changes should not bring processes to a stop. They should introduce benign failures into a process.

A series of tools and some novel applications of those tools were presented in an eight-step process designed to generate and evaluate several potential solutions, and an application example was presented. While the process was presented in a relatively linear step-by-step fashion, real-life circumstances will occasionally prove to be less linear. While all the steps need to be considered eventually, the designer of mistake-proofing devices can opportunistically skip a step, or may return to some steps repeatedly on an iterative basis. The goal is not to complete the eight steps. The goal is to rapidly find inexpensive, effective solutions that are easy to implement. The process is only an aid in accomplishing this objective.

The approach presented here represents an incremental step in thinking about mistake-proofing medical processes. Future research and experience will provide additional tools, techniques, and enhanced approaches for designing effective medical processes that will have the ability to prevent specific, undesirable failure modes. Some of the limitations, drawbacks, and design issues of mistake-proofing are presented in the next chapter.


References

1. Berwick DM. Not again! BMJ 2001;322:247-8.

2. Department of Health, The Design Council. Design for patient safety. London: Department of Health and The Design Council; 2003.

3. Joint Commission on Accreditation of Healthcare Organizations. Patient safety: essentials for health care, 2nd ed. Oakbrook Terrace, IL: Joint Commission Resources; 2004.

4. Automotive Industry Action Group. Process failure mode effect analysis. Southfield, MI: Automotive Industry Action Group; 1995.

5. DeRosier J, et al. Using health care failure mode and effect analysis: The VA National Center for Patient Safety's prospective risk analysis system. Jt Comm J Qual Improv 2002; 28:248-67.

6. Norman DA. The design of everyday things. New York: Doubleday; 1989.

7. Petroski H. Designed to fail. American Scientist 1997; 85:412-46.

8. Shingo S. Zero quality control: Source inspection and the poka-yoke system. New York: Productivity Press; 1986.

9. Croteau RJ, Schyve PM. Proactively error-proofing health care processes, in Spath PL, ed. Error reduction in health care. Chicago: AHA Press; 2000.

10. Webster's new universal unabridged dictionary. Definition 3. New York: Barnes and Noble Books; 1996.

11. Goodwin O. Giving rise to the modern city. Chicago: Ivan R. Dee; 2001.

12. Kaplan S. New tools for failure and risk analysis/anticipatory failure determination™ (AFD™) and the theory of scenario structuring. Southfield, MI: Ideation International Inc; 1999.

13. Goldratt EM. It's not luck. Great Barrington, MA: North River Press; 1994.

14. Gano DL. Apollo root cause analysis: a new way of thinking. Yakima, WA: Apollonian Publications; 1999.

15. Berg K, Hines M, Allen S. Wheelchair users at home: few home modifications and many injurious falls. Am J Public Health 2002; 92(1):48.

16. Tideiksaar R. Risk management: Falls and wheelchair safety. RN+ Fall Prevention and Restraint Reduction Newsletter. http://www.rnplus.com/pdf.newsletter.1/april.04.pdf. Accessed July 2005.

17. Calder C, Kirby R. Fatal wheelchair-related accidents in the United States. Am J Phys Med Rehabil 1990;69:184-90.

18. Ummat S, Kirby R. Nonfatal wheelchair-related accidents reported to the National Electronic Injury Surveillance System. Am J Phys Med Rehabil 1994;73:163-7.

20. Rantanen K, Domb E. Simplified TRIZ: New problem-solving applications for engineers and manufacturing professionals. Boca Raton, FL: CRC Press; 2002.

21. Altshuller GS, Shulyak L. 40 Principles: TRIZ keys to technical innovation. Trans. Rodmans, Shulyak L. Worcester, MA: Technical Innovation Center; 2001.


Page last reviewed May 2007
Internet Citation: Chapter 3. How to Mistake-Proof the Design: Mistake-Proofing the Design of Health Care Processes -. May 2007. Agency for Healthcare Research and Quality, Rockville, MD. https://archive.ahrq.gov/professionals/quality-patient-safety/patient-safety-resources/resources/mistakeproof/mistake3.html