
Chapter 4. Incident Reporting

Heidi Wald, M.D.
University of Pennsylvania School of Medicine

Kaveh G. Shojania, M.D.
University of California, San Francisco School of Medicine

Background

Errors in medical care are discovered through a variety of mechanisms. Historically, medical errors were revealed retrospectively through morbidity and mortality committees and malpractice claims data. Prominent studies of medical error have used retrospective chart review to quantify adverse event rates.1,2 While collection of data in this manner yields important epidemiologic information, it is costly and provides little insight into potential error reduction strategies. Moreover, chart review only detects documented adverse events and often does not capture information regarding their causes. Important errors that produce no injury may go completely undetected by this method.3-6

Computerized surveillance may also play a role in uncovering certain types of errors. For instance, medication errors may be discovered through a search for naloxone orders for hospitalized patients, as they presumably reflect the need to reverse overdose of prescribed narcotics.7,8 Several studies have demonstrated success with computerized identification of adverse drug events.9-11
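The trigger-based approach described above amounts to a simple screening query against pharmacy records. As a minimal sketch (the table and field names here are hypothetical; real systems query institution-specific pharmacy databases), a naloxone screen might look like:

```python
import sqlite3

# Hypothetical schema for a pharmacy orders table; actual surveillance
# systems use institution-specific records, so names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE med_orders (
    patient_id TEXT, drug TEXT, order_time TEXT)""")
conn.executemany(
    "INSERT INTO med_orders VALUES (?, ?, ?)",
    [("p1", "morphine", "2001-03-01 08:00"),
     ("p1", "naloxone", "2001-03-01 14:30"),  # trigger: possible overdose reversal
     ("p2", "cefazolin", "2001-03-01 09:15")])

# Trigger query: a naloxone order presumably reflects reversal of a
# prescribed-narcotic overdose, so it flags the chart for manual review.
triggers = conn.execute(
    "SELECT patient_id, order_time FROM med_orders WHERE drug = 'naloxone'"
).fetchall()
for patient_id, order_time in triggers:
    print(f"Review chart for {patient_id}: naloxone ordered at {order_time}")
```

In practice the flagged charts are then reviewed by hand, so the query's job is only to narrow the search, not to confirm an adverse drug event.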

Complex, high-risk industries outside of healthcare, including aviation, nuclear power, petrochemical processing, steel production, and military operations, have successfully developed incident reporting systems for serious accidents and important "near misses." Incident reporting systems cannot provide accurate epidemiologic data, as the reported incidents likely underestimate the numerator, and the denominator (all opportunities for incidents) remains unknown.

Given the limited availability of sophisticated clinical computer systems and the tremendous resources required to conduct comprehensive chart reviews, incident reporting systems remain an important and relatively inexpensive means of capturing data on errors and adverse events in medicine. Few rigorous studies have analyzed the benefits of incident reporting. This chapter reviews only the literature evaluating the various systems and techniques for collecting error data in this manner, rather than the benefit of the practice itself. This decision reflects our acknowledgment that incident reporting has clearly played a beneficial role in other high-risk industries.6 The decision also stems from our recognition that a measurable impact of incident reporting on clinical outcomes is unlikely because there is no standard practice by which institutions handle these reports.

Practice Description

Flanagan first described the critical incident technique in 1954 to examine military aircraft training accidents.12 Critical incident reporting involves the identification of preventable incidents (i.e., occurrences that could have led, or did lead, to an undesirable outcome13) reported by personnel directly involved in the process in question at the time of discovery of the event. The goal of critical incident monitoring is not to gather epidemiologic data per se, but rather to gather qualitative data. Nonetheless, if a pattern of errors seems to emerge, prospective studies can be undertaken to test epidemiologic hypotheses.14

Incident reports may target events in any or all of 3 basic categories: adverse events, "no harm events," and "near misses." For example, anaphylaxis to penicillin clearly represents an adverse event. Intercepting the medication order prior to administration would constitute a near miss. By contrast, if a patient with a documented history of anaphylaxis to penicillin received a penicillin-like antibiotic (e.g., a cephalosporin) but happened not to experience an allergic reaction, it would constitute a no harm event, not a near miss. In other words, when an error does not result in an adverse event for a patient, because the error was "caught," it is a near miss; if the absence of injury is owed to chance it is a no harm event. Broadening the targets of incident reporting to include no harm events and near misses offers several advantages. These events occur 3 to 300 times more often than adverse events,5,6 they are less likely to provoke guilt or other psychological barriers to reporting,6 and they involve little medico-legal risk.14 In addition, hindsight bias15 is less likely to affect investigations of no harm events and near misses.6,14
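The three-way distinction above can be expressed as a simple classification rule. The sketch below follows the chapter's definitions only (the function name and labels are illustrative, not any standard coding scheme):

```python
def classify_incident(error_occurred: bool, intercepted: bool, harm: bool) -> str:
    """Classify an incident using the chapter's three categories:
    - error intercepted before reaching the patient -> near miss
    - error reached the patient, no injury by chance -> no harm event
    - error reached the patient and caused injury    -> adverse event
    """
    if not error_occurred:
        return "no incident"
    if intercepted:
        return "near miss"
    return "adverse event" if harm else "no harm event"

# The penicillin examples from the text:
print(classify_incident(True, False, True))   # anaphylaxis occurs
print(classify_incident(True, True, False))   # order intercepted before administration
print(classify_incident(True, False, False))  # cephalosporin given, no reaction by luck
```

The key branch is `intercepted`: whether the absence of injury reflects an error being "caught" (near miss) or mere chance (no harm event).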

Barach and Small describe the characteristics of incident reporting systems in non-medical industries.6 Established systems share the following characteristics:

  • They focus on near misses.
  • They provide incentives for voluntary reporting.
  • They ensure confidentiality.
  • They emphasize systems approaches to error analysis.

The majority of these systems were mandated by federal regulation but provide for voluntary reporting. All of the systems encourage narrative description of the event. Reporting is promoted by providing incentives including:

  • Immunity.
  • Confidentiality.
  • Outsourcing of report collation.
  • Rapid feedback to all involved and interested parties.
  • Sustained leadership support.6

Incident reporting in medicine takes many forms. Since 1975, the US Food and Drug Administration (FDA) has mandated reporting of major blood transfusion reactions, focusing on preventable deaths and serious injuries.16 Although the critical incident technique found some early applications in medicine,17,18 its current use is largely attributable to Cooper, who introduced incident reporting to anesthesia in 1978 by conducting retrospective interviews with anesthesiologists about preventable incidents or errors that occurred while patients were under their care.19 Recently, near miss and adverse event reporting systems have proliferated in single institutions (such as intensive care units (ICUs)20,21), in regional settings (such as the New York State transfusion system22), and for national surveillance (e.g., the National Nosocomial Infections Surveillance System administered by the federal Centers for Disease Control and Prevention).23

All of the above examples focus on types of events (transfusion events or nosocomial infections) or areas of practice (ICUs). Incident reporting in hospitals cuts a wider swath, capturing errors and departures from expected procedures or outcomes (Table 4.1). However, because risk management departments tend to oversee incident reporting systems in some capacity, these systems more often focus on incident outcomes, not categories. Few data describe the operation of these institution-specific systems, but underreporting appears endemic.24

In 1995, hospital-based surveillance was mandated by the Joint Commission on the Accreditation of Healthcare Organizations (JCAHO)26 because of a perception that incidents resulting in harm were occurring frequently.28 JCAHO employs the term sentinel event in lieu of critical incident, and defines it as follows:

An unexpected occurrence involving death or serious physical or psychological injury, or the risk thereof. Serious injury specifically includes loss of limb or function. The phrase "or the risk thereof" includes any process variation for which a recurrence would carry a significant chance of a serious adverse outcome.26

As one component of its Sentinel Event Policy, JCAHO created a Sentinel Event Database. The JCAHO database accepts voluntary reports of sentinel events from member institutions, patients and families, and the press.26 The particulars of the reporting process are left to the member healthcare organizations. JCAHO also mandates that accredited hospitals perform root cause analysis (see Chapter 5) of important sentinel events. Data on sentinel events are collated, analyzed, and shared through a Web site,29 an online publication,30 and its newsletter Sentinel Event Perspectives.31

Another example of a national incident reporting system is the Australian Incident Monitoring Study (AIMS), under the auspices of the Australian Patient Safety Foundation.13 Investigators created an anonymous and voluntary near miss and adverse event reporting system for anesthetists in Australia. Ninety participating hospitals and practices named on-site coordinators. The AIMS group developed a form that was distributed to participants. The form contained instructions, definitions, space for narrative of the event, and structured sections to record the anesthesia and procedure, demographics about the patient and anesthetist, and what, when, why, where, and how the event occurred. The results of the first 2000 reports were published together, following a special symposium.32

The experiences of the JCAHO Sentinel Event Database and the Australian Incident Monitoring Study are explored further below.

Prevalence and Severity of the Target Safety Problem

The true prevalence of events appropriate for incident reporting is impossible to estimate with any accuracy, as it includes actual adverse events as well as near misses and no harm events. The Aviation Safety Reporting System (ASRS), a national reporting system for near misses in the airline industry,33,34 currently processes approximately 30,000 reports annually,35 exceeding the total number of airline accidents each year by several orders of magnitude.34 Reports submitted to a comparable system in healthcare would presumably number in the millions if all adverse events, no harm events, and near misses were captured.

By contrast, over 6 years of operation, the JCAHO Sentinel Event Database has captured only 1152 events, 62% of which occurred in general hospitals. Two-thirds of the events were self-reported by institutions, with the remainder coming from patient complaints, media stories and other sources.29 These statistics are clearly affected by underreporting and consist primarily of serious adverse events (76% of events reported resulted in patient deaths), not near misses. As discussed in the chapter on wrong-site surgeries (Subchapter 43.2), comparing JCAHO reports with data from the mandatory incident reporting system maintained by the New York State Department of Health36 suggests that the JCAHO statistics underestimate the true incidence of target events by at least a factor of 20.

Opportunities for Impact

Most hospitals' incident reporting systems fail to capture the majority of errors and near misses.24 Studies of medical services suggest that only 1.5% of all adverse events result in an incident report37 and only 6% of adverse drug events are identified through traditional incident reporting or a telephone hotline.24 The American College of Surgeons estimates that incident reports generally capture only 5-30% of adverse events.38 A study of a general surgery service showed that only 20% of complications on a surgical service ever resulted in discussion at Morbidity and Mortality rounds.39 Given the endemic underreporting revealed in the literature, modifications to the configuration and operation of the typical hospital reporting system could yield higher capture rates of relevant clinical data.

Study Designs

We analyzed 5 studies that evaluated different methods of critical incident reporting. Two studies prospectively compared incident reporting with observational data collection24,39 and one utilized retrospective chart review.37 Two additional studies examined enhanced incident reporting through active solicitation of physician input, compared with background hospital quality assurance (QA) measures.40,41 In addition, we reviewed JCAHO's report of its Sentinel Event Database and the Australian Incident Monitoring Study, both because of their large size and high profile.13,26 Additional reports of critical incident reporting systems in the medical literature consist primarily of uncontrolled observational trials42-44 that are not reviewed in this chapter.

Study Outcomes

In general, published studies of incident reporting do not seek to establish the benefit of incident reporting as a patient safety practice. Their principal goal is to determine if incident reporting, as it is practiced, captures the relevant events.40 In fact, no studies have established the value of incident reporting on patient safety outcomes.

The large JCAHO and Australian databases provide data about reporting rates, and an array of quantitative and qualitative information about the reported incidents, including the identity of the reporter, time of report, severity and type of error.13,26 Clearly these do not represent clinical outcomes, but they may be reasonable surrogates for the organizational focus on patient safety. For instance, increased incident reporting rates may not be indicative of an unsafe organization, but may reflect a shift in organizational culture to increased acceptance of quality improvement and other organizational changes.3,5

None of the studies reviewed captured outcomes such as morbidity or error rates. The AIMS group published an entire symposium which reported the quantitative and qualitative data regarding 2000 critical incidents in anesthesia.13 However, only a small portion of these incidents were prospectively evaluated.14 None of the studies reviewed for this chapter performed formal root cause analyses on reported errors (Chapter 5).

Evidence for Effectiveness of the Practice

As described above, 6 years of JCAHO sentinel event data have captured merely 1152 events, none of which include near misses.29 Despite collecting what is likely to represent only a fraction of the target events, JCAHO has compiled the events, reviewed the root cause analyses and provided recommendations for procedures to improve patient safety for events ranging from wrong-site surgeries to infusion pump-related adverse events (Table 4.2). This information may prove to be particularly useful in the case of rare events such as wrong-site surgery, where national collection of incidents can yield a more statistically useful sample size.

The first 2000 incident reports to AIMS from 90 member institutions were published in 1993.13 In contrast to the JCAHO data, all events were self-reported by anesthetists and only 2% of events reported resulted in patient deaths. A full 44% of events had negligible effect on patient outcome. Ninety percent of reports had identified systems failures, and 79% had identified human failures. The AIMS data were similar to those of Cooper19 in terms of percent of incidents with reported human failures, timing of events with regard to phase of anesthesia, and type of events (breathing circuit misconnections were between 2% and 3% in both studies).13,19 The AIMS data are also similar to American "closed-claims" data in terms of pattern, nature and proportion of the total number of reports for several types of adverse events,13 which lends further credibility to the reports.

The AIMS data, although also likely affected by underreporting given their voluntary nature, clearly capture a higher proportion of critical incidents than the JCAHO Sentinel Event Database. Despite coming from only 90 participating sites, AIMS received more reports over a similar time frame than JCAHO did from the several thousand accredited United States hospitals. This disparity may be explained by the fact that AIMS institutions were self-selected, and that the culture of anesthesia is more attuned to patient safety concerns.47

The poor capture rate of incident reporting systems in American hospitals has not gone unnoticed. Cullen et al24 prospectively investigated usual hospital incident reporting compared to observational data collection for adverse drug events (ADE logs, daily solicitation from hospital personnel, and chart review) in 5 patient care units of a tertiary care hospital. Only 6% of ADEs were identified and only 8% of serious ADEs were reported. These findings are similar to those in the pharmacy literature,48,49 and are attributed to cultural and environmental factors. A similar study on a general surgical service found that 40% of patients suffered complications.39 While chart documentation was excellent (94%), only 20% of complications were discussed at Morbidity and Mortality rounds.

Active solicitation of physician reporting has been suggested as a way to improve adverse event and near miss detection rates. Weingart et al41 employed direct physician interviews supplemented by email reminders to increase detection of adverse events in a tertiary care hospital. The physicians reported a set of adverse events almost entirely distinct from those captured by the hospital incident reporting system: of 168 events, only one was reported by both methods. O'Neil et al37 used e-mail to elicit adverse events from housestaff and compared these with those found on retrospective chart review. Of 174 events identified, 41 were detected by both methods. The house officers appeared to capture preventable adverse events at a higher rate (62.5% v. 32%, p=0.003). In addition, the hospital's risk management system detected only 4 of the 174 adverse events. Welsh et al40 employed prompting of house officers at morning report to augment hospital incident reporting systems. Reporting overlapped for only 2.6% of the 341 adverse events that occurred during the study. In addition, although the number of events house officers reported increased with daily prompting, it decreased rapidly when prompting ceased. In summary, there is evidence that active solicitation of critical incident reports from physicians can augment existing databases, identifying incidents not detected through other means, although the response may not be durable.37,40,41

Potential for Harm

Users may view reporting systems with skepticism, particularly the system's ability to maintain confidentiality and shield participants from legal exposure.28 In many states, critical incident reporting and analysis count as peer review activities and are protected from legal discovery.28,50 However, other states offer little or no protection, and reporting events to external agencies (e.g., to JCAHO) may obliterate the few protections that do exist. In recognition of this problem, JCAHO's Terms of Agreement with hospitals now includes a provision identifying JCAHO as a participant in each hospital's quality improvement process.28

Costs and Implementation

Few estimates of costs have been reported in the literature. In general, authors remarked that incident reporting was far less expensive than retrospective review. One single center study estimated that physician reporting was less costly ($15,000) than retrospective record review ($54,000) over a 4-month period.37 A survey of administrators of reporting systems from non-medical industries reported a consensus that costs were far offset by the potential benefits.6

Comment

The wide variation in reporting of incidents may have more to do with reporting incentives and local culture than with the quality of medicine practiced at a given institution.24 When institutions prioritize incident reporting among medical staff and trainees, however, incident reporting systems seem to capture a set of events distinct from those captured by chart review and traditional risk management,40,41 and events captured in this manner may be more preventable.37

The addition of anonymous or non-punitive systems is likely to increase the rates of incident reporting and detection.51 Other investigators have also noted increases in reporting when new systems are implemented and a culture conducive to reporting is maintained.40,52 Several studies suggest that direct solicitation of physicians results in reports that are more likely to be distinct, preventable, and more severe than those obtained by other means.8,37,41

The nature of incident reporting, replete with hindsight bias, lost information, and lost contextual clues, makes it unlikely that robust data will ever link it directly with improved outcomes. Nonetheless, incident reporting appears to be growing in importance in medicine. The Institute of Medicine report, To Err is Human,53 has prompted continuing calls in the United States for mandatory reporting of medical errors.54-57 England's National Health Service plans to launch a national incident reporting system as well, which has raised concerns similar to those voiced in the American medical community.58 While the literature to date does not permit an evidence-based resolution of the debate over mandatory versus voluntary incident reporting, it is clear that incident reporting represents just one of several potential sources of information about patient safety and that these sources should be regarded as complementary. In other industries, incident reporting has succeeded when it is mandated by regulatory agencies or is anonymous and voluntary on the part of reporters, and when it provides incentives and feedback to reporters.6 The ability of healthcare organizations to replicate the successes other industries have had with incident reporting systems6 will undoubtedly depend in large part on the uses to which they put these data. Specifically, success or failure may depend on whether healthcare organizations use the data to fuel institutional quality improvement rather than to generate individual performance evaluations.

Table 4.1. Examples of events reported to hospital incident reporting systems25-27

Adverse Outcomes

  • Unexpected death or disability.
  • Inpatient falls or "mishaps."
  • Institutionally-acquired burns.
  • Institutionally-acquired pressure sores.
  • Errors or unexpected complications related to the administration of drugs or transfusion.

Procedural Breakdowns

  • Discharges against medical advice ("AMA").
  • Significant delays in diagnosis or diagnostic testing.
  • Breach of confidentiality.
  • Performance of a procedure on the wrong body part ("wrong-site surgery").
  • Performance of a procedure on the wrong patient.

Catastrophic Events

  • Infant abduction or discharge to wrong family.
  • Rape of a hospitalized patient.
  • Suicide of a hospitalized patient.

Table 4.2. Sentinel event alerts published by JCAHO following analysis of incident reports46

  • Medication Error Prevention—Potassium Chloride.
  • Lessons Learned: Wrong Site Surgery.
  • Inpatient Suicides: Recommendations for Prevention.
  • Preventing Restraint Deaths.
  • Infant Abductions: Preventing Future Occurrences.
  • High-Alert Medications and Patient Safety.
  • Operative and Postoperative Complications: Lessons for the Future.
  • Fatal Falls: Lessons for the Future.
  • Infusion Pumps: Preventing Future Adverse Events.
  • Lessons Learned: Fires in the Home Care Setting.
  • Kernicterus Threatens Healthy Newborns.

References

1. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-376.

2. Studdert D, Thomas E, Burstin H, Zbar B, Orav J, Brennan T. Negligent care and malpractice claiming behavior in Utah and Colorado. Medical Care 2000;38:250-260.

3. Van der Schaaf T. Near miss reporting in the Chemical Process Industry [Doctoral Thesis]. Eindhoven, The Netherlands: Eindhoven University of Technology; 1992.

4. Ibojie J, Urbaniak S. Comparing near misses with actual mistransfusion events: a more accurate reflection of transfusion errors. Br J Haematol 2000;108:458-460.

5. Battles J, Kaplan H, Van der Schaaf T, Shea C. The attributes of medical event-reporting systems. Arch Pathol Lab Med 1998;122:231-238.

6. Barach P, Small S. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ 2000;320:759-763.

7. Whipple J, Quebbeman E, Lewis K, Gaughan L, Gallup E, Ausman R. Identification of patient controlled analgesia overdoses in hospitalized patients: a computerized method of monitoring adverse events. Ann Pharmacother 1994;28:655-658.

8. Bates D, Makary M, Teich J, Pedraza L, Ma'luf N, Burstin H, et al. Asking residents about adverse events in a computer dialogue: how accurate are they? Jt Comm J Qual Improv 1998;24:197-202.

9. Jha A, Kuperman G, Teich J. Identifying adverse drug events: development of a computer-based monitor and comparison with chart review and stimulated voluntary report. J Am Med Inform Assoc 1998;5:305-314.

10. Naranjo C, Lanctot K. Recent developments in computer-assisted diagnosis of putative adverse drug reactions. Drug Saf 1991;6:315-322.

11. Classen D, Pestotnik S, Evans R, Burke J. Computerized surveillance of adverse drug events in hospital patients. JAMA 1991;266:2847-2851.

12. Flanagan J. The critical incident technique. Psychol Bull 1954;51:327-358.

13. Webb R, Currie M, Morgan C, Williamson J, Mackay P, Russell W, et al. The Australian Incident Monitoring Study: an analysis of 2000 incident reports. Anaesth Intens Care 1993;21:520-528.

14. Runciman W, Sellen A, Webb R, Williamson J, Currie M, Morgan C, et al. Errors, incidents and accidents in anaesthetic practice. Anaesth Intens Care 1993;21:506-519.

15. Fischhoff B. Hindsight does not equal foresight: the effect of outcome knowledge on judgment under uncertainty. J Exp Psychol Hum Percept Perform 1975;1:288-299.

16. Food and Drug Administration: Biological products; reporting of errors and accidents in manufacturing. Federal Register 1997;62:49642-49648.

17. Safren MA, Chapanis A. A critical incident study of hospital medication errors. Hospitals 1960;34:32-34.

18. Graham JR, Winslow WW, Hickey MA. The critical incident technique applied to postgraduate training in psychiatry. Can Psychiatr Assoc J 1972;17:177-181.

19. Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anesthesia mishaps: a study of human factors. Anesthesiology 1978;49:399-406.

20. Buckley T, Short T, Rowbottom Y, Oh T. Critical incident reporting in the intensive care unit. Anaesthesia 1997;52:403-409.

21. Hart G, Baldwin I, Gutteridge G, Ford J. Adverse incident reporting in intensive care. Anaesth Intens Care 1994;22:556-561.

22. Linden JV, Paul B, Dressler KP. A report of 104 transfusion errors in New York State. Transfusion 1992;32:601-606.

23. Emori T, Edwards J, Culver D, Sartor C, Stroud L, Gaunt E, et al. Accuracy of reporting nosocomial infections in intensive-care-unit patients to the National Nosocomial Infections Surveillance System: a pilot study. Infect Control Hosp Epidemiol 1998;19:308-316.

24. Cullen D, Bates D, Small S, Cooper J, Nemeskal A, Leape L. The incident reporting system does not detect adverse events: a problem for quality improvement. Jt Comm J Qual Improv 1995;21:541-548.

25. Incident reports. A guide to legal issues in health care. Philadelphia, PA: Trustees of the University of Pennsylvania. 1998:127-129.

26. Joint Commission on the Accreditation of Healthcare Organizations. Sentinel event policy and procedures. Available at: http://JCAHO.org/sentinel/se_pp.html. Accessed March 15, 2001.

27. Fischer G, Fetters M, Munro A, Goldman E. Adverse events in primary care identified from a risk-management database. J Fam Pract 1997;45:40-46.

28. Berman S. Identifying and addressing sentinel events: an Interview with Richard Croteau. Jt Comm J Qual Improv 1998;24:426-434.

29. Joint Commission on Accreditation of Healthcare Organizations. Sentinel Event Statistics. Available at: http://www.JCAHO.org/sentinel/se_stats.html. Accessed April 16, 2001.

30. Joint Commission on Accreditation of Healthcare Organizations. Sentinel Event Alert. Available at: http://JCAHO.org/edu_pub/sealert/se_alert.html. Accessed May 14, 2001.

31. Lessons learned: sentinel event trends in wrong-site surgery. Joint Commission Perspectives 2000;20:14.

32. Symposium: The Australian Incident Monitoring Study. Anaesth Intens Care 1993;21:506-695.

33. Billings C, Reynard W. Human factors in aircraft incidents: results of a 7-year study. Aviat Space Environ Med 1984;55:960-965.

34. National Aeronautics and Space Administration (NASA). Aviation Safety Reporting System. Available at: http://www-afo.arc.nasa.gov/ASRS/ASRS.html. Accessed March 14, 2000.

35. Leape L. Reporting of medical errors: time for a reality check. Qual Health Care 2000;9:144-145.

36. New York State Department of Health. NYPORTS—The New York Patient Occurrence and Tracking System. Available at: http://www.health.state.ny.us/nysdoh/commish/2001/nyports/nyports.htm. Accessed May 31, 2001.

37. O'Neil A, Petersen L, Cook E, Bates D, Lee T, Brennan T. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med 1993;119:370-376.

38. Data sources and coordination. In: American College of Surgeons, editor. Patient safety manual. Rockville, MD: Bader & Associates, Inc; 1985.

39. Wanzel K, Jamieson C, Bohnen J. Complications on a general surgery service: incidence and reporting. CJS 2000;43:113-117.

40. Welsh C, Pedot R, Anderson R. Use of morning report to enhance adverse event detection. J Gen Intern Med 1996;11:454-460.

41. Weingart SN, Ship AN, Aronson MD. Confidential clinician-reported surveillance of adverse events among medical inpatients. J Gen Intern Med 2000;15:470-477.

42. Frey B, Kehrer B, Losa M, Braun H, Berweger L, Micallef J, et al. Comprehensive critical incident monitoring in a neonatal-pediatric intensive care unit: experience with the system approach. Intensive Care Med 2000;26:69-74.

43. Findlay G, Spittal M, Radcliffe J. The recognition of critical incidents: quantification of monitor effectiveness. Anaesthesia 1998;53:589-603.

44. Flaatten H, Hevroy O. Errors in the intensive care unit (ICU). Acta Anaesthesiol Scand 1999;43:614-617.

45. Connolly C. Reducing error, improving safety: relation between reported mishaps and safety is unclear. BMJ 2000;321:505.

46. The Joint Commission on Accreditation of Healthcare Organizations. Sentinel Event Alert. Available at: http://www.JCAHO.org/edu_pub/sealert/se_alert.html. Accessed May 31, 2001.

47. Gaba D. Anaesthesiology as a model for patient safety in health care. BMJ 2000;320.

48. Shannon R, De Muth J. Comparison of medication error detection methods in the long term care facility. Consulting Pharmacy 1987;2:148-151.

49. Allan E, Barker K. Fundamentals of medication error research. Am J Hosp Pharm 1990;47:555-571.

50. Rex J, Turnbull J, Allen S, Vande Voorde K, Luther K. Systematic root cause analysis of adverse drug events in a tertiary referral hospital. Jt Comm J Qual Improv 2000;26:563-575.

51. Kaplan H, Battles J, Van Der Schaaf T, Shea C, Mercer S. Identification and classification of the causes of events in transfusion medicine. Transfusion 1998;38:1071-1081.

52. Hartwig S, Denger S, Schneider P. Severity-indexed, incident report-based medication error-reporting program. Am J Hosp Pharm 1991;48:2611-2616.

53. Kohn L, Corrigan J, Donaldson M, editors. To Err Is Human: Building a Safer Health System. Washington, DC: Committee on Quality of Health Care in America, Institute of Medicine. National Academy Press. 2000.

54. Pear R. U.S. health officials reject plan to report medical mistakes. New York Times Jan 24 2000: A14(N), A14(L).

55. Prager LO. Mandatory reports cloud error plan: supporters are concerned that the merits of a plan to reduce medical errors by 50% are being obscured by debate over its most controversial component. American Medical News March 13 2000:1,66-67.

56. Murray S. Clinton to call on all states to adopt systems for reporting medical errors. Wall Street Journal Feb 22 2000:A28(W), A6(E).

57. Kaufman M. Clinton Seeks Medical Error Reports; Proposal to Reduce Mistakes Includes Mandatory Disclosure, Lawsuit Shield. Washington Post Feb 22 2000:A02.

58. Mayor S. English NHS to set up new reporting system for errors. BMJ 2000;320:1689.


Page last reviewed July 2001
Internet Citation: Chapter 4. Incident Reporting. July 2001. Agency for Healthcare Research and Quality, Rockville, MD. http://archive.ahrq.gov/research/findings/evidence-based-reports/services/quality/er43/ptsafety/chapter4.html