Environmental Scan of Patient Safety Education and Training Programs

Chapter 2. Electronic Searchable Catalog

In this chapter, we detail the development of the standardized taxonomy that served as the basis for the query tool in the searchable database of patient safety education and training programs, the development of the template for abstracting information about included programs, and the searchable catalog itself. For the full reports, please refer to the previous deliverables, Standardized Taxonomy for Environmental Scan2 and Standardized Template for Data Abstraction.3

Standardized Taxonomy

Initially, AIR developed a taxonomy of features to categorize patient safety education and training programs, designed to serve as the platform for the search engine of the resulting catalog. We began by conducting a thorough review of the programs identified during the environmental scan phase of this project, with the aim of compiling a list of the most common, salient characteristics of these patient safety education and training programs. AIR's project team then discussed each of these characteristics and identified additional information that could be critical for inclusion in the taxonomy. We grouped the resulting list of characteristics into categories to form the basis of the taxonomy, which was then refined in collaboration with AHRQ.

AIR Taxonomy Categories

AIR grouped common program characteristics into five main categories, as follows:

  • Content.
  • Instructional strategy.
  • Mode of delivery.
  • Instructional model.
  • Available evaluation measures.

Information about program content was identified as particularly important for end users of the database, who likely will want to search for programs that cover specific aspects of patient safety training, such as:

  • Communication—techniques for encouraging effective team communication.
  • Disease-specific focus—programs that focus on patient safety for a specific condition (e.g., diabetes or cancer).
  • Driving change—techniques for managing organizational change.
  • Health Care Failure Mode and Effects Analysis (HFMEA)—a five-step process that uses an interdisciplinary team to proactively evaluate a health care process.
  • Human factors—techniques for mitigating how environmental, organizational, and job factors, as well as human and individual characteristics, influence work behavior in a way that negatively affects health and safety.
  • Just culture—techniques for facilitating an organizational culture that promotes accountability and medical error reporting.
  • Risk assessment—techniques for evaluating whether an organization is putting its patients at risk.
  • Root cause analysis—techniques for identifying sources of problems or incidents.
  • Specific patient care issues—programs that focus on one specific area of patient safety, such as fall prevention, medication safety, or surgical infection.
  • Teamwork—techniques for encouraging effective collaboration between staff members.
  • Triage questions—techniques for categorizing patients based on their needs or likely benefit from treatment.

Instructional Strategy

Instructional strategy refers to the types of methods used to train program participants. These methods include, but are not limited to:

  • Information—the provision of materials containing patient safety concepts.
  • Demonstration—displaying or showing patient safety concepts to participants.
  • Practice—exercises or assignments that allow participants to practice using training concepts either within the classroom environment or on the job.
  • Feedback—evaluative or corrective information provided to participants as they practice using patient safety concepts.

Mode of Delivery

Mode of delivery refers to the primary method or medium in which the program is conducted. These methods include:

  • Classroom instruction—participants gather in a classroom setting where they are taught patient safety concepts in person by a trainer.
  • Self-directed study—the program does not have a trainer and instead relies on the participant reviewing provided materials at his/her own pace and initiative.
  • Web-based training—the program is facilitated by a trainer over the Internet via Web conferencing software or via video, DVD, or CD-ROM.

Instructional Model

Programs were also grouped based on the model used to disseminate training information to each participant's organization. These models include the following:

  • External training—participants are taught outside their facility by a trainer or team of trainers from an external organization.
  • Internal training—training is conducted by the health care facility for its own staff.
  • Academic training—training is offered as part of a degree-seeking or certificate-granting program for health care professionals.

Available Evaluation Measures

Some programs identified via the environmental scan included measures for evaluating the overall effectiveness of the training program. These measures typically followed the Kirkpatrick model of training evaluation5 and include:

  • Level 1 Participant Reaction to Training—the most basic level, which measures participant reactions to the training. Results usually illustrate how much the participants liked the training (i.e., affective reactions).
  • Level 2 Participant Learning—evaluates the extent to which learning has occurred as a result of training. More specifically, it can measure changes in trainee attitudes, improvements in knowledge, or increases in skill as a result of the training program.
  • Level 3 Transfer of Training to the Job—measures if and how concepts taught in the program are put into practice by participants and the extent to which behavior (i.e., job performance) has changed as a result of the training.
  • Level 4 Training Impact—measures the program's overall impact on patient safety at the participating organization, including outcomes such as improved patient safety, improved processes and/or interventions, and improved communications.

Coordination with PSNet

AHRQ is currently considering the merits of having the catalog reside on or be incorporated with the existing PSNet. As a result, we examined the taxonomy for the PSNet search engine against the structure developed for the catalog and coordinated with the PSNet developers to ensure consistency between the two databases. AHRQ facilitated two telephone meetings with PSNet developers from the University of California, San Francisco and Silverchair Information Systems (www.silverchair.com). The taxonomy used for PSNet includes the following categories:

  • Setting of care—the location where the case took place or the facility in which the error occurred.
  • Target audience—the audience to whom the publication is directed or that is most likely to read it.
  • Clinical area—the medical specialty related to the article, including in which field the case/error took place.
  • Safety target—the type of concern/issue presented in the case; which area of safety was breached by the error.
  • Error types—classification of error(s) in order to identify root cause(s) and offer solution(s).
  • Approach to improving safety—solutions to the problems.
  • Origin/sponsor—author location takes precedence over publication source, which takes precedence over funding source.

Given the similarities between the two taxonomies, we combined their common elements. However, direct application of the PSNet taxonomy was limited because it applies primarily to publications, whereas this project focuses on education and training programs. Despite the differences in purpose underlying the two taxonomies, AIR combined the relevant elements of both so that the searchable database could be merged with PSNet in the future should the need arise. The following categories were used in the final version of the taxonomy:

  • Mode of delivery (as specified in the AIR taxonomy).
  • Instructional strategy (as specified in the AIR taxonomy).
  • Available evaluation measures (as specified in the AIR taxonomy).
  • Program sponsor (PSNet's Origin or Sponsor category options with an additional optional write-in field for the name of specific sponsors).
  • Clinical area (as specified in the PSNet taxonomy).
  • Content area (PSNet's Safety Target and Approaches to Improving Safety category options, integrating unique content options from AIR's Content category).


Standardized Database Template for Data Abstraction

Next, AIR developed a template for abstracting information for programs identified during the environmental scan phase of this project into the database. In this section, we provide information on how we developed the standardized templates and categories, the definitions of each data field, and the templates used to populate the searchable Microsoft Access database.

Template Development

AIR conducted a comprehensive review of existing patient safety program catalogs, which informed our team's understanding of the available and relevant information. Through this process, we identified a series of elemental questions for each piece of the framework included in the final database template. The framework (as detailed in Methodology and Inclusion/Exclusion Criteria) consists of seven categories of information:

  • Inclusion criteria.
  • Background information.
  • Pre-training.
  • Content.
  • Design and delivery.
  • Implementation.
  • Post-training.

Data Fields by Category

During the data abstraction phase, AIR collected and categorized elements of each patient safety education and training program. The database template included the list of inclusion criteria (as a double check during abstraction to ensure that included programs are still relevant), as well as programmatic features categorized into each of the seven major categories (Exhibit 5). To facilitate data abstraction, AIR drafted a set of pointed questions to determine pertinent program information for abstraction, including the data fields defined in the standardized taxonomy.

Exhibit 5. Data Abstraction Template by Category

Inclusion Criteria
  • Patient Safety Oriented—Is the core content of the training program truly patient safety oriented?
  • Instructional Objectives—Is the program based on core instructional objectives?
  • Target Audience—Is the target audience health care professionals, health care students (medical school, nursing school, EMT, etc.), patients and families, or another stakeholder group?
  • Current in the United States—Is the education or training program currently being offered in the United States?
  • Adapted for Health Care—Is the training program designed for another industry and merely being applied to quality improvement and patient safety?

Background
  • Sponsor Type—Is the program sponsored by a private company, nonprofit organization, the Federal Government, an academic institution, or jointly sponsored?
  • Origin/Sponsor—What is the name of the program's sponsor?
  • URL—What is the Web address for the program?
  • Reach—Does the program have nationwide, statewide, community-wide, school-wide, or institution-wide applications?

Pre-Training
  • Prerequisites—Does the program have prerequisites for participation? What are the prerequisites for participation (e.g., reading, coursework)?

Content
  • Evidentiary Basis—Is the program evidence-based? What evidence forms the basis of the program?
  • Content Areas—What are the program content areas (e.g., teamwork, root cause analysis)?
  • Program Objectives/Description—What are the program's objectives, or how is the program described?
  • Learning Objectives (by module)—What are the objectives of each module?
  • AHRQ Tools and Resources—What AHRQ patient safety tools and resources are used in the program?
  • Organizational Needs Assessment—Does the program include an organizational needs assessment? What kind of organizational needs assessment is included (e.g., survey, external, internal)?
  • Cultural Readiness Assessment—Does the program include a cultural readiness assessment? What kind of cultural readiness assessment is included (e.g., survey, external, internal)?
  • In-service Delivery Option—Does the program include an in-service delivery option?
  • Clinical Area—Which medical specialty does the program target?

Design and Delivery
  • Training Delivered By—What is the title/organization of the person who delivers the training?
  • Mode of Delivery—How is the program delivered (e.g., in person, via the Web)?
  • Instructional Strategy—What educational approaches are used to train participants (information, demonstration, practice, feedback)?
  • Instructional Model—Is the training delivered internally, externally, or in an academic setting?
  • Target Audience—Who are the participants, by job title?
  • Setting of Care—What type of organization is the program geared toward?

Implementation
  • Travel Requirement—Is travel required for participation in the program?
  • Length of Program—How long does the program take to complete?
  • Continuing Education Credits—Does the program provide credits for completion? How many CE credits/hours are awarded after completion of the program? What credentials are awarded (e.g., CE credits, degrees)? What is the accrediting body for the credentials?
  • Certification—Does the program provide a certification? What kind of certification does the program provide?
  • Per Person Cost—How much does the program cost per person?
  • Approaches to Implementation—What are the approaches to implementation (e.g., dosing, targeted implementation)?
  • Recommendations for Implementation—How are the program resources rolled out, or recommended to be rolled out (e.g., master trainer, Internet)?

Post-Training
  • Evaluative Methods—Does the vendor provide evaluation services? On which of Kirkpatrick's levels of evaluation can the program be evaluated?
  • Followup Components—What followup methods are used to sustain change?
  • Incentives and Reinforcement—What methods are used to reinforce and reward positive teamwork behaviors, team progress, and sustained change?



Database Development

Using the standardized template, AIR's database development team created a Microsoft Access database with two functional components: (1) data entry and (2) search engine.

Data Entry

The data entry process was designed to minimize error in abstraction through a series of drop-down menus, checkboxes, and write-in data fields. Abstraction itself refers to the method of extracting the details of each program that are either readily available or identifiable through additional inquiries.

We abstracted information identified through our comprehensive environmental scan of patient safety education programs into the Access database. The data entry fields were grouped into five data entry tabs based on the categories in the abstraction template: (1) inclusion criteria/background/pre-training, (2) content, (3) design and delivery, (4) implementation, and (5) post-training. A screen shot of each data entry screen is presented in Appendix B.

Data abstraction was a multi-step process, beginning with the data abstraction team reviewing all potential programs captured during the environmental scan against the inclusion criteria. Each team member evaluated programs he or she did not review initially during the environmental scan phase. This was done as a quality control measure to ensure that all programs were reviewed by multiple researchers.

Programs that met the inclusion criteria were abstracted into the Access database. All programs that were not patient safety oriented and those not currently available in the United States were marked for exclusion. Programs that appeared to be patient safety oriented but lacked enough information for abstraction, as well as programs that raised additional questions, were flagged for a subsequent round of reviews by another member of the data abstraction team. Researchers met to discuss whether a program should be excluded from the database, was ready for abstraction, or whether the program's sponsor should be contacted for more information. In cases where consensus among researchers could not be reached, another researcher (the Project Director or the Principal Investigator) was asked to assess whether the abstraction had been conducted correctly.

Many programs did not have detailed objectives and only presented brief descriptions of the program. Even when objectives were provided, they were often vaguely worded. In these cases, the team included the programs if sufficient information about relevant content was identified. When we were unable to identify content areas or objectives, we contacted program sponsors for more information. The final decision was to exclude any program from the catalog if: (1) the program lacked identifiable content areas or objectives and (2) the vendor either did not respond to our inquiries for more information or the vendors' responses did not provide sufficient information about the program for abstraction as deemed by the project team.

As part of our quality control efforts, members of the abstraction team validated the abstracted records prepared by their team members. This process consisted of: (1) evaluating each program against the inclusion and exclusion criteria; (2) ensuring that all searchable fields, especially Content Areas, were properly captured; (3) a final review of each taxonomical category; (4) a final review for grammar and punctuation; and (5) a check of the program sponsor's Web site for any additional patient safety education and training programs. Weekly meetings were held for researchers to cross-reference their findings and to assess the extent of inter-rater agreement. This served as frame-of-reference training for all researchers to develop a shared mental model of appropriate abstraction protocol.

As evident from the final database, our ability to populate the fields was dependent on the amount of information available at the primary information source (in most cases, the Internet). Thus, in cases where available information was limited, we were not able to populate all of the fields.

Query Tool

AIR also developed a query tool to allow the end user to search for programs based on the data fields and characteristics outlined in the abstraction template. AIR, in collaboration with AHRQ, identified the following data fields to serve as the foundation for the query tool:

  • Program name.
  • Program sponsor.
  • Mode of delivery.
  • Instructional strategy.
  • Available evaluation measures.
  • Content area.

These categories were selected because they were deemed to be the most relevant to the end user and yielded the richest information. That is, some categories, although important and of value to the end user, did not contain information that demonstrated any variability across programs. This was most often due to the limited or insufficient information available during data abstraction. All information abstracted into the database is presented in the final query result.

Features of the Query Tool

The query tool has several features, including write-in fields, checkboxes, and a nested search feature with "and/or" decision rules. Screen shots of the search screen are presented in Appendix C. Exhibit 6 outlines the decision rules underlying the multiple selection feature of the query tool.

To reduce the possibility of error and facilitate use of the query tool, there are only two write-in search fields, Program Name and Other Sponsor. Other Sponsor was created as a write-in field to compensate for the design of the PSNet taxonomy, which was intended to capture the location and/or publisher of a publication. All other fields are designed with checkboxes, allowing a user to see the possible options for the search field rather than having to guess possible search terms.

The Program Sponsor and Content Area fields have a nested search feature. That is, if a user selects a high-level option, its corresponding lower-level options will automatically be included in the search. For example, if Error Analysis is selected, then Failure Mode and Effects Analysis, Narrative/Storytelling, and Root Cause Analysis will also be selected because they are specific examples of Error Analysis. When a user selects multiple options in the Program Sponsor search field, programs meeting any of the criteria will be displayed in the query results. This rule also holds true for Content Area and Mode of Delivery.

When multiple options are selected in either Instructional Strategy or Available Evaluation Measures fields, all criteria must be met for a program to be included in the query results. For example, if Information and Demonstration are selected as instructional strategies, only programs that used both strategies will be displayed in the query results. Using one or the other is not sufficient for inclusion. When a user selects options across multiple search fields, the individual criteria within each search field must be met in order for a program to be included in the query results.
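The decision rules described above can be sketched in code. The following is a minimal illustration, not the actual Microsoft Access implementation; the program records, field names, and nesting map are hypothetical examples.

```python
# Illustrative sketch of the query tool's decision rules. The records and
# field names below are hypothetical, not the actual database schema.

# Nested search: selecting a high-level option also selects its children.
NESTED = {
    "Error Analysis": {"Failure Mode and Effects Analysis",
                       "Narrative/Storytelling", "Root Cause Analysis"},
}

# Fields whose multiple selections are combined with AND; all other
# fields (e.g., sponsor, content area, mode of delivery) use OR.
AND_FIELDS = {"instructional_strategy", "evaluation_measures"}

def expand(selected):
    """Add lower-level options nested under any selected high-level option."""
    expanded = set(selected)
    for option in selected:
        expanded |= NESTED.get(option, set())
    return expanded

def matches(program, criteria):
    """Apply per-field OR/AND rules; every field's rule must be satisfied
    (i.e., AND across search fields)."""
    for field, selected in criteria.items():
        selected = expand(selected)
        values = set(program.get(field, []))
        if field in AND_FIELDS:
            if not selected <= values:   # every selected option required
                return False
        else:
            if not selected & values:    # any overlap suffices
                return False
    return True

programs = [
    {"name": "Program A",
     "content_area": ["Root Cause Analysis"],
     "instructional_strategy": ["Information", "Demonstration"]},
    {"name": "Program B",
     "content_area": ["Teamwork"],
     "instructional_strategy": ["Information"]},
]

criteria = {"content_area": {"Error Analysis"},
            "instructional_strategy": {"Information", "Demonstration"}}
results = [p["name"] for p in programs if matches(p, criteria)]
print(results)  # ['Program A']
```

Program A matches because the Error Analysis selection expands to include Root Cause Analysis (an OR field), and the program uses both selected instructional strategies (an AND field); Program B fails the content area rule.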

Exhibit 6. Decision Rules for Multiple Selection Feature of the Query Tool

Program Name
  Options: [write-in field].
  Multiple select results: not applicable.

Program Sponsor
  Options: up to 19 options, including Other and:
  • Department of Health and Human Services:
    • Agency for Healthcare Research and Quality.
    • Centers for Disease Control and Prevention.
    • Centers for Medicare & Medicaid Services.
    • Food and Drug Administration.
  Multiple select results: if a main heading is selected, the subheadings below it are also searched; programs meeting any of the criteria are displayed in the results.

Other Program Sponsor
  Options: [write-in field].
  Multiple select results: not applicable.

Mode of Delivery
  Options:
  • Classroom Instruction.
  • Web-based Training.
  • Self-directed Study.
  Multiple select results: programs meeting any of the criteria are displayed in the results.

Instructional Strategy
  Options:
  • Information.
  • Demonstration.
  • Practice.
  • Feedback.
  Multiple select results: all criteria must be met for a program to be included in the query results.

Available Evaluation Measures
  Options:
  • Level 1 Participant Reaction to Training.
  • Level 2 Participant Learning.
  • Level 3 Transfer of Training.
  • Level 4 Training Impact.
  Multiple select results: all criteria must be met for a program to be included in the query results.

Content Area
  Options: up to 140 options, including:
  • Error Analysis:
    • Failure Mode and Effects Analysis.
    • Narrative/Storytelling.
    • Root Cause Analysis.
  Multiple select results: if a main heading is selected, the subheadings below it are also searched; programs meeting any of the criteria are displayed in the results.


Query Results

Once a user executes a search, the results are displayed as a series of reports, one for each program that matches the search criteria. Each report displays only the information that was available for that program. Fields that could not be populated during data abstraction will not display. Examples of a query result are presented in Appendix D.
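The report behavior described above, displaying only populated fields, can be sketched as follows. This is a hypothetical illustration; the field names and record are examples, not the actual database schema.

```python
# Hypothetical sketch of rendering a query result report that omits
# fields left unpopulated during data abstraction.

def render_report(program):
    """Return report lines for one program, skipping fields with no data."""
    lines = [f"Program: {program['name']}"]
    for field, value in program.items():
        if field == "name" or value in (None, "", []):
            continue  # unpopulated fields are not displayed
        if isinstance(value, list):
            value = "; ".join(value)
        lines.append(f"{field}: {value}")
    return lines

record = {"name": "Example Program",
          "Mode of Delivery": ["Web-based Training"],
          "Per Person Cost": None,   # unavailable at abstraction; omitted
          "Clinical Area": "Medicine"}
for line in render_report(record):
    print(line)
```

Because Per Person Cost was never populated, the rendered report contains only the program name, mode of delivery, and clinical area.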


Results from Data Abstraction

The resulting catalog contains 333 programs. As noted previously, the abstraction phase started with 821 possible programs identified during the environmental scan. Through the course of abstraction and further review, the number of possible patient safety programs increased to 950. We contacted the vendors of 142 programs for more information, of which 15 programs were abstracted and included in the database. Unfortunately, the vendors of 20 programs responded with insufficient information to abstract, and vendors for the remaining 107 programs did not respond to our request for more information. Ultimately, 627 possible programs were excluded from the database.

The number of programs ultimately represented in the catalog reflects the varying state of patient safety education and training programs during the time the environmental scan and data abstraction phases were conducted. For example, AIR identified a number of Quality Improvement Organizations (QIOs) as possible sources of information about training programs during the environmental scan phase. However, at the time that data abstraction was conducted, very few QIOs had any training programs available. Upon contacting these organizations, we learned that the QIOs were in a transition period between the 9th Scope of Work (SOW) and 10th SOW. As a result, if the environmental scan and abstraction occurred at a different time in the 3-year SOW cycle, there would likely have been many more programs from these organizations included in the catalog. The QIOs that responded anticipated they would have new training opportunities in place by mid-2012.

In addition to QIOs that were identified as possible sources of information about patient safety programs, a number of other possible entries from the environmental scan were not included in the final catalog for a variety of reasons. As noted previously, during the environmental scan, we chose to err on the side of inclusion so as not to unnecessarily limit the scope of the final catalog. However, upon further review, many potential programs identified during the environmental scan were ultimately excluded from the catalog because they did not meet the inclusion criteria as well as initially thought, most often because they were only tangentially, not specifically, related to patient safety.

In some cases, program materials were identified during the environmental scan for further investigation; however, upon attempted abstraction, it became clear that the materials were stand-alone presentations that were not associated with an available training program or educational opportunity. In these cases, the record was excluded from the catalog.

Summary of Database Contents

AIR conducted frequency analyses on several key data fields included in the catalog. The results of these analyses are presented in Exhibits 7 through 12 for Content Area, Setting of Care, Clinical Area, Mode of Delivery, Instructional Strategy, and Instructional Model, respectively. Due to the nested nature of the taxonomy and the number of categories and subcategories available for Content Area and Clinical Area, we aggregated these data fields at the highest level. More detailed frequency tables for Content Area and Clinical Area are available in Appendix E.
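The aggregation of nested categories to their top level can be sketched as follows. This is a minimal illustration with hypothetical category mappings and records, not the actual catalog data or hierarchy.

```python
# Illustrative sketch of the frequency analyses: roll nested content areas
# up to their top-level category and count programs per category.
from collections import Counter

# Hypothetical mapping of subcategories to top-level categories.
TOP_LEVEL = {
    "Root Cause Analysis": "Error Reporting and Analysis",
    "Failure Mode and Effects Analysis": "Error Reporting and Analysis",
    "Fall Prevention": "Specific Patient Care Issues",
}

programs = [
    {"content_areas": ["Root Cause Analysis", "Fall Prevention"]},
    {"content_areas": ["Failure Mode and Effects Analysis"]},
]

counts = Counter()
for program in programs:
    # Count each top-level category at most once per program.
    tops = {TOP_LEVEL.get(area, area) for area in program["content_areas"]}
    counts.update(tops)

for category, freq in counts.most_common():
    print(category, freq)
```

Counting each top-level category at most once per program keeps the frequencies interpretable as "number of programs" rather than "number of subcategory mentions."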

Content Area

The Content Area data field specifies subject areas targeted during training. Of the 142 options specified within the content area data field, only 103 options were actually used during data abstraction. Exhibit 7 presents the number of programs that include instructional material in each of the 26 top-level content area categories in descending order of frequency. Please note that the Education and Training category and its subcategories were excluded from the database because this information was captured in the Mode of Delivery, Target Audience, and Implementation data fields of the abstraction template.

Exhibit 7. Content Area Frequencies

Content Area Categories Frequency
Error Reporting and Analysis 206
Quality Improvement Strategies 186
Communication Improvement 179
Culture of Safety 151
Medication Safety 126
Risk Analysis 114
Teamwork 112
Human Factors Engineering 109
Technological Approaches 73
Legal and Policy Approaches 57
Driving Change 56
Logistical Approaches 56
Specific Patient Care Issues 52
Medical Complications 26
Surgical Complications 25
Psychological and Social Complications 21
Diagnostic Errors 18
Identification Errors 18
Nonsurgical Procedural Complications 15
Fatigue and Sleep Deprivation 13
Specialization of Care 10
Device-related Complications 6
Discontinuities, Gaps, and Hand-Off Problems 4
Transfusion Complications 4
Triage Questions 2
Education and Training 0


Some notable content areas that were not found during data abstraction include Postoperative Surgical Complications and Preoperative Complication under the top-level category of Surgical Complications. Additionally, fewer results than might be expected were found for Device-Related Complications and Technological Approaches, given the recent increased focus on health care information technology and the general public's overall reliance on technology.

Setting of Care

The Setting of Care data field specifies the type of health care setting to which programs may be targeted. Unfortunately, many programs did not specify a target setting of care, and the category of Hospitals was coded as the default setting of care. Exhibit 8 presents the number of programs targeting particular settings of care.

Exhibit 8. Setting of Care Frequencies

Taxonomy ID Setting of Care Frequency
102 Hospitals 319
103  -- General Hospitals 65
104  ---- Intensive Care Units 4
105  ---- Emergency Departments 23
106  ---- Operating Room 17
107  ---- Labor and Delivery 0
109  -- Children's Hospitals 19
110  -- Specialty Hospitals 2
112 Ambulatory Care 36
113  -- Home Care 3
114  -- Ambulatory Clinic or Office 3
115  -- Outpatient Pharmacy 9
108 Psychiatric Facilities 14
111 Residential Facilities 26
116 Outpatient Surgery 14
117 Patient Transport 3


As can be seen in Exhibit 8, setting of care was not typically specified in detail, which we suspect is due to a reluctance to limit consumer use of the programs. That is, these programs may be valuable to many different settings because of the generalizability of the knowledge and skills required to improve patient safety across settings.

Clinical Area

The Clinical Area data field captures the targeted specialty or specialties for which the programs were designed. As with Setting of Care, many programs did not specify a target clinical area. In these cases, the top-level category of Medicine was coded as the default clinical area. Exhibit 9 presents the number of programs targeting each of the six top-level clinical area categories in descending order of frequency. Again, the lack of specification of a clinical area may be due to the generalizability of the material across clinical specialties.

Exhibit 9. Clinical Area Frequencies

Clinical Area Category Frequency
Medicine 323
Nursing 45
Pharmacy 34
Allied Health Services 8
Dentistry 1
Complementary and Alternative Medicine 0


Mode of Delivery

The Mode of Delivery data field allows multiple options to be selected, including self-directed study, Web-based training, and classroom instruction; each program specifies at least one mode of delivery. Exhibit 10 presents the number of programs specifying each of these options. As evident in the exhibit, self-directed study and Web-based training were the most common modes of delivery for patient safety instruction.

Exhibit 10. Mode of Delivery Frequencies

Mode of Delivery Options Frequency
Self-directed Study 251
Web-based Training 211
Classroom Instruction 148


Instructional Strategy

Similar to Mode of Delivery, the Instructional Strategy data field, which specifies the educational approaches used to train participants, allows for multiple options to be selected, including information, demonstration, practice, and feedback. That is, programs typically included more than one approach to presenting and learning material. Exhibit 11 presents the number of programs specifying each of these instructional strategy options.

Exhibit 11. Instructional Strategy Frequencies

Instructional Strategy Options Frequency
Information 333
Demonstration 126
Practice 103
Feedback 56


Notably, only 56 programs indicated that they provide feedback. However, it may be more likely that this small number is due to a lack of sufficient information available on the Internet than to programs not including this approach. Programs that include opportunities to practice a new skill typically also provide feedback to reinforce behaviors.

Instructional Model

Finally, the Instructional Model data field provides information about how a program may be conducted—internally (i.e., training that can be conducted within one's organization), externally (i.e., training offered outside one's organization), and through an academic institution (i.e., a program offered by an academic institution and typically involving a degree or certification). Exhibit 12 presents the number of programs specifying each of these options.

Exhibit 12. Instructional Model Frequencies

Instructional Model Options   Frequency
External Training                   278
Academic Education                   50
Internal Training                    11


One possible explanation for the low number of programs specifying the internal training model is that little information was available about the extent to which external training programs can be offered for internal use by health care organizations.

Issues Encountered During Data Abstraction

AIR encountered a number of issues during data abstraction, including the timing of the scanning and abstraction phases, programs that were not publicly available, other exclusion factors, and a lack of available information.

Timing of Project Phases

As noted previously, the timing of the two phases of this project limited the number of programs included in the final catalog. New programs were likely created or made available on the Internet after we completed the environmental scan phase and thus were not identified during the data abstraction phase. Likewise, some programs identified during the environmental scan were no longer available at the time of abstraction and had to be excluded from the catalog. Conversely, programs abstracted early in the process may have since become inactive or unavailable yet remain in the catalog.

Programs Not Publicly Available

An important criterion for catalog inclusion is that the program be available to the general public. As a result, some programs identified during the environmental scan phase were later excluded from the catalog because they were not in fact publicly available. For example, one medical school program, Masters in Patient Safety Leadership, was excluded because its classes are open only to currently enrolled students. In addition, certain medical school programs, residencies, and fellowships identified during the environmental scan were excluded because they lacked a patient safety orientation; in these instances, patient safety was most often a curricular component or the theme of a specific module. Hospital-specific training initiatives also did not meet the public availability criterion, as they are open only to individuals affiliated with the hospitals where they are used.

Other Exclusion Factors

Annual conferences were identified in the environmental scan but ultimately excluded because their content changes each year and lacks instructional objectives. Research journal articles offering continuing medical education credits were also excluded if they were not attached to an actual program of instruction. Although AIR identified a number of health literacy programs during the environmental scan, most were ultimately excluded from the catalog because they focused primarily on health literacy and lacked a patient safety orientation. Programs designed to improve patient safety through increased health literacy, however, were included.

Lack of Information

As discussed previously, the Internet did not provide all of the information we planned to capture during abstraction. The following fields were commonly left blank during data abstraction:

  • AHRQ Tools and Resources. Programs did not typically provide information regarding AHRQ tools and resources, although AHRQ was often cited in their reference lists.
  • Program Focus. It was often difficult to determine whether the program focus was on master trainers or participants; rather, programs appeared to be tailored towards both groups or simply did not specify this information.
  • Approaches to Implementation and Recommendations for Roll-out/Implementation. Programs rarely specified recommendations for effective implementation; such information may be available upon inquiry but is not a standard marketing feature of programs.
  • Clinical Area and Setting of Care. Another difficulty in collecting data came in applying the PSNet taxonomy. These particular fields yield valuable information when applied to publications such as books and articles but are less useful when applied to patient safety educational opportunities and training programs.


Page last reviewed June 2013
Page originally created June 2013
Internet Citation: Chapter 2. Electronic Searchable Catalog. Content last reviewed June 2013. Agency for Healthcare Research and Quality, Rockville, MD. http://www.ahrq.gov/research/findings/final-reports/environmental-scan-programs/envscan-program2.html