Preparing and Analyzing Data, and Producing Reports
After closing out the data collection period, the collected survey data will need to be prepared for analysis. As mentioned in Section 2, you may want to hire a vendor to conduct data entry, data analysis, or to produce feedback reports for your hospital. If you elect to do your own data entry, analysis, and report preparation, this section will guide you through the various decisions and steps. If you choose to hire a vendor, use this section as a guide to establish data preparation protocols. If you choose to conduct a Web-based survey, data coding and cleaning will be minimized because the programming needed to make the survey form interactive and publish it to your Web site will perform some of these steps for you.
You or your vendor will need to accomplish a number of tasks to prepare the survey data for analysis. Several data files will need to be created during the data preparation process; however, it is important to maintain the original data file that is created when survey responses are entered. Any changes or corrections should be made to duplicate files, for 2 reasons:
- Retaining the original file allows you to correct possible future errors made during the data cleaning or recoding processes.
- The original file is important, should you ever want to go back and determine what changes were made to the data set or conduct other analyses or tests.
Identify Complete and Incomplete Surveys
Each survey needs to be examined for completeness prior to entering the survey responses into the data set. A complete survey is one in which every item, or at least most items, has a response. If a few items throughout a survey form have been left blank, or if 1 or 2 entire sections of the survey have not been answered, you may still consider the survey to be sufficiently complete to warrant its inclusion in the data set.
At a minimum, we recommend including only those surveys in which the respondents complete at least one whole section of the survey. If a respondent has not answered most of the items in at least one section of the survey, you will be missing relevant data on too many items. This will become problematic when calculating the safety culture composite scores. Therefore, we recommend using the following criteria to identify incomplete surveys and exclude them from your data set.
Exclude the responses from a survey form if the respondent answered:
- Less than one entire section of the survey.
- Fewer than half of the items throughout the entire survey (in different sections).
- Every item the same (e.g., all "4"s or all "5"s). If every answer is the same, the respondent probably did not give the survey their full attention. Because the survey includes reverse-worded items, a respondent answering consistently would need to use both the high/positive and low/negative ends of the response scale.
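The exclusion criteria above can be sketched in code. This is a minimal illustration, assuming each survey is stored as a dictionary mapping section names to lists of item responses, with None for items left blank; the data layout is an assumption for illustration, not part of the survey specification.

```python
# Completeness check for a single survey, per the three exclusion criteria:
# (1) at least one entire section answered, (2) at least half of all items
# answered, and (3) answered items are not all identical (straight-lining).
# Data layout (dict of section -> list of responses, None = blank) is assumed.

def is_complete(survey):
    """Return True if the survey passes the minimum-completeness criteria."""
    answered_full_section = any(
        all(r is not None for r in items) for items in survey.values()
    )
    all_items = [r for items in survey.values() for r in items]
    answered = [r for r in all_items if r is not None]
    answered_half = len(answered) >= len(all_items) / 2
    # Every answered item identical suggests the respondent was not attentive.
    straight_lined = len(answered) > 1 and len(set(answered)) == 1
    return answered_full_section and answered_half and not straight_lined
```

Surveys failing this check would be set aside rather than entered into the analysis data set.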
Code and Enter the Data
Some problematic answers may need to be coded before the data is entered into an electronic data file. Coding involves decision making with regard to the proper way to enter ambiguous responses. Potential coding issues are described below. These coding steps will not be necessary if you are using a Web-based platform or scannable forms.
Illegible, Mismarked, and Double-Marked Responses
Respondents may provide responses that cannot be read easily or, in some cases, their intended answer may be difficult to determine. For example, a respondent may write in an answer such as 3.5 when they have been instructed to circle only 1 numeric response. Or they may circle 2 answers for 1 item. Develop coding rules for these situations and apply them consistently. Examples of coding rules are to mark all of these types of inappropriate responses as missing, or to use the higher response when two responses are provided (e.g., a response with both 2 and 3 would convert to a 3). Once surveys have been coded as necessary (most surveys will not need coding), the data can be entered into an electronic file using statistical software such as SAS® or SPSS®, a Microsoft Excel® spreadsheet, or a flat file or text file that can be easily imported into a data analysis software program.
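One possible implementation of such coding rules is sketched below. The "use the higher response when two are circled" rule follows the example in the text; it is one option among several, and marking all ambiguous responses as missing is an equally valid rule.

```python
# Sketch of a coding rule for ambiguous paper-survey responses, assuming
# raw answers arrive as strings. The "take the higher mark" rule is the
# example rule from the text, not a fixed standard.

def code_response(raw):
    """Return an int 1-5, or None when the response must be coded as missing."""
    marks = [int(c) for c in str(raw) if c in "12345"]
    if not marks:
        return None       # illegible or blank -> code as missing
    return max(marks)     # double-marked (e.g., "2 and 3") -> higher response
```

Whatever rule you adopt, document it and apply it uniformly across all surveys.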
If identifiers (identification numbers or codes) were used on surveys, once you close out data collection, destroy any information linking the identifiers to individual names, because you no longer need this information and you want to eliminate the possibility of linking responses on the electronic file to individuals. Once the linkage information is destroyed, you may enter the identification number in the electronic data file. If no identifiers were used on the surveys or if you wish to include a different identifier in the data file, create an identification number for each survey and write it on the surveys in addition to entering it into the electronic data file. This identifier can be as simple as numbering the returned surveys consecutively, beginning with the number one. This number will enable you to go back and check the electronic data file against the respondents' original answers if there are values that look like they were entered incorrectly.
Respondents are given the opportunity to provide written comments at the end of the survey. Comments can be used to obtain direct quotes for feedback purposes. If you wish to analyze these data further, the responses will need to be coded according to the type of comment that was made. For example, staff may respond with positive comments about patient safety efforts in their unit. Or, they may comment on some negative aspects of patient safety that they think need to be addressed. You may assign code numbers to similar types of comments and later tally the frequency of each comment type. Open-ended comments may be coded either before or after the data has been entered electronically.
Once the surveys have been coded as necessary and entered electronically, it is necessary to check and clean the data file before you begin analyzing and reporting results. The data file may contain errors. You can check and clean the data file electronically by producing frequencies of responses to each item and looking for out-of-range values or values that are not valid responses. Most items in the survey require a response between 1 and 5. Check through the data file to ensure that all responses are within the valid range (e.g., that a response of "7" has not been entered for a question requiring a response between 1 and 5). If out-of-range values are found, return to the original survey and determine the response that should have been entered.
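The out-of-range check above can be automated. The following sketch assumes the entered data sit in a CSV file in which each column is a survey item coded 1 through 5 and a blank cell is a missing response; the file layout and column names are illustrative.

```python
# Minimal out-of-range scan for an entered data file. Assumes a CSV with a
# header row, one column per survey item, values coded 1-5, blank = missing.
import csv

def find_out_of_range(path, valid=range(1, 6)):
    """Yield (row_number, column, value) for entries outside the valid range."""
    with open(path, newline="") as f:
        # Data rows start on line 2 of the file (line 1 is the header).
        for row_num, row in enumerate(csv.DictReader(f), start=2):
            for col, raw in row.items():
                if raw == "":
                    continue  # missing response, not an entry error
                if not raw.isdigit() or int(raw) not in valid:
                    yield (row_num, col, raw)
```

Each flagged entry points back to a specific survey and item, so you can retrieve the original paper form and correct the value in a duplicate of the data file.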
Analyze the Data and Produce Reports of the Results
Feedback reports are the final step in a survey project and are critical for synthesizing the collected information. Ideally, feedback should be provided broadly: to hospital management, administrators, boards of directors, hospital committees, and to hospital staff, either through their units or through a centralized communications tool such as E-mail or newsletters. The more broadly the results are disseminated, the more useful the information is likely to become. The feedback also will serve to legitimize the collective effort of the respondents and their participation in the survey. It is gratifying and important for respondents to know that something worthwhile came out of the information they provided. Different types of feedback reports can be prepared for each different audience, from 1- or 2-page executive summaries to more complete reports that use statistics to draw conclusions or make comparisons.
Frequencies of Response
One of the simplest ways to present results is to calculate the frequency of response for each survey item. We developed a Microsoft® PowerPoint® presentation to accompany this Survey User's Guide, with modifiable feedback report templates that you may use to communicate results from the Hospital Survey on Patient Safety Culture. The feedback report template groups survey items according to the safety culture dimension each item is intended to measure. You can easily adapt the PowerPoint® template by inserting your hospital's survey findings in the charts to create a customized feedback report. You can also customize the report to display unit-level data in addition to hospital-level data. To make the results easier to view in the report, the 2 lowest response categories have been combined (Strongly Disagree/Disagree and Never/Rarely) and the 2 highest response categories have been combined (Strongly Agree/Agree and Most of the time/Always). The midpoints of the scales are reported as a separate category (Neither or Sometimes). The percentage of answers corresponding to each of the three response categories is then displayed graphically:
Sample Graph Displaying Frequencies of Response to an Item
Because each survey item most likely will have some missing data, missing responses are excluded from the total (or denominator) when calculating these percentages. In the example shown, assume there were 200 total survey respondents. Twenty people did not answer this particular item, however, so the total number of people who responded to the item was 180. The percentage of respondents who Strongly Agreed/Agreed was 50 percent, or 90/180. The percentages of respondents who Strongly Disagreed/Disagreed and who responded "Neither" were each 25 percent, or 45/180. Excluding missing data from the total allows the percentages of responses within a graph to sum to 100 (actually 99 to 101, due to the rounding of decimals to whole numbers).
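The calculation described above can be sketched as follows, with missing responses excluded from the denominator. The response codes follow the survey's 1-to-5 agreement scale, and the collapsing into three display categories mirrors the feedback report template; the list-of-responses data layout is an assumption for illustration.

```python
# Percent of respondents in each of the three collapsed display categories,
# with missing responses (None) excluded from the denominator.

def item_percentages(responses):
    """Return percentages (low, middle, high) among non-missing responses."""
    answered = [r for r in responses if r is not None]
    n = len(answered)
    low = sum(1 for r in answered if r in (1, 2))   # Strongly Disagree/Disagree
    mid = sum(1 for r in answered if r == 3)        # Neither
    high = sum(1 for r in answered if r in (4, 5))  # Strongly Agree/Agree
    return (100 * low / n, 100 * mid / n, 100 * high / n)
```

Using the example from the text (200 respondents, 20 missing, 90 agreeing, 45 disagreeing, 45 neutral), this returns 25, 25, and 50 percent.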
There are placeholder pages in the electronic feedback report template for highlighting your hospital's strengths and areas needing improvement with respect to the patient safety issues covered in the survey. We define patient safety strengths as those positively worded items that about 75 percent of respondents endorsed by answering "Strongly Agree/Agree" or "Always/Most of the time" (or those negatively worded items that about 75 percent of respondents disagreed with). The 75-percent cutoff is somewhat arbitrary, and your hospital may choose to report strengths using a higher or lower cutoff percentage. Similarly, areas needing improvement are identified as those items that 50 percent or fewer of respondents answered positively (they either answered negatively or "Neither" to positively worded items, or they agreed with negatively worded items). The cutoff percentage for areas needing improvement is lower because, if half of the respondents are not expressing positive opinions with regard to a safety issue, there probably is room for improvement.
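The cutoff logic above reduces to a simple classification on each item's percent-positive score. The 75- and 50-percent thresholds follow the text, and, as noted there, your hospital may adjust them.

```python
# Classify a survey item by its percent-positive score, using the (adjustable)
# 75-percent strength and 50-percent improvement cutoffs described in the text.

def classify_item(pct_positive, strength_cutoff=75, improvement_cutoff=50):
    """Return "strength", "needs improvement", or "neither" for an item."""
    if pct_positive >= strength_cutoff:
        return "strength"
    if pct_positive <= improvement_cutoff:
        return "needs improvement"
    return "neither"
```

For reverse-worded items, the percent-positive input would be the percentage who disagreed, consistent with the definitions above.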
It also is important to present frequency information about the background characteristics of all the respondents as a whole—the units to which they belong, how long they have worked in the hospital or their unit, their staff position, etc. This information helps others to better understand whose opinions are being represented in the data. Be careful not to report frequencies in small categories (e.g., the number of hospital presidents who responded), where it may be possible to determine which employees fall into those categories.
Composite Frequencies of Response
The survey items can be grouped into dimensions of safety culture, and so it can be useful to calculate one overall frequency for each dimension. One way of doing this is to create a composite frequency of the total percentage of positive responses for each safety culture dimension. Composites can be computed for individual units or sections of a hospital, or for the hospital as a whole. For example, a composite frequency of 50 percent on Overall Perceptions of Safety would indicate that 50 percent of the responses reflected positive opinions regarding the overall safety in the unit or hospital.
To create an overall composite frequency on a safety culture dimension:
Step 1. Determine which items are related to the dimension in which you are interested, and which of those items are reverse worded (negatively worded). Items are grouped by dimension in Items and Dimensions, which also identifies the items that are reverse worded. There are three or four items per dimension.

Step 2. Count the number of positive responses to each item in the dimension. "Strongly Agree/Agree" or "Most of the time/Always" are positive responses for positively worded items. For reverse-worded items, disagreement indicates a positive response, so count the number of "Strongly Disagree/Disagree" or "Never/Rarely" responses.

Step 3. Count the total number of responses for the items in the dimension (this excludes missing data).

Step 4. Divide the number of positive responses to the items (answer from Step 2) by the total number of responses (answer from Step 3):
Percent positive responses for the dimension = (Number of positive responses to the items in the dimension) ÷ (Total number of responses to the items [positive, neutral, and negative] in the dimension) × 100

The resulting number is the percentage of positive responses for that particular dimension.
Here is an example of computing a composite frequency percentage for the Overall Perceptions of Safety dimension:
- There are 4 items in this dimension—2 are positively worded (A15 and A18) and 2 are negatively worded (A10 and A17). Keep in mind that disagreeing with the negatively worded items indicates a positive perception of safety.
- To count the total number of positive responses, complete Table 2:
The composite frequency percentage is calculated by dividing the total number of positive responses on all 4 questions (numerator) by the total number of responses to all 4 questions excluding missing responses (denominator). There were 500 positive responses, divided by 1,000 total responses, which results in a composite of 50 percent positive responses for Overall Perceptions of Safety.
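The four-step computation can be sketched in code. The item codes (A10, A15, A17, A18) and their reverse wording follow the worked example above; the data layout (a dict mapping each item to a list of 1-to-5 responses, with None for missing) is an assumption for illustration.

```python
# Composite percent-positive for a safety culture dimension, per Steps 1-4.
# Items A10 and A17 are reverse worded per the Overall Perceptions of Safety
# example; the dict-of-lists data layout is assumed for illustration.

REVERSE_WORDED = {"A10", "A17"}  # disagreement is the positive response

def composite_percent_positive(item_responses):
    """Return the percent of positive responses across the dimension's items."""
    positive = total = 0
    for item, responses in item_responses.items():
        for r in responses:
            if r is None:
                continue  # missing responses are excluded from the denominator
            total += 1
            if item in REVERSE_WORDED:
                positive += r in (1, 2)  # Strongly Disagree/Disagree
            else:
                positive += r in (4, 5)  # Strongly Agree/Agree
    return 100 * positive / total
```

With 500 positive responses out of 1,000 total, as in the example, this returns 50 percent.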
While there are many other ways to analyze survey data, we have presented only basic options here. If you are working with an outside vendor, the vendor may suggest additional analyses that you may find useful.
This survey development effort was sponsored by the Medical Errors Workgroup of the Quality Interagency Coordination Task Force (QuIC), and was funded by the Agency for Healthcare Research and Quality (AHRQ contract no. 290-96-0004). Westat conducted this work under a subcontract with BearingPoint. The authors wish to thank Matthew Mishkind, Ph.D., a former Westat staff member, who contributed to the development of the pilot instrument and conducted cognitive testing; Rose Windle for survey administration; and Theresa Famolaro for assisting with data cleaning and analysis. We are grateful to Dorothy B. "Vi" Naylor, MN, of the Georgia Hospital Association; and Tracy Scott, Ph.D., and Linda Schuessler, MS, of the Emory Center on Health Outcomes and Quality, Rollins School of Public Health, for sharing part of the data they collected in 10 Georgia hospitals using the pilot survey so we could include their data in this psychometric analysis. We also wish to thank a Risk Manager at a Veterans Health Administration (VHA) Hospital for administering the pilot survey to staff at a VHA hospital and sharing the data with Westat. In addition, we thank Eric Campbell, Ph.D., Barrett Kitch, M.D., M.P.H., and Minah Kim, Ph.D., of the Institute for Health Policy at Massachusetts General Hospital in Boston for their suggestions to improve the pilot survey and for recruiting four hospitals to participate in the pilot. Finally, we wish to thank our AHRQ project officer, James Battles, Ph.D., for his guidance and assistance.
Details on the development, pilot testing, and psychometric properties of the Hospital Survey on Patient Safety Culture are contained in the following technical report:
Sorra, JS and Nieva, VF. Psychometric analysis of the Hospital Survey on Patient Safety. (Prepared by Westat, under contract to BearingPoint, and delivered to the Agency for Healthcare Research and Quality [AHRQ], under Contract No. 290-96-0004.)