Chapter 5. Analyzing Data and Producing Reports
You will need to prepare the collected survey data for analysis. If you decide to do your own data entry, analysis, and report preparation, use this chapter to guide you through the various decisions and steps. If you decide to hire a vendor for any of these tasks, use this chapter as a guide to establish data preparation procedures.
If you plan to conduct a Web survey, you can minimize data cleaning by programming the Web survey to perform some of these steps automatically. Also, if you plan to administer the survey in more than one community pharmacy, you will need to report the results separately for each site.
Examine each returned survey for possible problems before the survey responses are entered into the dataset. We recommend that you exclude returned surveys that:
- Are completely blank or contain responses only to the background demographic questions, or
- Contain the exact same answer to every question in the survey (because a few survey items are negatively worded, an identical response to all items indicates the respondent probably did not pay careful attention, so the responses are probably not valid).
After you have identified which returned surveys will be included in the analysis data file, you can use the following formula to calculate the official response rate:
Response rate = (Number of returned surveys − incompletes and ineligibles) ÷ (Number of eligible staff who received a survey)
Note that the numerator may be smaller than in your last preliminary response rate calculation because, during your examination of all returned surveys, you may find that some of the returned surveys are incomplete or ineligible.
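The response rate formula above can be expressed as a small function. The function name and the example counts below are illustrative only, not part of the survey toolkit.

```python
def response_rate(returned: int, incomplete_or_ineligible: int, eligible_staff: int) -> float:
    """Official response rate: usable returned surveys divided by eligible staff surveyed."""
    usable = returned - incomplete_or_ineligible
    return usable / eligible_staff

# Hypothetical example: 62 returned surveys, 4 excluded as incomplete or
# ineligible, 80 eligible staff received a survey.
rate = response_rate(62, 4, 80)
print(f"{rate:.1%}")  # 72.5%
```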
In this section we describe several data file preparation tasks.
Edit Illegible, Mismarked, and Double-Marked Responses
Problematic responses may occur with paper surveys if some respondents write in an answer such as 3.5 when they have been instructed to mark only one numeric response. Or they may mark two answers for one item. Develop and document editing rules that address these problems and apply them consistently. Examples of such rules are to use the highest response when two responses are provided (e.g., a response with both 2 and 3 would convert to a 3) or to mark all of these types of inappropriate responses as missing.
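A documented editing rule like the one above can be applied consistently in code. This is a minimal sketch; the specific rules encoded here (take the highest of a double-marked pair, treat write-ins such as 3.5 as missing) are the examples from the text, and your own documented rules may differ.

```python
def edit_response(raw):
    """Apply example editing rules to one raw paper-survey answer.

    Illustrative rules:
    - a double-marked answer (e.g., "2,3") is converted to the highest response
    - an illegible or out-of-scale write-in (e.g., "3.5") becomes missing (None)
    """
    marks = [m.strip() for m in str(raw).split(",")]
    values = []
    for m in marks:
        if m.isdigit() and 1 <= int(m) <= 5:
            values.append(int(m))
        else:
            return None  # illegible or invalid write-in -> missing
    return max(values) if values else None

print(edit_response("2,3"))  # 3 (highest of the double-marked pair)
print(edit_response("3.5"))  # None (treated as missing)
```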
Create and Clean Data File
Paper survey data files. After your paper surveys have been edited as necessary, you can enter the data directly into an electronic file by using statistical software such as SAS®, SPSS®, or Microsoft Excel®, or you can create a text file that can be easily imported into a data analysis software program. The next step is to check the data file for possible data entry errors. To do so, produce frequencies of responses for each item and look for out-of-range values or values that are not valid responses.
Most items in the survey require a response between 1 and 5, with a 9 coded as Does Not Apply/Don't Know. Check through the data file to ensure that all responses are within the valid range (e.g., that a response of 7 has not been entered). If you find out-of-range values, return to the original survey and determine the response that should have been entered.
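The frequency check described above can be sketched in a few lines. The response values below are hypothetical; the valid-code set (1 through 5, plus 9 for Does Not Apply/Don't Know) comes from the survey's coding scheme.

```python
from collections import Counter

# Hypothetical entered responses for one item; valid codes are 1-5 and 9 (NA/DK).
VALID_CODES = {1, 2, 3, 4, 5, 9}
item_responses = [1, 2, 2, 3, 4, 4, 4, 4, 5, 5, 9, 9, 9, 7]  # the 7 is a data entry error

# Produce a frequency of responses for the item.
freqs = Counter(item_responses)
for code in sorted(freqs):
    print(code, freqs[code])

# Flag out-of-range values to check against the original paper surveys.
out_of_range = [v for v in item_responses if v not in VALID_CODES]
print("Out-of-range values to verify:", out_of_range)  # [7]
```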
Web surveys. Your pretesting should have ensured that responses would be coded and captured correctly in the data file, so the file should not contain invalid values. But you should verify that this is so by again checking that all responses are within the valid range.
Include Individual Identifiers on Your Data File
If you used individual identifiers on your surveys, after you close out data analysis and enter identification numbers in the electronic data file, destroy any information linking the identifiers to individual names. You want to eliminate the possibility of linking responses in the electronic file to individuals.
If you used paper surveys without individual identifiers on them, you will need to include some type of respondent identifier in the data file. Create an identification number for each completed paper survey and write it on the completed paper survey in addition to entering it into the electronic data file. This identifier can be as simple as numbering the returned surveys consecutively, beginning with the number 1. This number will enable you to check the electronic data file against a respondent's original answers if any values look like they were entered incorrectly.
If you used Web surveys without respondent identifiers, you can electronically generate and assign an identifier to each respondent in the data file.
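Consecutive numbering of returned surveys can be sketched as follows. The record fields shown are illustrative placeholders; a real data file would hold every survey item.

```python
# Assign a simple sequential identifier to each returned survey record,
# beginning with the number 1.
records = [
    {"position": "Pharmacist", "hours": "17-31"},
    {"position": "Technician", "hours": "32+"},
    {"position": "Technician", "hours": "1-16"},
]
for i, record in enumerate(records, start=1):
    record["respondent_id"] = i

print([r["respondent_id"] for r in records])  # [1, 2, 3]
```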
Respondents are given the opportunity to provide written comments at the end of the survey. Comments can be used to obtain direct quotes for feedback purposes, but they should first be carefully reviewed and deidentified to ensure they do not contain any information that could identify the person who wrote the comment or any individuals mentioned in it.
You may also want to analyze the comments and identify common themes (e.g., communication, staffing, teamwork). You can then assign code numbers to match comments to themes and tally the number of comments per theme. Open-ended comments on paper surveys may be coded either before or after the data have been entered electronically.
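Tallying coded comments by theme can be sketched as below. The theme codes and counts are hypothetical; your own coding scheme would replace them.

```python
from collections import Counter

# Hypothetical theme codes assigned to reviewed, deidentified comments:
# 1 = communication, 2 = staffing, 3 = teamwork
THEMES = {1: "communication", 2: "staffing", 3: "teamwork"}
coded_comments = [1, 2, 2, 3, 1, 2]

# Tally the number of comments per theme.
tally = Counter(coded_comments)
for code, count in sorted(tally.items()):
    print(f"{THEMES[code]}: {count} comment(s)")
```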
Feedback reports are the final step in a survey project and are critical for synthesizing the survey responses. Ideally, feedback should be provided broadly—to pharmacy management, chain and system patient safety officers and other senior managers, and pharmacy staff, either directly during meetings or through communication tools such as Email, Intranet sites, or newsletters.
The more broadly the results are disseminated, the more useful the information is likely to become and the more likely respondents will feel taking the survey was worthwhile. Feedback reports can be customized for each audience, from one- or two-page executive summaries to more complete reports that use statistics to draw conclusions or make comparisons.
Frequencies of Response
One of the simplest ways to present results is to calculate the frequency of response for each survey item. To make the results easier to view in a report, you can combine the two lowest response categories (e.g., Strongly Disagree/Disagree or Never/Rarely) and the two highest response categories (e.g., Strongly Agree/Agree or Most of the Time/Always). The midpoints of the scales can be reported as a separate category (Neither Agree nor Disagree or Sometimes).
Most survey items include a Does Not Apply/Don't Know response option. In addition, each survey item will probably have some missing data from respondents who simply did not answer the question. Does Not Apply/Don't Know and missing responses are excluded when displaying percentages of response to the survey items.
When using a statistical software program, you will recode the "9" response (Does Not Apply/Don't Know) as a missing value so that it is not included when displaying frequencies of response. An example of how to handle the Does Not Apply/Don't Know and missing responses when calculating survey results is shown in Table 2.
Table 2. Item A2. Staff Treat Each Other With Respect

| Response | Frequency (Number of Responses) | Response Percentage | Combined Percentages |
|---|---|---|---|
| 1 = Strongly Disagree | 1 | 10% | 30% Negative |
| 2 = Disagree | 2 | 20% | |
| 3 = Neither | 1 | 10% | 10% Neutral |
| 4 = Agree | 4 | 40% | 60% Positive |
| 5 = Strongly Agree | 2 | 20% | |
| 9 = Does Not Apply/Don't Know and Missing (did not answer) | 3 | – | – |
| Total Number of Responses | 13 | – | – |
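The Table 2 calculation can be sketched in a few lines; the response list below reproduces the table's counts, with `None` marking a respondent who skipped the question.

```python
# Responses to item A2 from Table 2; None marks a skipped question.
responses = [1, 2, 2, 3, 4, 4, 4, 4, 5, 5, 9, 9, None]

# Recode 9 (Does Not Apply/Don't Know) as missing, then drop missing values.
valid = [r for r in responses if r is not None and r != 9]
n = len(valid)  # 10 valid responses out of 13 total

negative = sum(1 for r in valid if r in (1, 2)) / n
neutral = sum(1 for r in valid if r == 3) / n
positive = sum(1 for r in valid if r in (4, 5)) / n

print(f"Negative: {negative:.0%}  Neutral: {neutral:.0%}  Positive: {positive:.0%}")
# Negative: 30%  Neutral: 10%  Positive: 60%
```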
Item and Composite Percent Positive Scores
It can be useful to calculate a composite score for each dimension. To calculate your pharmacy's composite score on a particular safety culture dimension, simply average the percent positive response on each item that is included in the composite. Here is an example of computing a composite score for the dimension Response to Mistakes:
Example: There are four items in this dimension—three are positively worded (items C1, C4, and C7) and one is negatively worded (item C8). Keep in mind that DISAGREEING with a negatively worded item indicates a POSITIVE response.
Calculate the percent positive response at the item level (see example in Table 3). In this example, there were four items with percent positive response scores of 71 percent, 75 percent, 70 percent, and 64 percent. Averaging these item-level percent positive scores [(71% + 75% + 70% + 64%) / 4] results in a percent positive composite score of 70 percent positive on Response to Mistakes.
Table 3. Example: Computing an Item and Composite Percent Positive Score

| Four items measuring Response to Mistakes | For positively worded items, # of "Strongly Agree" or "Agree" responses | For negatively worded items, # of "Strongly Disagree" or "Disagree" responses | Total # of responses to the item (excluding NA/DK & missing responses) | Percent positive response to item |
|---|---|---|---|---|
| Item C1 (positively worded): "Staff are treated fairly when they make mistakes" | – | – | – | 71% |
| Item C4 (positively worded): "This pharmacy helps staff learn from their mistakes rather than punishing them" | – | – | – | 75% |
| Item C7 (positively worded): "We look at staff actions and the way we do things to understand why mistakes happen in this pharmacy" | – | – | – | 70% |
| Item C8 (negatively worded): "Staff feel like their mistakes are held against them" | – | – | – | 64% |
| Average percent positive response across the 4 items | | | | 70% |

NA = Not applicable.
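The composite calculation from the example can be sketched as below, using the four item-level percent positive scores given in the text (71, 75, 70, and 64 percent).

```python
# Item-level percent positive scores for Response to Mistakes.
# For the negatively worded item, "percent positive" already means the share
# of Strongly Disagree/Disagree responses, so all four values average directly.
item_percent_positive = [71, 75, 70, 64]

composite = sum(item_percent_positive) / len(item_percent_positive)
print(f"Response to Mistakes composite: {composite:.0f}% positive")  # 70% positive
```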
Do Not Report Results If There Are Not Enough Respondents
To protect the confidentiality of individual respondents, do not provide any type of survey feedback report for a pharmacy if fewer than five respondents have answered the survey. Also, if a pharmacy has five overall respondents, but fewer than three respondents answered a particular survey item, do not report percentages of positive, neutral, or negative response for that item—simply indicate there were not enough data to report results for the item.
It is also important to present information about the background characteristics of all respondents: how long they have worked in the pharmacy, their staff positions, and their weekly hours. This information helps others better understand whose opinions are represented in the data. However, do not report results for a background category with fewer than three respondents, because it may be possible to determine which employees fall into that category. For example, if only two employees report working in the pharmacy 1 to 16 hours per week, combine them with the respondents who report working 17 to 31 hours per week, provided the combined group contains three or more respondents.
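The two reporting thresholds described above (at least five respondents overall, at least three answers to a given item) can be sketched as a simple check. The function name is illustrative.

```python
def can_report_item(total_respondents: int, item_respondents: int) -> bool:
    """Apply the reporting thresholds: at least 5 survey respondents overall,
    and at least 3 answers to the specific item."""
    return total_respondents >= 5 and item_respondents >= 3

print(can_report_item(5, 3))  # True
print(can_report_item(5, 2))  # False: too few item responses
print(can_report_item(4, 4))  # False: too few overall respondents
```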
For free technical assistance on the Community Pharmacy Survey on Patient Safety Culture regarding survey administration issues, data analysis and reporting, or action planning for improvement, you can Email SafetyCultureSurveys@westat.com.